## Archive for O-Bayes 2013

## BAYSM ’14 in Wien, Sep. 18-19

Posted in Kids, Mountains, pictures, Statistics, Travel, University life, Wines with tags Austria, BAYSM, Duke, Großglockner, j-ISBA, Johannes Kepler Universität, Linz, O-Bayes 2013, Studlgrat, Wien, WU Wien on April 6, 2014 by xi'an

**I**t all started in Jim Berger’s basement, drinking with the utmost reverence an otherworldly Turley Zinfandel during the great party Ann and Jim Berger hosted for the O’Bayes’13 workshop at Duke. I then mentioned to Angela Bitto and Alexandra Posekany, from WU Wien, that I was going to be in Austria next September for a seminar in Linz, at the Johannes Kepler Universität, and, as it happened to take place the day before BAYSM ’14, the second conference of young Bayesian statisticians, run in connection with the j-ISBA section, they most kindly invited me to the meeting! As a senior Bayesian, most obviously! This is quite exciting, all the more because I have never visited Vienna before. (Unlike other parts of Austria, like the Großglockner, where I briefly met Peter Habeler. *Trivia: the cover picture of the ‘Og is actually taken from the Großglockner.*)

## posterior predictive p-values

Posted in Books, Statistics, Travel, University life with tags ABC, Bayesian data analysis, calibration, Duke University, exploratory data analysis, goodness of fit, model checking, O-Bayes 2013, p-values, posterior predictive on February 4, 2014 by xi'an

*Bayesian Data Analysis* advocates in Chapter 6 using posterior predictive checks as a way of evaluating the fit of a potential model to the observed data. There is a no-nonsense feeling to it:

“If the model fits, then replicated data generated under the model should look similar to observed data. To put it another way, the observed data should look plausible under the posterior predictive distribution.”

**A**nd it aims at providing an answer to the frustrating *(frustrating to me, at least)* issue of Bayesian goodness-of-fit tests. There are however issues with the implementation, from deciding on which aspect of the data or of the model is to be examined, to the “use of the data twice” sin. Obviously, this is an exploratory tool with little decisional backup and it should be understood as a qualitative rather than quantitative assessment. As mentioned in my tutorial on Sunday (I wrote this post in Duke during O’Bayes 2013), it reminded me of Ratmann et al.’s ABC_{μ} in that they both give reference distributions against which to calibrate the observed data. Most likely with a multidimensional representation. And the “use of the data twice” can be argued for or against, once a data-dependent loss function is built.
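To make the idea concrete, here is a minimal sketch of such a check in a toy conjugate setting (the model, numbers, and choice of discrepancy are all hypothetical, not taken from *Bayesian Data Analysis*): replicated data sets are simulated from the posterior predictive and a discrepancy measure on the replicates is compared with its observed value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: x_j ~ N(theta, 1) with a conjugate prior theta ~ N(0, tau2)
tau2 = 10.0
x = rng.normal(1.5, 1.0, size=50)  # "observed" data
n = len(x)

# Conjugate posterior: theta | x ~ N(post_mean, post_var)
post_var = 1.0 / (n + 1.0 / tau2)
post_mean = post_var * x.sum()

# One possible discrepancy: the largest absolute observation
def T(sample):
    return np.max(np.abs(sample))

# Posterior predictive p-value: how often replicated data look at least
# as extreme as the observed data under the fitted model
reps = 2000
count = 0
for _ in range(reps):
    theta = rng.normal(post_mean, np.sqrt(post_var))
    x_rep = rng.normal(theta, 1.0, size=n)
    count += T(x_rep) >= T(x)
ppp = count / reps
```

A ppp near 0 or 1 flags the chosen aspect of the data as atypical under the model; the "data used twice" issue is visible in that the same x enters both the posterior and the observed discrepancy.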

“One might worry about interpreting the significance levels of multiple tests or of tests chosen by inspection of the data (…) We do not make [a multiple test] adjustment, because we use predictive checks to see how particular aspects of the data would be expected to appear in replications. If we examine several test variables, we would not be surprised for some of them not to be fitted by the model, but if we are planning to apply the model, we might be interested in those aspects of the data that do not appear typical.”

**T**he natural objection that having a multivariate measure of discrepancy runs into multiple testing is answered within the book by the reply that the idea is not to run formal tests. I still wonder how one should behave when faced with a vector of posterior predictive p-values (ppp’s).

**T**he above picture is based on a normal mean/normal prior experiment I ran, where the ratio of prior to sampling variance increases from 100 to 10⁴. The ppp is based on the Bayes factor against a zero mean as a discrepancy. It thus grows away from zero very quickly and then levels off around 0.5, reaching values close to 1 only for very large values of x (i.e., never in practice). I find the graph interesting because, if instead of the Bayes factor I use the marginal (the numerator of the Bayes factor), then the picture is the exact opposite. Which, I presume, does not make a difference for *Bayesian Data Analysis*, since both extremes are considered equally toxic… Still, still, still, we are in the same quandary as when using any kind of p-value: what is extreme? what is significant? Do we again have to select the dreaded 0.05?! To see how things were going, I then simulated the behaviour of the ppp under the “true” model for the pair (θ,x). And ended up with the histograms below:

which show that, under the true model, the ppp does concentrate around 0.5 (surprisingly, the range of the ppp’s hardly exceeds 0.5 and I have no explanation for this). So, while the ppp does not necessarily single out a wrong model, discrepancies may be spotted by values drifting away from 0.5…
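For the record, here is a closed-form sketch of this kind of experiment, under the simplifying assumption of a single unit-variance observation (my own reconstruction, not necessarily the exact setting behind the pictures): since the Bayes factor in favour of a zero mean is a decreasing function of |x|, the corresponding ppp reduces to the posterior predictive probability that |x_rep| ≤ |x|.

```python
import math
import random

def Phi(z):
    # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ppp(x, tau2):
    # Single observation x ~ N(theta, 1), prior theta ~ N(0, tau2).
    # With the Bayes factor in favour of theta = 0 as discrepancy
    # (decreasing in |x|), the ppp is P(|x_rep| <= |x|) under the
    # posterior predictive x_rep ~ N(tau2*|x|/(1+tau2), 1 + tau2/(1+tau2)).
    a = abs(x)
    mu = tau2 * a / (1.0 + tau2)
    s = math.sqrt(1.0 + tau2 / (1.0 + tau2))
    return Phi((a - mu) / s) - Phi((-a - mu) / s)

# Behaviour under the "true" joint: theta ~ N(0, tau2), x | theta ~ N(theta, 1)
random.seed(0)
tau2 = 1e4
sims = [ppp(random.gauss(random.gauss(0.0, math.sqrt(tau2)), 1.0), tau2)
        for _ in range(5000)]
```

In this reconstruction the ppp is 0 at x = 0, climbs quickly, then levels off near 0.5, and the simulated values under the joint model concentrate around 0.5 while hardly exceeding it, in line with the behaviour described above.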

*“The p-value is to the u-value as the posterior interval is to the confidence interval. Just as posterior intervals are not, in general, classical confidence intervals, Bayesian p-values are not generally u-values.”*

**N**ow, *Bayesian Data Analysis* also has this warning about ppp’s not being uniform under the true model (hence not *u*-values), which is just as well considering the above example, but I cannot help wondering if the authors had intended a sort of subliminal message that they were not that far from uniform. And this brings back to the forefront the difficult interpretation of the numerical value of a ppp. That is, of its calibration. For evaluation of the fit of a model. Or for decision-making…

## parallel MCMC via Weierstrass sampler (a reply by Xiangyu Wang)

Posted in Books, Statistics, University life with tags big data, Chamonix, Duke University, kernel density estimator, large dimensions, likelihood-free methods, MCMC, O-Bayes 2013, parallel processing, ski, snow, untractable normalizing constant, Xiangyu Wang on January 3, 2014 by xi'an

*Almost immediately after I published my comments on his paper with David Dunson, Xiangyu Wang sent a long comment that I think worth a post on its own (especially, given that I am now busy skiing and enjoying Chamonix!). So here it is:*

**T**hanks for the thoughtful comments. I did not realize that Neiswanger et al. had also proposed a similar trick to ours to avoid the combinatorial problem in the rejection sampler. Thank you for pointing that out.

**F**or criticism 3, on tail degeneration, we did not mean to take aim at non-parametric estimation issues, but rather at the problem caused by using the product equation. When two densities are multiplied together, the accuracy of the product mainly depends on the tails of the two densities (the overlapping area); if there are more than two densities, the impact is even more significant. As a result, it may be unwise to use the product equation directly, as the most distant sub-posteriors could potentially be very far from each other, with most of the sub-posterior draws falling outside the overlapping area. (The full Gibbs sampler formulated in our paper does not have this issue: as shown in equation 5, there is a common part multiplied into each sub-posterior, which brings them close together.)
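As a quick numerical illustration of this tail issue (with hypothetical sub-posteriors, not those of the paper): two Gaussian sub-posteriors centred at ±4 put almost none of their draws in the region where their product density concentrates, so any estimate of the product built from those draws rests on poorly-estimated tails.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical sub-posteriors with distant locations: their product
# density is proportional to N(0, 1/2), which lives where BOTH samples
# are sparse.
draws1 = rng.normal(-4.0, 1.0, size=10_000)
draws2 = rng.normal(+4.0, 1.0, size=10_000)

# Interval carrying essentially all of the product density N(0, 1/2)
lo, hi = -2.0, 2.0
frac1 = np.mean((lo < draws1) & (draws1 < hi))
frac2 = np.mean((lo < draws2) & (draws2 < hi))
# Each sub-posterior sample puts only about 2% of its draws in that
# interval, i.e. in its own far tail.
```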

**P**oint 4 stated the problem caused by averaging. The approximated density following Neiswanger et al. (2013) is a mixture of Gaussians whose component means are averages of the sub-posterior draws. Therefore, if the sub-posteriors stick to different modes (assuming the true posterior is multi-modal), the approximated density is likely to mix up the modes and produce fake modes (e.g., averages of the true modes; we provide an example in Simulation 3).
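A toy illustration of this averaging problem (hypothetical numbers, not Simulation 3 itself): if two sub-posterior samples each stick to one of two modes at ±2, averaging paired draws manufactures a spurious mode near zero, where the bimodal target has essentially no mass.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical bimodal target with modes near -2 and +2: suppose each of
# two sub-posterior samplers got stuck in a different mode.
sub1 = rng.normal(-2.0, 0.3, size=5_000)
sub2 = rng.normal(+2.0, 0.3, size=5_000)

# Simple averaging of paired draws piles mass near 0, between the true
# modes: a fake mode.
combined = (sub1 + sub2) / 2.0
near_zero = np.mean(np.abs(combined) < 0.5)
near_modes = np.mean(np.abs(np.abs(combined) - 2.0) < 0.5)
```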

**S**orry for the vague description of the refining method (4.2). The idea is rather plain: we start from an initial approximation to θ and then do a one-step Gibbs update to obtain a new θ. We call this procedure ‘refining’, as we believe such a process brings the original approximation closer to the true posterior distribution.

**T**he first (4.1) and second (4.2) algorithms may indeed seem odd to call ‘parallel’, since they are both modified from the Gibbs sampler described in (4) and (5). The reason we propose these two algorithms is to overcome two problems: the first is the curse of dimensionality, and the second is the case when the subset inferences are not very accurate (small subset effective sample size), which might be a common scenario for logistic regression (with many parameters) even with a huge data set. First, algorithms (4.1) and (4.2) both start from some initial approximation and attempt to improve it into a better one, thus avoiding the dimensionality issue. Second, in our Simulation 1, we attempt to pull down the performance of simple averaging by worsening the sub-posterior performance (we allocate a smaller amount of data to each subset), and the non-parametric method also fails to approximate the combined density well. However, algorithms 4.1 and 4.2 still work in this case.

**I** have some problems with the logistic regression example provided in Neiswanger et al. (2013). As shown in that paper, under the authors’ setting (not fully specified in the paper), although the non-parametric method is better than simple averaging, the approximation error of simple averaging is small enough for practical use (I also have some problems with their error evaluation method), so why should we still bother to use a much more complicated method?

**A**ctually, I am adding a new algorithm to the Weierstrass rejection sampler, which will render it completely free from the curse of dimensionality in p. The new scheme is applicable to the non-parametric method in Neiswanger et al. (2013) as well. It should appear soon in the second version of the draft.

## parallel MCMC via Weierstrass sampler

Posted in Books, Statistics, University life with tags big data, Duke University, kernel density estimator, large dimensions, likelihood-free methods, MCMC, O-Bayes 2013, parallel processing, untractable normalizing constant on January 2, 2014 by xi'an

**D**uring O’Bayes 2013, Xiangyu Wang and David Dunson arXived a paper (with the above title) that David then presented on the 19th. The setting is quite similar to the recently discussed embarrassingly parallel paper of Neiswanger et al., in that Xiangyu and David start from the same product representation of the target (posterior). Namely,

π(θ | x) ∝ π₁(θ | x₁) ⋯ πₘ(θ | xₘ), with sub-posteriors πᵢ(θ | xᵢ) ∝ π(θ)^1/m p(xᵢ | θ).

However, they criticise the choice made by Neiswanger et al. to use MCMC approximations to each component of the product, for the following reasons:

- Curse of dimensionality in the number of parameters p
- Curse of dimensionality in the number of subsets m
- Tail degeneration
- Support inconsistency and mode mis-specification
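In the conjugate normal case the product representation can be checked exactly, since each sub-posterior is Gaussian and products of Gaussian densities are available in closed form (a sketch under a hypothetical unit-variance model, not an example from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical conjugate check: x_j ~ N(theta, 1), prior theta ~ N(0, tau2)
tau2, n, m = 4.0, 120, 6
x = rng.normal(0.7, 1.0, size=n)

# Full posterior (Gaussian): precision n + 1/tau2, mean = sum(x)/precision
full_prec = n + 1.0 / tau2
full_mean = x.sum() / full_prec

# Sub-posteriors on m subsets, each taking a 1/m fractional power of the
# prior, i.e. N(0, m*tau2); their product recovers the full posterior.
subsets = np.array_split(x, m)
sub_prec = np.array([len(s) + 1.0 / (m * tau2) for s in subsets])
sub_mean = np.array([s.sum() for s in subsets]) / sub_prec

# Product of Gaussians: precisions add, means are precision-weighted
prod_prec = sub_prec.sum()
prod_mean = (sub_prec * sub_mean).sum() / prod_prec
```

The difficulty discussed in the post is that, outside such conjugate settings, each πᵢ is only available through MCMC draws, and the product must be approximated from those draws.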

## O’Bayes 2013 [#3]

Posted in pictures, Running, Statistics, Travel, University life with tags Duke University, Durham, hyper-g-prior, ISBA, median density, O-Bayes 2013, parallelisation, reference priors on December 23, 2013 by xi'an

**A** final day for this O’Bayes 2013 conference, where I missed the final session for travelling reasons. Several talks had highly attractive features (for me): David Dunson’s, on his recently arXived paper on parallel MCMC, which provides an alternative to the embarrassingly parallel algorithm I discussed a few weeks ago and will be discussed further in a future post; Marty Wells’, hindered by poor weather and delivered by phone, on L₁ shrinkage estimators (a bit of a paradox since, as discussed by Yuzo Maruyama, most MAP estimators cannot be minimax and, more broadly, cannot be expressed as resolutions of loss minimisation); Malay Ghosh’s, revisiting g-priors from an almost frequentist viewpoint; and Gonzalo Garcia-Donato’s, presenting criteria for objective Bayesian model choice in a vision that was clearly the closest to my own perspective on the topic. Overall, reflecting upon the diversity and high quality of the talks at this O’Bayes meeting, and also as the incoming chair-elect of the corresponding section of ISBA, I think what emerges most significantly from those talks is an ongoing pondering on the nature of (objective Bayesian) testing, not only in the works extending g-priors in various directions, but also in the whole debate between Bayes factors and information criteria, and between model averaging and model selection. During the discussion of Gonzalo’s talk, David Draper objected to the search for an automated approach to the comparison of models, but I strongly lean towards Gonzalo’s perspective, as we need to provide a reference solution able to tackle less formal and more realistic problems. I do hope to see more of those realistic problems tackled at O’Bayes 2015 (whose location is not yet settled).
In the meanwhile, a strong thank you! to the local organising committee and most specifically to Jim Berger!

## O’Bayes 2013 [#2]

Posted in pictures, Running, Statistics, Travel, University life with tags copulas, Duke University, Durham, ISBA, O-Bayes 2013, physics, pseudo-likelihood, reference priors on December 19, 2013 by xi'an

**A**nother day at O’Bayes 2013, recovering from the flow of reminiscences of yesterday. Talks from Guido Consonni on running reference model selection in complex designs, from Dimitris Fouskakis on integrating out imaginary observations in a *g*-prior, which seems to bring more sparsity than the hyper-*g* prior in variable selection, from François Perron on Bayesian inference for copulas, with an innovative parametrisation and links with Pólya trees, from Nancy Reid and Laura Ventura on likelihood approximations and pseudo-likelihoods, offering a wide range of solutions for ABC (or BC) references (with the lingering question of the validation of the approximation for a given sample, as discussed by Brunero Liseo), and from two physicists to conclude the day! Tomorrow is the final day and I hope I can go running one last time in the woods before the flights back to Paris.

## O’Bayes 2013

Posted in Statistics, Travel, University life, Wines with tags capture-recapture, dominating measure, Duke University, Hellinger loss, ISBA, Kullback-Leibler divergence, O-Bayes 2013, posters on December 17, 2013 by xi'an

**I**t was quite sad that we had to start the O-Bayes 2013 conference with the news that Dennis Lindley had passed away, but the meeting is the best opportunity to share memories and stress his impact on the field. This is what happened yesterday in and around the talks. The conference(s) is/are very well-attended, with 200-some participants in total, and many young researchers. As in the earlier meetings, the talks are a mixture of “classical” objective Bayes and non-parametric Bayes (my own feeling being of a very fuzzy boundary between the two perspectives, both relying to some extent on asymptotics for validation). I enjoyed in particular Jayanta Ghosh’s talk on the construction of divergence measures for reference priors that would necessarily lead to the Jeffreys prior. With the side open problem of determining whether there are only three functional distances (Hellinger, Kullback-Leibler and L₁) that are independent of the dominating measure. (Upon reflection, I am not sure about this question and whether I got it correctly, as one can always use the prior π as the dominating measure and look at divergences of the form

which seems to open up the range of possible d’s…) However, and in the great tradition of Bayesian meetings, the best part of the day was the poster session, from enjoying a (local) beer with old friends to discussing points and details. (It is just unfortunate that by 8:15 I was simply sleeping on my feet and could not complete my round of O’Bayes posters, not to mention the EFaB posters, which sounded equally attractive… I even missed discussing around a capture-recapture poster!)