Archive for Bayesian hypothesis testing

A Milano [not jatp]

Posted in Kids, Mountains, pictures, Statistics, Travel, University life, Wines on October 7, 2016 by xi'an

Today, I went to Milano for 13 hours to give a seminar at l'Università Bocconi, where I talked on Testing via mixtures (using the same slides as at ISBA last Spring). It was the first time I was in Milano (and hence at Bocconi) for more than a transfer to MCMski or to Pavia, and it was great to walk through the city. And of course to meet and share with many friends there. While I glimpsed the end of the sunrise on the Italian Alps (near Monte Rosa?!), I was too late on my way back for the sunset.

contemporary issues in hypothesis testing

Posted in Statistics on September 26, 2016 by xi'an

This week [at Warwick], among other things, I attended the CRiSM workshop on hypothesis testing, giving the same talk as at ISBA last June. There was a most interesting and unusual talk by Nick Chater (from Warwick) about the psychological aspects of hypothesis testing, namely about the unnatural features of an hypothesis in everyday life, i.e., how far this formalism stands from human psychological functioning. Or from what we know about it. And then my Warwick colleague Tom Nichols explained how his recent work on permutation tests for fMRIs, published in PNAS, which ran tests on real data for which the null should hold and found a high rate of false positives, got the medical imaging community all up in arms, due to over-simplified reports in the media questioning the validity of 15 years of research on fMRI and the related 40,000 papers! For instance, some of the headlines questioned the entire research output in the area. Or turned a software bug missing boundary effects into a major flaw. (See this podcast on Not So Standard Deviations for a thoughtful discussion of the issue.) One conclusion of this story is to be wary of assertions when submitting a hot story to journals with a substantial non-scientific readership! The afternoon talks were equally exciting, with Andrew explaining to us live from New York why he hates hypothesis testing and prefers model building. With the birthday model as an example. And David Draper gave an encompassing talk about the distinctions between inference and decision, proposing a Jaynes information criterion and illustrating it on Mendel's historical [and massaged!] pea dataset. The next morning, Jim Berger gave an overview of the frequentist properties of the Bayes factor, including in particular a novel [to me] upper bound on the Bayes factor associated with a p-value (Sellke, Bayarri and Berger, 2001),

B¹⁰(p) ≤ -1/(e p log p)

with the caveat that B¹⁰(p) is not testing the original hypothesis [problem] but a substitute one, where the null is that p is uniformly distributed, versus a non-parametric alternative under which p is more concentrated near zero. This reminded me of our PNAS paper on the impact of summary statistics upon Bayes factors. And of some forgotten reference studying Bayesian inference based solely on the p-value… It is too bad I had to rush back to Paris, as this made me miss the last talks of this fantastic workshop, centred on maybe the most important aspect of statistics!
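As a quick numerical aside on Berger's bound above, here is a minimal check (a Python sketch; the function name and the grid of p-values are mine):

```python
import math

def b10_upper_bound(p):
    """Sellke, Bayarri and Berger (2001) upper bound on the Bayes factor
    B10 associated with a p-value p; the bound holds for p < 1/e."""
    assert 0 < p < 1 / math.e, "the bound only applies for p < 1/e"
    return -1 / (math.e * p * math.log(p))

for p in (0.05, 0.01, 0.005):
    print(f"p = {p}: B10 <= {b10_upper_bound(p):.2f}")
# p = 0.05: B10 <= 2.46
# p = 0.01: B10 <= 7.99
# p = 0.005: B10 <= 13.89
```

In other words, a p-value of 0.05 cannot correspond to a Bayes factor larger than 2.46 against the uniform null, hardly overwhelming evidence.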

non-local priors for mixtures

Posted in Statistics, University life on September 15, 2016 by xi'an

[For some unknown reason, this commentary on the paper by Jairo Fúquene, Mark Steel, David Rossell —all colleagues at Warwick— on choosing mixture components by non-local priors remained untouched in my draft box…]

Choosing the number of components in a mixture of (e.g., Gaussian) distributions is a hard problem. It may actually be an altogether impossible problem, even when abstaining from moral judgements on mixtures. I do realise that the components can eventually be identified as the number of observations grows to infinity, as demonstrated for instance by Judith Rousseau and Kerrie Mengersen (2011). But for a finite and given number of observations, how much can we trust any conclusion about the number of components?! It seems to me that the criticism about the vacuity of point null hypotheses, namely the logical absurdity of trying to differentiate θ=0 from any other value of θ, applies to estimating or testing the number of components of a mixture. Doubly so, one might argue, since a very small or a very close component is indistinguishable from a non-existing one. For instance, Definition 2 is correct from a mathematical viewpoint, but it does not spell out the multiple contiguities between k and k′ component mixtures.

The paper starts with a comprehensive coverage of the state of the art… When using a Bayes factor to compare a k-component and an h-component mixture, the behaviour of the factor is quite different depending on which model is correct. Essentially, overfitted mixtures take much longer to detect than underfitted ones, which makes intuitive sense. And BIC should be corrected for overfitted mixtures by a canonical dimension λ, lying between the true and the (larger) assumed number of parameters, into

2 log m(y) = 2 log p(y|θ̂) − λ log n + O(log log n)

I would argue that this simply invalidates BIC in mixture settings, since the canonical dimension λ is unavailable (and DIC does not provide a useful substitute, as we illustrated a decade ago…). The criticism of Rousseau and Mengersen's (2011) over-fitted mixtures, namely that their approach shrinks less than a model averaging over several numbers of components, relates to minimaxity and hence sounds both overly technical and reverting to some frequentist approach to testing. Replacing testing with estimating sounds like the right idea. And I am also unconvinced that a faster rate of convergence of the posterior probability or of the Bayes factor is a relevant factor when conducting model choice.

As for non-local priors, the notion seems to rely on a specific topology for the parameter space, since a k-component mixture can approach a k′-component mixture (when k′<k) in a continuum of ways (even for a given parameterisation). This topology seems to be summarised by the penalty (distance?) d(θ) in the paper. Is there an intrinsic version of d(θ), given the weird parameter space? Like one derived from the Kullback-Leibler divergence between the models? The choice of how zero is approached clearly has an impact on how easily the “null” is detected, all the more because of the somewhat discontinuous nature of the parameter space. Incidentally, I find it curious that only the distance between means is penalised… The prior also assumes independence between component parameters and component weights, which I think is suboptimal in dealing with mixtures, maybe suboptimal in a poetic sense!, as we discussed in our reparameterisation paper. I am not sure either that the speed at which the distance converges to zero (in Theorem 1) helps me understand whether the mixture has too many components for the data's own good, when I can run a calibration experiment under both assumptions.
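To make the discussion more concrete, here is a toy rendering of a non-local prior for a two-component Gaussian mixture, with the penalty taken as the squared distance between component means scaled by g (a MOM-style choice of mine, not necessarily the paper's exact d(θ); the baseline prior is equally my own):

```python
import numpy as np
from scipy.stats import beta, norm

def local_prior_logpdf(mu, w, tau=10.0, a=1.0):
    """Baseline 'local' prior: independent N(0, tau^2) on the two component
    means and a Beta(a, a) prior on the first component weight."""
    return norm.logpdf(mu, 0.0, tau).sum() + beta.logpdf(w, a, a)

def nonlocal_prior_logpdf(mu, w, g=1.0):
    """Unnormalised non-local prior: local prior times a penalty
    d(theta) = (mu1 - mu2)^2 / g that vanishes when the components merge."""
    return np.log((mu[0] - mu[1]) ** 2 / g) + local_prior_logpdf(mu, w)

# well-separated components receive non-negligible prior mass...
print(nonlocal_prior_logpdf(np.array([0.0, 3.0]), 0.5))
# ...while near-merged ones, i.e. the k-1 component "null", are heavily penalised
print(nonlocal_prior_logpdf(np.array([0.0, 0.01]), 0.5))
```

The only point of the sketch is that the prior mass vanishes as the two components collapse into one, which is what makes the smaller model detectable.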

While I appreciate the derivation of a closed-form non-local prior, I wonder at the importance of the result. Is it because this leads to an easier derivation of the posterior probability? I do not see the connection in Section 3, except maybe that the importance weight indeed involves this normalising constant when considering several k's in parallel. Is there any convergence issue in the importance sampling solutions of (3.1) and (3.3), since the simulations are run under the local posterior? And while I appreciate the availability of an EM version for deriving the MAP, a fact I became aware of only recently, does it truly bring an improvement over picking the MCMC simulation with the highest completed posterior?
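If I unfold the importance sampling argument, the identity at play should be that, for a non-local prior π_N(θ) ∝ d(θ)π_L(θ), the marginal likelihoods satisfy m_N(y)/m_L(y) = E[d(θ)|y]/E[d(θ)], the numerator being an expectation under the local posterior and the denominator the normalising constant of the non-local prior. A hedged sketch of the resulting estimator (the function and the toy check are mine, not the paper's code):

```python
import numpy as np

def evidence_ratio(d, local_posterior_draws, local_prior_draws):
    """Estimate m_N(y) / m_L(y) for a non-local prior pi_N ∝ d(theta) pi_L,
    using only simulations run under the *local* posterior and prior."""
    numerator = np.mean([d(t) for t in local_posterior_draws])
    normalising = np.mean([d(t) for t in local_prior_draws])
    return numerator / normalising

# toy check: d(theta) = theta^2, pi_L = N(0,1), "posterior" = N(1, 0.1^2)
rng = np.random.default_rng(0)
ratio = evidence_ratio(lambda t: t ** 2,
                       rng.normal(1.0, 0.1, 100_000),
                       rng.normal(0.0, 1.0, 100_000))
print(ratio)  # exact value: (1 + 0.01) / 1 = 1.01
```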

The section on prior elicitation is obviously of central interest to me! It however seems restricted to the derivation of the scale factor g in the distance and of the parameter q in the Dirichlet prior on the weights, while the other parameters are allocated conjugate-like priors. I would obviously enjoy seeing how this approach proceeds with our non-informative prior(s). In this regard, the illustration section is nice, but one always wonders at the representative nature of the examples and at the possible interpretations of real datasets. For instance, when considering that Old Faithful is more of an HMM than a mixture.

reversible chain[saw] massacre

Posted in Books, pictures, R, Statistics, University life on May 16, 2016 by xi'an

A paper in Nature this week that uses reversible-jump MCMC, phylogenetic trees, and Bayes factors. And that looks at institutionalised or ritual murders in Austronesian cultures. How much better can it get?!

“by applying Bayesian phylogenetic methods (…) we find strong support for models in which human sacrifice stabilizes social stratification once stratification has arisen, and promotes a shift to strictly inherited class systems.” Joseph Watts et al.

The aim of the paper is to establish that societies with human sacrifices are more likely to have become stratified and stable than societies without such niceties. The hypothesis to be tested thus bears on the evolution towards more stratified societies rather than on the existence of a high level of stratification.

“The social control hypothesis predicts that human sacrifice (i) co-evolves with social stratification, (ii) increases the chance of a culture gaining social stratification, and (iii) reduces the chance of a culture losing social stratification once stratification has arisen.” Joseph Watts et al.

The methodological question is then how this can be tested when the societies under consideration are extinct and little is known about them. Grouping moderate and high stratification societies together against egalitarian societies, the authors tested independence of both traits versus dependence, with a resulting Bayes factor of 3.78 in favour of the latter. Other hypotheses of a similar flavour led to Bayes factors in the same range. Which is thus not overwhelming. Actually, given that the models are quite simplistic, I do not agree that those Bayes factors prove anything of the magnitude of such anthropological conjectures. Even if the presence or absence of human sacrifices is confirmed in all of the 93 societies, and even if the stratification of the cultures is free from uncertainties, the evolutionary part is rather involved, from my neophyte point of view: the evolutionary structure (reproduced above) is based on a sample of 4,200 trees resulting from a Bayesian analysis of Austronesian basic vocabulary items, followed by a call to the BayesTraits software to infer evolution patterns between stratification levels, concluding (with p-values!) at a phylogenetic structure of the data. BayesTraits was also instrumental in deriving MLEs for the various transition rates, “in order to inform our choice of priors” (!). BayesTraits further has an MCMC function, used by the authors “to test for correlated evolution between traits” and to derive the above Bayes factors. Using a stepping-stone method I am unaware of. And 10⁹ iterations (repeated 3 times to check consistency)… Reversible jump is apparently used to move between constrained and unconstrained models, leading to the pie charts at the inner nodes of the above picture. Again a by-product of BayesTraits. The trees on the left and the right are completely identical, the difference lying in the inference about stratification evolution (right) versus sacrifice evolution (left). While the overall hypothesis makes sense at my layman level (as a culture has to be stratified enough to impose sacrifices on its members), I am not convinced that this involved statistical analysis brings that strong a support. (But it would make a fantastic topic for an undergraduate or a Master's thesis!)
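For readers as unaware of stepping-stone sampling as I was, here is my understanding of the generic estimator (Xie et al., 2011), on a toy Gaussian model where the marginal likelihood is available in closed form; no claim whatsoever that this matches the BayesTraits implementation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 1)
y = rng.normal(0.5, 1.0, size=20)
n, S = len(y), y.sum()

def log_lik(theta):
    # log-likelihood of the full sample for each theta draw (vectorised)
    return norm.logpdf(y[:, None], theta, 1.0).sum(axis=0)

# stepping-stone: power posteriors at temperatures beta_0 = 0 < ... < beta_K = 1
K, m = 30, 2_000
betas = np.linspace(0, 1, K + 1) ** 3  # concentrated near 0, as usually advised
log_ml = 0.0
for k in range(1, K + 1):
    b = betas[k - 1]
    # the power posterior is conjugate here: theta | y, b ~ N(bS/(bn+1), 1/(bn+1))
    draws = rng.normal(b * S / (b * n + 1), np.sqrt(1 / (b * n + 1)), m)
    # each "stone": ratio of neighbouring normalising constants, via log-sum-exp
    log_r = log_lik(draws) * (betas[k] - betas[k - 1])
    log_ml += np.log(np.mean(np.exp(log_r - log_r.max()))) + log_r.max()

exact = -n/2 * np.log(2*np.pi) - 0.5*np.log(n+1) - 0.5*(np.sum(y**2) - S**2/(n+1))
print(f"stepping-stone: {log_ml:.3f}   exact: {exact:.3f}")
```

Each “stone” estimates the ratio of two neighbouring power-posterior normalising constants, and the product of the K ratios recovers the evidence.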

seminar in Harvard

Posted in Statistics, Travel on March 16, 2016 by xi'an

Next week, I will be in Harvard Monday and Tuesday, visiting friends in the Department of Statistics and giving a seminar. The slides for the talk will be quite similar to those of my talk in Bristol, a few weeks ago. Hopefully, there will not be too much overlap between both audiences! And hopefully I'll manage to get to my conclusion before all hell breaks loose (which is why I strategically set my conclusion in the early slides!)

contemporary issues in hypothesis testing

Posted in pictures, Statistics, Travel, University life on March 2, 2016 by xi'an

At Warwick U., CRiSM is sponsoring another workshop, this time on hypothesis testing, next Sept. 15 and 16 (just before the Glencoe race!). Registration and poster submission are already open. Most obviously, given my current interests, I am quite excited by the prospect of taking part in this workshop (and sorry that Andrew can only take part by teleconference!).

read paper [in Bristol]

Posted in Books, pictures, Statistics, Travel, University life on January 29, 2016 by xi'an

I went to give a seminar in Bristol last Friday and I chose to present the testing with mixture paper. As we are busy working on the revision, I was eagerly looking for comments and criticisms that could strengthen this new version. As it happened, the (Bristol) Bayesian Cake (Reading) Club had chosen our paper for discussion, two weeks in a row!, hence the title!, and I got invited to join the group the morning prior to the seminar! This was, of course, most enjoyable and relaxed, including a home-made cake!, but also quite helpful in assessing our arguments in the paper. One point of contention, or at least of discussion, was the common parametrisation between the components of the mixture. Although all parametrisations are equivalent from a single-component point of view, I can [almost] see why using a mixture with the same parameter value on all components may impose some unsuspected constraint on that parameter. Even when the parameter is the same moment for both components. This still sounds like a minor counterpoint, in that the weight should converge to either zero or one and hence eventually favour the posterior on the parameter corresponding to the “true” model.

Another point that was raised during the discussion is the behaviour of the method under misspecification, or in an M-open framework: when neither model is correct, does the weight still converge to the boundary associated with the closest model (as I believe) or does a convexity argument produce a non-zero weight as its limit (as hinted by one example in the paper)? I had thought very little about this and hence had just as little to answer, though this does not sound to me like the primary reason for conducting tests. Especially in a Bayesian framework. If one is uncertain about both models to be compared, one should have an alternative at the ready! Or use a non-parametric version, which is a direction we need to explore further before deciding it is coherent and convergent!

A third point of discussion was my argument that mixtures allow us to rely on the same parameter and hence the same prior, whether proper or not, while Bayes factors are less clearly open to this interpretation. This was not uniformly accepted!

Thinking afresh about this approach also led me to broaden my perspective on the use of the posterior distribution of the weight(s) α: while previously I had taken those weights mostly as a proxy to the posterior probabilities, to be calibrated by pseudo-data experiments, as for instance in Figure 9, I now perceive them primarily as the portion of the data in agreement with the corresponding model [or hypothesis], and more importantly as a way of staying away from a Neyman-Pearson-like decision. Or from error evaluation. Usually, when asked about the interpretation of the output, my answer is to compare the behaviour of the posterior on the weight(s) with the posteriors associated with samples from each model. Which does sound somewhat similar to posterior predictives if the samples are simulated from the associated predictives. But the issue was not raised during the visit to Bristol, which possibly reflects on how unfrequentist the audience [the Statistics group] was, as it apparently accepted with no further ado the use of a posterior distribution as a soft assessment of the comparative fits of the different models. If not necessarily agreeing on the need to conduct hypothesis testing (especially in the case of the Pima Indian dataset!).
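For concreteness, here is a bare-bones version of such a calibration experiment, with two fully specified competing models (the actual mixture-testing approach handles unknown parameters within each component; everything below, from the Gibbs sampler to the toy Gaussian models, is my own simplification):

```python
import numpy as np
from scipy.stats import norm

def alpha_posterior(x, f1, f2, a=0.5, iters=5_000, rng=None):
    """Gibbs sampler for the weight alpha in the encompassing mixture
    alpha*f1 + (1 - alpha)*f2, with an alpha ~ Beta(a, a) prior;
    f1, f2 are fully specified densities (no free parameters here)."""
    rng = rng or np.random.default_rng()
    n, alpha, out = len(x), 0.5, []
    p1, p2 = f1(x), f2(x)
    for _ in range(iters):
        # latent allocations given the current weight
        prob = alpha * p1 / (alpha * p1 + (1 - alpha) * p2)
        z = rng.random(n) < prob
        # conjugate update of the weight (no burn-in removed, for brevity)
        alpha = rng.beta(a + z.sum(), a + n - z.sum())
        out.append(alpha)
    return np.array(out)

rng = np.random.default_rng(1)
f1 = lambda x: norm.pdf(x, 0, 1)   # model 1: N(0, 1)
f2 = lambda x: norm.pdf(x, 2, 1)   # model 2: N(2, 1)
# calibration: posterior of alpha under pseudo-samples from each model
for label, data in [("data from f1", rng.normal(0, 1, 100)),
                    ("data from f2", rng.normal(2, 1, 100))]:
    post = alpha_posterior(data, f1, f2, rng=rng)
    print(label, f"-> posterior mean of alpha: {post.mean():.2f}")
```

Under data from f1 the posterior on α piles up near one, under data from f2 near zero, and it is the whole posterior, rather than a thresholded decision, that serves as the assessment.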