Archive for STAN

Ulam’s grave [STAN post]

Posted in Books, Kids, pictures, Travel, University life on July 27, 2014 by xi'an

Since Stan Ulam is buried in the Cimetière du Montparnasse, next to CREST, Andrew and I paid his grave a visit on a sunny July afternoon. Among elaborate funerary constructions, the Aron family tomb is sober and hidden behind funeral houses. It came as a surprise to me to discover that Ulam had links with France, to the point of him and his wife being buried in his wife's family vault. Since we were there, we took a short stroll to see Henri Poincaré's tomb in the Poincaré-Boutroux vault (missing Henri's cousin, the French president Raymond Poincaré). Another surprise was that someone had left a folder with the cover of 17 Equations That Changed the World on top of the tomb, even though the book only covers Poincaré's work on the three-body problem as part of its treatment of Newton's formula. There were other mathematicians buried in this cemetery, but this was enough necrophiliac tourism for one day.


Bayesian Data Analysis [BDA3 - part #2]

Posted in Books, Kids, R, Statistics, University life on March 31, 2014 by xi'an

Here is the second part of my review of Gelman et al.'s Bayesian Data Analysis (third edition):

“When an iterative simulation algorithm is “tuned” (…) the iterations will not in general converge to the target distribution.” (p.297)

Part III covers advanced computation, obviously including MCMC but also model approximations like variational Bayes and expectation propagation (EP), with even a few words on ABC. The novelties in this part are centred on Stan, the language Andrew is developing around Hamiltonian Monte Carlo techniques, a sort of BUGS of the '10s! (And of course on Hamiltonian Monte Carlo techniques themselves.) A few (nit)pickings: the book advises importance resampling without replacement (p.266), which makes some sense when using a poor importance function but ruins the fundamentals of importance sampling. Plus, no trace of infinite-variance importance sampling? Of harmonic means and their dangers? In the Metropolis-Hastings algorithm, the proposal is called the jumping rule and denoted by Jt, which, besides giving the impression of a Jacobian, seems to allow for time-varying proposals and hence time-inhomogeneous Markov chains, whose convergence properties are much hairier. (The warning comes much later, as exemplified in the above quote.) Moving from “burn-in” to “warm-up” to describe the beginning of an MCMC simulation. Being somewhat '90s about convergence diagnoses (as shown by the references in Section 11.7), although the book also proposes new diagnoses and relies much more on effective sample sizes. Particle filters are dispatched in barely half a page, maybe because Stan does not handle particle filters. A lack of intuition about Hamiltonian Monte Carlo algorithms, as the book plunges immediately into a two-page pseudo-code description, still using physics vocabulary that puts me (and maybe only me) off. Although I appreciated the advice to check analytical gradients against their numerical counterparts.
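That advice is easy to follow in practice. Here is a minimal R sketch of such a check, with a made-up target (a standard normal log-density) standing in for an actual log-posterior:

```r
## Compare an analytical gradient of a log-posterior with a central
## finite difference; the target below is only a stand-in illustration.
log_post      <- function(theta) -0.5 * sum(theta^2)
grad_log_post <- function(theta) -theta   # analytical gradient

num_grad <- function(f, theta, eps = 1e-6) {
  sapply(seq_along(theta), function(i) {
    e <- replace(rep(0, length(theta)), i, eps)
    (f(theta + e) - f(theta - e)) / (2 * eps)
  })
}

theta <- rnorm(3)
## a large discrepancy here would signal a coding error in the gradient
max(abs(grad_log_post(theta) - num_grad(log_post, theta)))
```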

“In principle there is no limit to the number of levels of variation that can be handled in this way. Bayesian methods provide ready guidance in handling the estimation of the unknown parameters.” (p.381)

I also enjoyed reading the part about modes that stand at the boundary of the parameter space (Section 13.2), even though I do not think modes are great summaries in Bayesian frameworks, and even though I do not see how picking the prior to avoid modes at the boundary prevents the data from impacting the prior, in the end. The variational Bayes section (13.7) is equally enjoyable, with a properly spelled-out illustration, an unusual feature for Bayesian textbooks. (Except that sampling without replacement is back!) Same comments for the expectation propagation (EP) section (13.8), which covers brand-new notions. (Will they stand the test of time?!)

“Geometrically, if β-space is thought of as a room, the model implied by classical model selection claims that the true β has certain prior probabilities of being in the room, on the floor, on the walls, in the edge of the room, or in a corner.” (p.368)

Part IV is a series of five chapters about regression(s). This is somewhat of a classic; nonetheless, Chapter 14 surprised me with an elaborate election example that dabbles in advanced topics like causality and counterfactuals. I did not spot any reference to the g-prior or to its intuitive justifications, and the chapter mentions the lasso as a regularisation technique, but without any proper definition of this “popular non-Bayesian form of regularisation” (p.368). In plain terms: not a single equation! Additional novelty may lie in the numerical prior information about the correlations. What is rather crucially (cruelly?) missing, though, is a clearer processing of variable selection in regression models. I know Andrew opposes any notion of a coefficient being exactly equal to zero, as ridiculed through the above quote, but the book does not reject model selection, so why not in this context?! Chapter 15, on hierarchical extensions, stresses the link with exchangeability once again, with another neat election example justifying the progressive complexification of the model and the cranks and toggles of model building. (I am not certain the reparameterisation advice on p.394 is easily digested by a newcomer.) The chapters on robustness (Chap. 17) and missing data (Chap. 18) sound slightly less convincing to me, especially the one about robustness, as I never got how to make robustness agree with my Bayesian perspective. The book states that “we do not have to abandon Bayesian principles to handle outliers” (p.436), but I would object that the Bayesian paradigm compels us to define an alternative model for those outliers and the way they are produced. One can always resort to a drudging exploration of which subsample of the dataset is at odds with the model, but this may be unrealistic for large datasets and further tells us nothing about how to handle those datapoints. The missing-data chapter is certainly relevant to such a comprehensive textbook, and I liked the survey illustration where the missing data was in fact made of missing questions. However, I felt the multiple imputation part was not well presented, fearing readers would not understand how to handle it…

“You can use MCMC, normal approximation, variational Bayes, expectation propagation, Stan, or any other method. But your fit must be Bayesian.” (p.517)

Part V concentrates the most advanced material. Chapter 19 is mostly an illustration of a few complex models, slightly superfluous in my opinion. Chapter 20 is a very short introduction to functional bases, including a basis-selection section (20.2) that implements the “zero coefficient” variable-selection principle refuted in the regression chapter(s), and it does not go beyond splines (what about wavelets?). Chapter 21 is a (quick) coverage of Gaussian processes, with the motivating birthdate example (and two mixture datasets I used eons ago…). Chapter 22 is a more (too much?) detailed study of finite mixture models, with no coverage of reversible-jump MCMC, and Chapter 23 an entry on Bayesian non-parametrics through Dirichlet processes.

“In practice, for well separated components, it is common to remain stuck in one labelling across all the samples that are collected. One could argue that the Gibbs sampler has failed in such a case.” (p.535)

To get back to mixtures, I liked the above quote about the label-switching issue, as I was “one” of those who argued that the Gibbs sampler fails to converge! The corresponding section seems to favour providing a density estimate for mixture models rather than component-wise evaluations, but it nonetheless mentions the relabelling-by-permutation approach (albeit missing our 2000 JASA paper). The section about inferring the unknown number of components suggests running a regular Gibbs sampler on a model with an upper bound on the number of components and then checking for empty components, an idea I (briefly) considered in the mid-1990s, before the advent of RJMCMC. Of course, the prior on the components matters, and the book suggests a Dirichlet prior whose parameters keep a fixed sum, such as 1, whatever the number of components.
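For concreteness, here is a hypothetical R sketch of that device on a toy univariate normal mixture with known unit variances; the data, the bound Kmax, and the priors are all made up for illustration, and label switching is ignored since only the number of occupied components is monitored:

```r
## Plain Gibbs sampler on a mixture with an upper bound Kmax on the number
## of components, monitoring how many components are occupied per sweep.
## Weights ~ Dirichlet(alpha/Kmax, ...) so the parameter sum stays fixed
## (here 1) whatever Kmax; means ~ N(0, 10^2); unit variances assumed known.
set.seed(1)
y <- c(rnorm(100, -2), rnorm(100, 3))   # two well-separated groups
Kmax <- 5; alpha <- 1
w  <- rep(1 / Kmax, Kmax)
mu <- rnorm(Kmax, 0, 10)
occupied <- integer(1000)

for (t in 1:1000) {
  ## allocations, conditional on weights and means
  p  <- sapply(1:Kmax, function(k) w[k] * dnorm(y, mu[k], 1))
  z  <- apply(p, 1, function(q) sample(Kmax, 1, prob = q))
  nk <- tabulate(z, Kmax)
  ## weights: Dirichlet posterior, simulated via gamma draws
  g <- rgamma(Kmax, alpha / Kmax + nk); w <- g / sum(g)
  ## means: conjugate normal posterior under the N(0, 10^2) prior
  for (k in 1:Kmax) {
    v     <- 1 / (nk[k] + 1 / 100)
    mu[k] <- rnorm(1, v * sum(y[z == k]), sqrt(v))
  }
  occupied[t] <- sum(nk > 0)
}
table(occupied)   # how often each number of non-empty components occurs
```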

“14. Objectivity and subjectivity: discuss the statement ‘People tend to believe results that support their preconceptions and disbelieve results that surprise them. Bayesian methods tend to encourage this undisciplined mode of thinking.’” (p.100)

Obviously, this being a third edition begets the question, what's up, doc?!, i.e., what's new when compared with the second edition? Quite a lot, even though I am not enough of a Gelmanian exegete to produce a comparison table. Well, for a starter, David Dunson and Aki Vehtari joined the authorship, mostly contributing to the advanced sections on non-parametrics, Gaussian processes, and EP algorithms. Then there are the Hamiltonian Monte Carlo methodology and Stan, of course, which is now central to Andrew's interests; the book includes a short appendix on running computations in R and in Stan. Further novelties were mentioned above, like the vision of weakly informative priors taking over from noninformative priors, but I think this edition of Bayesian Data Analysis puts more stress on clever and critical model construction and on the fact that it can be done in a Bayesian manner. Hence the insistence on predictive and cross-validation tools. The book may be deemed somewhat short on exercises, providing between 3 and 20 mostly well-developed problems per chapter, often associated with datasets, rather than the less exciting counter-example above. Even though Andrew disagrees, and his students at ENSAE this year certainly did not complain, I personally feel a total of 220 exercises is not enough for instructors and self-study readers. (At least, this reduces the number of email requests for solutions! Especially when 50 of those are solved on the book website.) But this aspect is a minor quip: overall, this is truly the reference book for a graduate course on Bayesian statistics, and not only on Bayesian data analysis.

Bayesian Data Analysis [BDA3]

Posted in Books, Kids, R, Statistics, University life on March 28, 2014 by xi'an

Andrew Gelman and his coauthors, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Don Rubin, have now published the latest edition of their book Bayesian Data Analysis. David and Aki are newcomers to the authors' list, contributing an extended section on non-linear and non-parametric models. I have been asked by Sam Behseta to write a review of this new edition for JASA (since Sam is now the JASA book review editor). After wondering about my ability to produce an objective review (on the one hand, this is The Competition to Bayesian Essentials!; on the other hand, Andrew is a good friend spending the year with me in Paris), I decided to jump for it and write a most subjective review, with the help of Clara Grazian, who was Andrew's teaching assistant this year in Paris, and maybe of some of my Master's students who took Andrew's course. The second edition was reviewed in the September 2004 issue of JASA, and we now stand ten years later with an even more impressive textbook. Which is truly what Bayesian data analysis should be.

This edition has five parts, Fundamentals of Bayesian Inference, Fundamentals of Bayesian Data Analysis, Advanced Computation, Regression Models, and Non-linear and Non-parametric Models, plus three appendices. For a total of xiv+662 pages. And a weight of 2.9 pounds (1395g on my kitchen scale!) that makes it hard to carry around in the metro… I took it to Warwick (and then Nottingham and Oxford and back to Paris) instead.

“We could avoid the mathematical effort of checking the integrability of the posterior density (…) The result would clearly show the posterior contour drifting off toward infinity.” (p.111)

While I cannot go into a detailed reading of those 662 pages (!), I want to highlight a few gems. (I already wrote a detailed and critical analysis of Chapter 6, on model checking, in that post.) The very first chapter provides all the necessary items for understanding Bayesian Data Analysis without getting bogged down in propaganda or pseudo-philosophy. Then the other chapters of the first part unroll in a smooth way, cruising on the B highway… With the unique feature of introducing weakly informative priors (Sections 2.9 and 5.7), like the half-Cauchy distribution on scale parameters. It may not be completely clear how weak a weakly informative prior should be, but this novel notion is worth including in a textbook. Maybe a mild reproach at this stage: Chapter 5, on hierarchical models, is too verbose for my taste, as it essentially focuses on the hierarchical linear model. Of course, this is an essential chapter, as it links exchangeability, the “atom” of Bayesian reasoning used by de Finetti, with hierarchical models. Still. Another comment on that chapter: it broaches the topic of improper posteriors by suggesting one run a Markov chain whose improper behaviour may then expose the improperness. When it happens as in the quote above, fine! But there is no guarantee this is always the case: for instance, improperness may be due to regions near zero rather than near infinity. And a last barb: there is a dense table (Table 5.4, p.124) that seems to run contrariwise to Andrew's avowed dislike of tables. I could also object to the idea of a “true prior distribution” (p.128), or comment on the trivia that hierarchical chapters seem to attract rats: I also included a rat example in the hierarchical Bayes chapter of Bayesian Choice, and so does the BUGS Book! Hence the conclusion that Bayesian textbooks are best avoided by muriphobiacs…
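To see what this suggestion amounts to, here is a tiny R illustration of my own, with a deliberately improper (flat) target: the random-walk Metropolis chain accepts every proposal and drifts off like a random walk instead of stabilising — although, as said above, improperness need not manifest itself so obligingly.

```r
## Random-walk Metropolis aimed at an improper "posterior", here a flat
## density on the real line; every move is accepted and the chain drifts.
set.seed(42)
niter <- 1e4
x <- numeric(niter)
log_target <- function(x) 0   # improper: constant density over R
for (t in 2:niter) {
  prop <- x[t - 1] + rnorm(1)
  ## acceptance probability is always 1 for a flat target
  x[t] <- if (log(runif(1)) < log_target(prop) - log_target(x[t - 1]))
    prop else x[t - 1]
}
plot(x, type = "l", ylab = "chain value")   # exhibits random-walk drift
```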

“Bayes factors do not work well for models that are inherently continuous (…) Because we emphasize continuous families of models rather than discrete choices, Bayes factors are rarely relevant in our approach to Bayesian statistics.” (p.183 & p.193)

Part II is about “the creative choices that are required, first to set up a Bayesian model in a complex problem, then to perform the model checking and confidence building that is typically necessary to make posterior inferences scientifically defensible” (p.139). It is certainly one of the strengths of the book that it allows for a critical look at models and tools that are rarely discussed in more theoretical Bayesian books. As detailed in my earlier post on Chapter 6, model checking is strongly advocated, via posterior predictive checks and… posterior predictive p-values, which are at best empirical indicators that something could be wrong, definitely not that everything's all right! Chapter 7 is the model-comparison equivalent of Chapter 6, starting with the predictive density (aka the evidence or the marginal likelihood), but completely bypassing the Bayes factor in favour of information criteria like the Watanabe-Akaike or widely applicable information criterion (WAIC), and advocating cross-validation, which is empirically satisfying but formally hard to integrate within a full Bayesian perspective. Chapter 8 is about data collection, sample surveys, randomization, and related topics, another entry that is missing from most Bayesian textbooks, maybe not that surprising given the research topics of some of the authors. And Chapter 9 is its symmetric counterpart, in that it focuses on the post-modelling step of decision making.
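Since WAIC may be new to many readers, here is a minimal R sketch of its computation (the function name and interface are mine) from an S×n matrix of pointwise log-likelihoods evaluated at S posterior draws, using the lppd-minus-penalty form with the variance-based penalty:

```r
## WAIC from log_lik[s, i] = log p(y_i | theta^(s)), theta^(s) a posterior
## draw; returns the deviance-scale criterion and its complexity penalty.
waic <- function(log_lik) {
  ## log pointwise predictive density, columnwise via stable log-sum-exp
  lppd <- sum(apply(log_lik, 2,
                    function(l) max(l) + log(mean(exp(l - max(l))))))
  p_waic <- sum(apply(log_lik, 2, var))   # effective number of parameters
  c(waic = -2 * (lppd - p_waic), p_waic = p_waic)
}
```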

(Second part of the review to appear on Monday, leaving readers the weekend to recover!)

MCMSki IV [day 3]

Posted in Mountains, pictures, R, Statistics, Travel, University life on January 9, 2014 by xi'an

Already on the final day…! And still this frustration at being unable to attend three sessions at once… Andrew Gelman started the day with a non-computational talk that touched on themes familiar to readers of his blog, on the misuse of significance tests and on recommendations for better practice. I then picked the Scaling and optimisation of MCMC algorithms session organised by Gareth Roberts, with optimal-scaling talks by Tony Lelièvre, Alex Théry, and Chris Sherlock, while Jochen Voss spoke about the convergence rate of ABC, a paper I already discussed on the blog. A fairly exciting session, showing that MCMC'ory (the name of a workshop I ran in Paris in the late '90s!) is still alive and well!

After the break (sadly without the ski race!), the software round-table session was something I had been looking forward to. The four packages covered by this round table were BUGS, JAGS, STAN, and BiiPS, each presented according to the same pattern. I would have liked to see a “battle of the bands”, illustrating the pros & cons of each language on a couple of models & datasets. STAN got the unofficial prize for cool tee-shirts (we should have asked the STAN team for poster-prize tee-shirts). And I had to skip the final session for a flu-related doctor's appointment…

I called for a BayesComp meeting at 7:30, hoping for current and future members to show up and discuss the format of future MCMSki meetings, maybe even proposing new locations on other “sides of the Italian Alps”! But (workshop fatigue syndrome?!) no one showed up. So anyone interested in discussing this issue is welcome to contact me or David van Dyk, the new BayesComp program chair.

Bayes on drugs (guest post)

Posted in Books, R, Statistics, University life on May 21, 2012 by xi'an

This post is written by Julien Cornebise.

Last week in Aachen was the third edition of the Bayes(Pharma) workshop. Its specificity: half-and-half industry/academic participants and speakers, all in pharmaceutical statistics, with great care taken to welcome newcomers to Bayes, so as to spread the love as widely as possible where it will actually be used. First things first: all the slides are available online, thanks to the speakers for sharing them. Full disclaimer: being part of the scientific committee of the workshop, I had a strong subjective prior.

Three days, 70 participants, we were fully booked, and even regretfully had to refuse registrations for lack of room space (German regulations are quite… enforced!). Time to size it up for next year, maybe?

My most vivid impression overall: I was struck by the interactivity of the questions and answers after each talk. Rarely fewer than 5 questions per talk (come on, we've all attended sessions where the chairman is forced to ask the lone question — no such thing here!), on all points of each talk, with cross-references from one question to the other, even from one *talk* to the other! Seeing so much interaction and discussion in spite of (or, probably, thanks to?) the diversity of the audience was a real treat: not only did the questions bring up additional details about the talks, they also, more importantly, shed precious light on the questioners' mindsets, their practical concerns and needs. Both academics and industry participants were learning on all counts — and, having sometimes seen failed marriages of the kind in the past (either a French round table degenerating into a nasty polemic on “research-induced tax credit”, or just a plain mismatch of interests), I was quite impressed that we were purely and simply all interested in multiple facets of the very same thing: the interface between pharma and stats.

As is now a tradition, the first day was a short course, this time by Prof. Emmanuel Lesaffre: based on his upcoming book on Bayesian Biostatistics (Xian, maybe a review someday?), it was meant to be introductory for newcomers to Bayes, but was still packed with enough “tricks of the trade” that even seasoned Bayesians could get something out of it. I very much appreciated the pedagogy of the “live” examples, with clear convergence caveats based on trace plots from common software (WinBUGS). The most vivid memory: his strong spotlight on INLA as “the future of Bayesian computation”. Although my research is mostly on MCMC/SMC, I am now damn curious to give it a serious try — all the more so after late-evening discussions with Gianluca Baio, who revealed that all his results were obtained in seconds of INLA computing.

Day 2 and half of day 3 were invited and contributed talks, all motivated by top-level applications. No convergence theorems here, but practical issues, with constraints that theoreticians (including myself!) would hardly guess exist: very small sample sizes, regulatory issues, competition with legacy methodology with only seconds-long runtimes (impossible to run 1 million MCMC steps!), and sometimes even imposed software due to validation processes! Again, as stated above, the number and quality of the questions are really what I will take away from those two days.

If I had to state one regret, maybe, it would be this unsatisfactory feeling that, for many newcomers, MCMC = WinBUGS — with its obvious restrictions. The lesson I learned: all the great methodological advances of the last 10 years, especially in adaptive MCMC, have not yet reached most practitioners, since they need *tools* they can use. It may be a sign that, as methodological researchers, we should put a stronger emphasis on bringing software packages forward (for R, of course, but also for JAGS or OpenBUGS!); not only a zip file with our article's code, but a full-fledged package, with ongoing support, maintenance, and a forum. That is a tough balance to find, since the time spent maintaining a package does not count towards the holy bibliometry… but doesn't it have more actual impact? Besides, more packages = fewer papers, but also = more citations of the corresponding papers. Some do take this road (Robert Gramacy's packages were cited last week as examples of great support, Andy Gelman and Matt Hoffman are working on the much-expected STAN, and I mentioned above Havard Rue's R-INLA), but I do not think it is yet considered “best practice”.

As a conclusion, this Bayes(Pharma) 2012 workshop reminded me a lot of the SAMSI 2010 Summer Program: while Bayes(Pharma) aims to be much more introductory, they had in common the same success in blending the pharma industry and academia. Could this be specific to pharma? In which case, I am looking forward, very much, to opening ISBA's specialized section on Biostat/Pharmastat that a few colleagues and I are currently working on (more on this here soon). With such a crowd on both sides of the Atlantic, and a looming Bayes 2013 in the Netherlands, that will be exciting.
