Archive for graduate course

Masterclass in Bayesian Asymptotics, Université Paris Dauphine, 18-22 March 2024

Posted in Books, pictures, Statistics, Travel, University life on December 8, 2023 by xi'an

During the week of 18-22 March 2024, Judith Rousseau (Paris Dauphine & Oxford) will teach a Masterclass on Bayesian asymptotics. The masterclass takes place in Paris (on the PariSanté Campus) and consists of morning lectures and afternoon labs. Attendance is free, but registration before 11 March is compulsory (the building is not accessible without prior registration).

The plan of the course is as follows:

Part I: Parametric models
In this part, well- and mis-specified models will be considered.
– Asymptotic posterior distribution: asymptotic normality of the posterior, penalization induced by the prior, and the Bernstein–von Mises theorem (a textbook statement is sketched after this outline). Regular and nonregular models will be treated.
– Marginal likelihood and consistency of Bayes factors/model selection approaches.
– Empirical Bayes methods: asymptotic posterior distribution for parametric empirical Bayes methods.

Part II: Nonparametric and semiparametric models
– Posterior consistency and posterior convergence rates: statistical loss functions using the theory initiated by L. Schwartz and developed by Ghosal and van der Vaart, with results on less standard or less well-behaved losses.
– Semiparametric Bernstein–von Mises theorems.
– Nonparametric Bernstein–von Mises theorems and uncertainty quantification.
– Stepping away from pure Bayes approaches: generalized Bayes, one-step posteriors, and cut posteriors.
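As a pointer for readers unfamiliar with the central result of Part I, the parametric Bernstein–von Mises theorem (a standard textbook statement, not taken from the course material) asserts that, for a regular well-specified model with true parameter θ₀ and under mild conditions on the prior,

```latex
\left\| \Pi\!\left( \cdot \mid X_{1:n} \right)
      - \mathcal{N}\!\left( \hat\theta_n ,\; n^{-1} I(\theta_0)^{-1} \right)
\right\|_{\mathrm{TV}}
\;\xrightarrow[n \to \infty]{P_{\theta_0}}\; 0,
```

where \hat\theta_n is an efficient estimator (e.g., the maximum likelihood estimator) and I(\theta_0) the Fisher information, so that the influence of the prior is washed out to first order.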

Couplings and Monte Carlo [advanced graduate course at Dauphine by Pierre Jacob]

Posted in Kids, pictures, Statistics, Travel on January 20, 2020 by xi'an

As a visiting professor at Paris-Dauphine next month, Pierre Jacob will give a series of lectures on couplings and Monte Carlo. The lectures take place on Feb. 13, 14, 25, and 27 at Université Paris-Dauphine, the first two starting at 8:30 (room E) and the last two starting at 13:45 (rooms F and D201, respectively). Attendance is open to all and the material will be made available on the lecture webpage.
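For the curious, here is a minimal sketch of a maximal coupling of two distributions p and q, one of the elementary building blocks behind coupling-based Monte Carlo methods such as unbiased MCMC. This is my own illustration, not material from the lectures; the function names and the Gaussian example are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def maximal_coupling(p_rvs, p_pdf, q_rvs, q_pdf, rng):
    """Sample a pair (X, Y) with X ~ p and Y ~ q that meets (X = Y)
    with the maximal possible probability (Thorisson's algorithm)."""
    x = p_rvs(rng)
    if rng.uniform() * p_pdf(x) <= q_pdf(x):
        return x, x                      # the two draws coincide
    while True:                          # otherwise sample Y from the residual of q
        y = q_rvs(rng)
        if rng.uniform() * q_pdf(y) > p_pdf(y):
            return x, y

# illustrative use: coupling N(0,1) and N(1,1)
rng = np.random.default_rng(0)
pairs = [maximal_coupling(lambda r: r.normal(0, 1), norm(0, 1).pdf,
                          lambda r: r.normal(1, 1), norm(1, 1).pdf, rng)
         for _ in range(1000)]
print(np.mean([x == y for x, y in pairs]))  # ≈ 1 − total variation distance between p and q
```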

difficult times for postdocs

Posted in Kids, Travel, University life on July 16, 2016 by xi'an

[Flight to Montpellier, Dec. 06, 2011]

In the plane to Warwick on Monday, I was reading my latest issue of Nature and found an interesting editorial on the financial plight of many graduates and post-docs in both the US and the UK (and certainly elsewhere), who, despite holding a fellowship, cannot make ends meet. This is particularly true in expensive cities like London, Oxford, or even Paris, where rents force those new researchers into long commutes. The editorial suggests taking extra jobs to make up for financial difficulties, but this does not sound to me like a particularly pertinent recommendation if it means taking time away from one’s research, at the period in a researcher’s career when one’s energy should be mostly directed at producing the papers that will secure a (more) permanent job. Even teaching can prove too time-consuming for finishing PhD students. A match between the needs of those young researchers and the institutional support they receive would seem a natural requirement, while graduates looking for a fellowship should truly assess that match in detail before accepting an offer. Which of course is not always easy. In countries where post-doctoral contracts are not negotiable and are set at a national level (like, e.g., France), checking with earlier fellows is a must. (As it happens or happened, I was quite lucky to spend my post-doctoral years in cheap places with decent support from the local universities, but this is not relevant in today’s environment!)

marauders of the lost sciences

Posted in Books, Statistics, University life on October 26, 2014 by xi'an

The editors of a new blog entitled Marauders of the Lost Sciences (Learn from the giants) sent me an email announcing its launch, with a short excerpt from a giant in maths or stats to be posted every day:

There is a new blog I wanted to tell you about which excerpts one interesting or classic paper or book a day from the mathematical sciences. We plan on daily posting across the range of mathematical fields and at any level, but about 20-30% of the posts in queue are from statistics.

The goal is to entice people to read the great works of old.

The first post today was from an old paper by Fisher applying Group Theory to the design of experiments.

Interesting concept, which will hopefully generate comments to put the quoted passage into context. Somewhat connected to my Reading Statistical Classics posts. Which incidentally, after first looking like it would not take place this year since only two students had registered, should take place in the end since more students registered! (I am unsure about the references behind the title of that blog, besides Spielberg’s Raiders of the Lost Ark and Norman’s Marauders of Gor… I just hope Statistics does not qualify as a lost science!)

Bayesian Data Analysis [BDA3 – part #2]

Posted in Books, Kids, R, Statistics, University life on March 31, 2014 by xi'an

Here is the second part of my review of Gelman et al.’s Bayesian Data Analysis (third edition):

“When an iterative simulation algorithm is “tuned” (…) the iterations will not in general converge to the target distribution.” (p.297)

Part III covers advanced computation, obviously including MCMC but also model approximations like variational Bayes and expectation propagation (EP), with even a few words on ABC. The novelties in this part are centred on Stan, the language Andrew is developing around Hamiltonian Monte Carlo techniques, a sort of BUGS of the 10’s! (And of course on Hamiltonian Monte Carlo techniques themselves.) A few (nit)pickings: the book advises importance resampling without replacement (p.266), which makes some sense when using a poor importance function but ruins the fundamentals of importance sampling. Plus, no trace of infinite-variance importance sampling? Of harmonic means and their dangers? In the Metropolis-Hastings algorithm, the proposal is called the jumping rule and denoted by Jt, which, besides giving the impression of a Jacobian, seems to allow for time-varying proposals and hence time-inhomogeneous Markov chains, whose convergence properties are much hairier. (The warning comes much later, as exemplified in the above quote.) Moving from “burn-in” to “warm-up” to describe the beginning of an MCMC simulation. Being somewhat 90’s about convergence diagnoses (as shown by the references in Section 11.7), although the book also proposes new diagnoses and relies much more on effective sample sizes. Particle filters are evacuated in hardly half a page, maybe because Stan does not handle particle filters. A lack of intuition about Hamiltonian Monte Carlo algorithms, as the book plunges immediately into a two-page pseudo-code description, still using physics vocabulary that puts me (and maybe only me) off. Although I appreciated the advice to check analytical gradients against their numerical counterparts.
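Since the jumping-rule notation came up, here is a minimal random-walk Metropolis sketch of my own (not the book's code), using a fixed symmetric Gaussian jumping rule, hence a time-homogeneous chain; the names and defaults are illustrative.

```python
import numpy as np

def random_walk_metropolis(log_target, theta0, n_iter=10_000, scale=1.0, seed=0):
    """Random-walk Metropolis with a fixed Gaussian jumping rule J(.|theta)."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    log_p = log_target(theta)
    chain = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        proposal = theta + scale * rng.standard_normal(theta.size)  # J does not depend on t
        log_p_prop = log_target(proposal)
        # symmetric jumping rule: the Hastings ratio reduces to the target ratio
        if np.log(rng.uniform()) < log_p_prop - log_p:
            theta, log_p = proposal, log_p_prop
        chain[t] = theta
    return chain

# illustrative use: a standard normal target
chain = random_walk_metropolis(lambda th: -0.5 * np.sum(th**2), theta0=0.0)
print(chain.mean(), chain.std())  # roughly 0 and 1 after discarding the warm-up
```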

“In principle there is no limit to the number of levels of variation that can be handled in this way. Bayesian methods provide ready guidance in handling the estimation of the unknown parameters.” (p.381)

I also enjoyed reading the part about modes that stand at the boundary of the parameter space (Section 13.2), even though I do not think modes are great summaries in Bayesian frameworks, and while I do not see how picking the prior to avoid modes at the boundary prevents, in the end, the data from impacting the prior. The variational Bayes section (13.7) is equally enjoyable, with a properly spelled-out illustration, an unusual feature for Bayesian textbooks. (Except that sampling without replacement is back!) Same comments for the Expectation Propagation (EP) section (13.8), which covers brand new notions. (Will they stand the test of time?!)

“Geometrically, if β-space is thought of as a room, the model implied by classical model selection claims that the true β has certain prior probabilities of being in the room, on the floor, on the walls, in the edge of the room, or in a corner.” (p.368)

Part IV is a series of five chapters about regression(s). This is somewhat of a classic; nonetheless, Chapter 14 surprised me with an elaborate election example that dabbles in advanced topics like causality and counterfactuals. I did not spot any reference to the g-prior or to its intuitive justifications, and the chapter mentions the lasso as a regularisation technique, but without any proper definition of this “popular non-Bayesian form of regularisation” (p.368). To put it plainly: not a single equation! (The standard definition is recalled below.) Additional novelty may lie in the numerical prior information about the correlations. What is rather crucially (cruelly?) missing, though, is a clearer treatment of variable selection in regression models. I know Andrew opposes any notion of a coefficient being exactly equal to zero, as ridiculed through the above quote, but the book does not reject model selection, so why not in this context?! Chapter 15 on hierarchical extensions stresses the link with exchangeability, once again. With another neat election example justifying the progressive complexification of the model and the cranks and toggles of model building. (I am not certain the reparameterisation advice on p.394 is easily ingested by a newcomer.) The chapters on robustness (Chap. 17) and missing data (Chap. 18) sound slightly less convincing to me, esp. the one about robustness, as I never got how to make robustness agree with my Bayesian perspective. The book states “we do not have to abandon Bayesian principles to handle outliers” (p.436), but I would object that the Bayesian paradigm compels us to define an alternative model for those outliers and the way they are produced. One can always resort to a drudging exploration of which subsample of the dataset is at odds with the model, but this may be unrealistic for large datasets and further tells us nothing about how to handle those datapoints. The missing data chapter is certainly relevant to such a comprehensive textbook, and I liked the survey illustration where the missing data was in fact made of missing questions. However, I felt the multiple imputation part was not well presented, fearing readers would not understand how to handle it…
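For the record, the standard definition the chapter omits is the penalised least-squares problem

```latex
\hat\beta^{\mathrm{lasso}}
  = \arg\min_{\beta \in \mathbb{R}^p}
    \left\{ \tfrac{1}{2}\, \lVert y - X\beta \rVert_2^2
            + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert \right\},
  \qquad \lambda > 0,
```

which, from a Bayesian viewpoint, returns the posterior mode under independent Laplace (double-exponential) priors on the coefficients.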

“You can use MCMC, normal approximation, variational Bayes, expectation propagation, Stan, or any other method. But your fit must be Bayesian.” (p.517)

Part V concentrates the most advanced material, with Chapter 19 being mostly an illustration of a few complex models, slightly superfluous in my opinion, Chapter 20 a very short introduction to functional bases, including a basis selection section (20.2) that implements the “zero coefficient” variable selection principle refuted in the regression chapter(s), and does not go beyond splines (what about wavelets?), Chapter 21 a (quick) coverage of Gaussian processes with the motivating birth-date example (and two mixture datasets I used eons ago…), Chapter 22 a more (too much?) detailed study of finite mixture models, with no coverage of reversible-jump MCMC, and Chapter 23 an entry on Bayesian non-parametrics through Dirichlet processes.

“In practice, for well separated components, it is common to remain stuck in one labelling across all the samples that are collected. One could argue that the Gibbs sampler has failed in such a case.” (p.535)

To get back to mixtures, I liked the quote about the label-switching issue above, as I was “one” who argued that the Gibbs sampler fails to converge! The corresponding section seems to favour providing a density estimate for mixture models, rather than component-wise evaluations, but it nonetheless mentions the relabelling-by-permutation approach (though missing our 2000 JASA paper). The section about inferring on the unknown number of components suggests conducting a regular Gibbs sampler on a model with an upper bound on the number of components and then checking for empty components, an idea I (briefly) considered in the mid-1990’s before the advent of RJMCMC. (A minimal sketch of this overfitted-mixture idea follows.) Of course, the prior on the components matters, and the book suggests using a Dirichlet prior on the weights whose parameters have a fixed sum, like 1, whatever the number of components.
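To make the suggestion concrete, here is a minimal sketch of my own (not the book's code) of a Gibbs sampler for a univariate Gaussian mixture with known variance, an upper bound K on the number of components, and a Dirichlet prior whose parameters sum to one; it merely records how many components remain occupied along the iterations.

```python
import numpy as np

def gibbs_overfitted_mixture(y, K=10, n_iter=2000, sigma=1.0, tau=5.0, seed=0):
    """Gibbs sampler for a univariate Gaussian mixture with known component
    variance sigma^2, an upper bound K on the number of components, and a
    Dirichlet(1/K, ..., 1/K) prior on the weights; returns the number of
    occupied (non-empty) components at each iteration."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    alpha = np.full(K, 1.0 / K)              # Dirichlet parameters summing to 1
    mu = rng.normal(0.0, tau, K)             # component means, N(0, tau^2) prior
    w = rng.dirichlet(alpha)                 # mixture weights
    occupied = np.empty(n_iter, dtype=int)
    for it in range(n_iter):
        # 1. sample allocations z_i given weights and means
        logp = np.log(np.clip(w, 1e-300, None))[None, :] \
               - 0.5 * ((y[:, None] - mu[None, :]) / sigma) ** 2
        logp -= logp.max(axis=1, keepdims=True)
        prob = np.exp(logp)
        prob /= prob.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=p_i) for p_i in prob])
        counts = np.bincount(z, minlength=K)
        # 2. sample weights given allocations (conjugate Dirichlet update)
        w = rng.dirichlet(alpha + counts)
        # 3. sample means given allocations and data (conjugate normal update)
        for k in range(K):
            prec = counts[k] / sigma**2 + 1.0 / tau**2
            mean = (y[z == k].sum() / sigma**2) / prec
            mu[k] = rng.normal(mean, 1.0 / np.sqrt(prec))
        occupied[it] = int((counts > 0).sum())
    return occupied

# illustrative use on simulated two-component data
# y = np.concatenate([np.random.normal(-2, 1, 100), np.random.normal(3, 1, 100)])
# print(np.bincount(gibbs_overfitted_mixture(y), minlength=11))
```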

“14. Objectivity and subjectivity: discuss the statement ‘People tend to believe results that support their preconceptions and disbelieve results that surprise them. Bayesian methods tend to encourage this undisciplined mode of thinking.’” (p.100)

Obviously, this being a third edition begets the question, what’s up, doc?!, i.e., what’s new [when compared with the second edition]? Quite a lot, even though I am not enough of a Gelmanian exegete to produce a comparison table. Well, for a starter, David Dunson and Aki Vehtari joined the authorship, mostly contributing to the advanced sections on non-parametrics, Gaussian processes, and EP algorithms. Then the Hamiltonian Monte Carlo methodology and Stan, of course, which is now central to Andrew’s interests. The book does include a short appendix on running computations in R and in Stan. Further novelties were mentioned above, like the vision of weakly informative priors taking over noninformative priors, but I think this edition of Bayesian Data Analysis puts more stress on clever and critical model construction and on the fact that it can be done in a Bayesian manner. Hence the insistence on predictive and cross-validation tools. The book may be deemed somewhat short on exercises, providing between 3 and 20 mostly well-developed problems per chapter, often associated with datasets, rather than the less exciting counter-example above. Even though Andrew disagrees and his students at ENSAE this year certainly did not complain, I personally feel a total of 220 exercises is not enough for instructors and self-study readers. (At least, this reduces the number of email requests for solutions! Esp. when 50 of those are solved on the book website.) But this aspect is a minor quip: overall this is truly the reference book for a graduate course on Bayesian statistics, and not only on Bayesian data analysis.