Archive for tutorial

don’t be late for BayesComp’2020

Posted in Statistics on October 4, 2019 by xi'an

An important reminder that October 14 is the deadline for regular registration to BayesComp 2020, as late fees will apply afterwards!!! The conference looks attractive enough to agree to pay more, but still…

Hausdorff school on MCMC [28 March-02 April, 2020]

Posted in pictures, Statistics, Travel on September 26, 2019 by xi'an

The Hausdorff Centre for Mathematics will hold a week on recent advances in MCMC in Bonn, Germany, March 30 – April 3, 2020, preceded by two days of tutorials. (“These tutorials will introduce basic MCMC methods and mathematical tools for studying the convergence to the invariant measure.”) Travel support is available, but the application deadline, 30 September, is quite close.

Note that, in a Spring of German conferences, the SIAM Conference on Uncertainty Quantification will take place in Munich (Garching) the week before, on March 24-27, with at least one likelihood-free session. Not to mention the ABC in Grenoble workshop in France, on 19-20 March. (Although these places are not exactly nearby!)

deadlines for BayesComp’2020

Posted in pictures, Statistics, Travel, University life on August 17, 2019 by xi'an

While I forgot to send a reminder that August 15 was the first deadline of BayesComp 2020 for early registration, here are further deadlines and dates:

  1. BayesComp 2020 takes place on January 7-10, 2020, in Gainesville, Florida, USA
  2. Registration is open at regular rates until October 14, 2019
  3. The deadline for submission of poster proposals is December 15, 2019
  4. The deadline for travel support applications is September 20, 2019
  5. There are four free tutorials on January 7, 2020, on Stan, NIMBLE, SAS, and AutoStat

BayesComp 20 [full program]

Posted in pictures, R, Statistics, Travel, University life on April 15, 2019 by xi'an

The full program is now available on the conference webpage of BayesComp 20, taking place next 7-10 January 2020. There are eleven invited sessions, including one j-ISBA session, and a further thirteen contributed sessions were selected by the scientific committee. Calls are still open for tutorials on Tuesday 07 January (with two already planned, on NIMBLE and AutoStat) and for posters. Now is the best time to register! Note also that travel support should be available for junior researchers.

a Ca’Foscari [first Italian-French statistics seminar]

Posted in Kids, pictures, Statistics, Travel, University life on October 26, 2017 by xi'an

Apart from subjecting my [surprisingly large!] audience to three hours of ABC tutorial today, and after running the Ponte della Libertà to Mestre and back in a deep fog, I attended the second part of the 1st Italian-French statistics seminar at Ca’ Foscari, Venetiarum Universitas, with talks by Stéfano Tonellato and Roberto Casarin. Stéfano discussed a most interesting if puzzling notion of clustering via Dirichlet process mixtures. Which indeed puzzles me for its dependence on the Dirichlet measure and on the potential for an unlimited number of clusters as the sample size increases. The method offers similarities with an approach from our 2000 JASA paper on running inference on mixtures without proper label switching, in that looking at pairs of observations allocated to clusters is revealing about the [true or pseudo-true] number of clusters. With a divergence in using eigenvalues of Laplacians on similarity matrices. But because of the potential for the number of components to diverge, I wonder at the robustness of the approach via non-parametric [Bayesian] modelling. Maybe my difficulty lies with the very notion of cluster, which I find poorly defined and mostly in the eyes of the beholder! And Roberto presented recent work on SURE and VAR models, with a great graphical representation of the estimated connections between factors in a sparse graphical model.

back to ca’ Foscari, Venezia

Posted in Books, pictures, Statistics, Travel, University life, Wines on October 16, 2017 by xi'an

I am off to Venezia this afternoon for a Franco-Italian workshop organised by my friends Monica Billio, Roberto Casarin, and Matteo Iacopini, from the Department of Economics of Ca’ Foscari, almost exactly a year after my previous trip there for ESOBE 2016. (Except that this was before!) Tomorrow, I will give both a tutorial [for the second time in two weeks!] and a talk on ABC, hopefully with some portion of the audience still there for the second part!

Bayes’ Rule [book review]

Posted in Books, Statistics, University life on July 10, 2014 by xi'an

This introduction to Bayesian analysis, Bayes’ Rule, was written by James Stone from the University of Sheffield, who contacted CHANCE suggesting a review of his book. I thus bought it from Amazon to check the contents. And write a review.

First, the format of the book. It is a short paperback of 127 pages, plus 40 pages of glossary, appendices, references and index. I eventually found the name of the publisher, Sebtel Press, but for a while thought the book was self-published. While the LaTeX output is fine and the (Matlab) graphs are readable, the pictures are not of the best quality and the display editing is minimal, in that there are several huge white spaces between pages. Nothing major there, obviously; it simply makes the book look like course notes, but this is in no way detrimental to its potential appeal. (I will not comment on the numerous appearances of Bayes’ alleged portrait in the book.)

“… (on average) the adjusted value θMAP is more accurate than θMLE.” (p.82)

Bayes’ Rule has the interesting feature that, in the very first chapter, after spending a rather long time on Bayes’ formula, it introduces Bayes factors (p.15). With the somewhat confusing choice of calling the prior probabilities of hypotheses marginal probabilities. Even though they are indeed marginal given the joint, marginal is usually reserved for the sample, as in marginal likelihood. Before returning to more (binary) applications of Bayes’ formula for the rest of the chapter. The second chapter is about probability theory, which means here introducing the three axioms of probability and discussing geometric interpretations of those axioms and Bayes’ rule. Chapter 3 moves to the case of discrete random variables with more than two values, i.e. contingency tables, on which the range of probability distributions is (re-)defined, producing a new entry to Bayes’ rule. And to the MAP. Given this pattern, it is not surprising that Chapter 4 does the same for continuous parameters. The parameter of a coin flip. This allows for a discussion of uniform and reference priors. Including maximum entropy priors à la Jaynes. And bootstrap samples presented as approximating the posterior distribution under the “fairest prior”. And even two pages on standard loss functions. This chapter is followed by a short chapter dedicated to estimating a normal mean, then another short one on exploring the notion of a continuous joint (Gaussian) density.
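As a quick illustration of the coin-flip setting the book keeps returning to, here is a minimal R sketch (mine, not the book’s, with made-up data of 7 heads in 10 tosses) computing the posterior under a uniform prior, the MLE and MAP estimates, and a Bayes factor against a fair coin:

  # coin-flip example: k heads out of n tosses, Uniform(0,1) = Beta(1,1) prior
  n <- 10; k <- 7
  # posterior on theta is Beta(k+1, n-k+1); under the flat prior MAP = MLE = k/n
  theta_mle <- k / n
  theta_map <- k / n
  # Bayes factor of H1: theta ~ Uniform(0,1) against H0: theta = 1/2
  # (the binomial coefficient cancels in the ratio)
  marg_H1 <- beta(k + 1, n - k + 1)   # integrated likelihood under H1
  marg_H0 <- (1 / 2)^n                # likelihood under the point null
  BF10 <- marg_H1 / marg_H0
  round(c(mle = theta_mle, map = theta_map, BF10 = BF10), 3)

(Under the flat prior the MAP estimate coincides with the MLE; the quoted claim about the adjusted θMAP being more accurate only bites with an informative prior.)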

“To some people the word Bayesian is like a red rag to a bull.” (p.119)

Bayes’ Rule concludes with a chapter entitled Bayesian wars. A rather surprising choice, given the intended audience, which it is bound to confuse… The first part is about probabilistic ways of representing information, leading to subjective probability. The discussion goes on for a few pages to justify the use of priors, but I find completely unfair the argument that, because Bayes’ rule is a mathematical theorem, it “has been proven to be true”. It is indeed a maths theorem; however, that does not imply that any inference based on this theorem is correct! (A surprising parallel is Kadane’s Principles of Uncertainty with its anti-objective final chapter.)

All in all, I remain puzzled after reading Bayes’ Rule. Puzzled by the intended audience, as, contrary to other books I recently reviewed, the author does not shy away from mathematical notations and concepts, even though he proceeds quite gently through the basics of probability. Potential readers therefore need a modicum of mathematical background that some students may miss (although it actually corresponds to what my kids would have learned in high school). The book could thus constitute a soft entry to Bayesian concepts, before taking a formal course on Bayesian analysis. Hence doing no harm to the perception of the field.