Archive for Valencia conferences

Cancún, ISBA 2014 [½ day #2]

Posted in pictures, Running, Statistics, Travel, University life on July 19, 2014 by xi'an


Half-day #2 indeed at ISBA 2014, as the Wednesday afternoon kept to the Valencia tradition of free time and potential cultural excursions, so there were only talks in the morning. And still the core poster session at (late) night, in which my student Kaniav Kamari presented a poster on a current project we are running with Kerrie Mengersen and Judith Rousseau on replacing the standard Bayesian testing setting with a mixture representation. Being half-asleep by the time the session started, I did not stay long enough to collect data on the reactions to this proposal, but the paper should be arXived pretty soon. And Kate Lee gave a poster on our importance sampler for evidence approximation in mixtures (soon to be revised!). There was also an interesting poster about reparameterisation towards higher efficiency of MCMC algorithms, intersecting with my long-standing interest in the matter, although I cannot find a mention of it in the abstracts. And I had a nice talk with Eduardo Gutierrez-Pena about inferring on credible intervals through loss functions. There were also a couple of appealing posters on g-priors. Except I was sleepwalking by the time I spotted them… (My conference sleeping pattern does not work that well for ISBA meetings! Thankfully, both next editions will be in Europe.)

Great talk by Steve MacEachern that linked to our ABC work on Bayesian model choice with insufficient statistics, arguing towards a robustification of Bayesian inference by only using summary statistics. Despite this being “against the hubris of Bayes”… Obviously, the talk just gave a flavour of Steve’s perspective on the topic and I hope I can read more to see how we agree (or not!) on this notion of using insufficient summaries to conduct inference rather than trying to model “the whole world”, given the mistrust we must preserve about models and likelihoods. And another great talk by Ioanna Manolopoulou on another of my pet topics, capture-recapture, although she phrased it as a partly identified model (as in Kline’s talk yesterday). This related to capture-recapture in that, when estimating a capture-recapture model with covariates, sampling and inference are biased as well. I particularly appreciated the use of BART to analyse the bias in the modelling. And the talk provided a nice counterpoint to Kline’s rather pessimistic approach.

Terrific plenary sessions as well, from Wikle’s spatio-temporal models (in the spirit of his superb book with Noel Cressie) to Igor Prünster’s great entry on Gibbs process priors. With the highly significant conclusion that those processes are best suited for (in the sense that they are only consistent for) discrete support distributions. Alternatives are to be used for continuous support distributions, the special case of a Dirichlet prior constituting a sort of unique counter-example. Quite an inspiring talk (even though I had a few micro-naps throughout it!).

I shared my afternoon free time between discussing the next O’Bayes meeting (2015 is getting very close!) with friends from the Objective Bayes section, getting a quick look at the Museo Maya de Cancún (terrific building!), and getting some work done (thanks to the lack of wireless…)

Cancún, ISBA 2014 [day #0]

Posted in Statistics, Travel, University life on July 17, 2014 by xi'an

Day zero at ISBA 2014! The relentless heat outside (making running an ordeal, even at 5:30am…) made the (air-conditioned) conference centre all the more attractive. Jean-Michel Marin and I had a great morning teaching our ABC short course and we do hope the class audience had one as well. Teaching as a pair is much more enjoyable than teaching solo, as we can interact with one another as well as with the audience. And realising unsuspected difficulties with the material is much easier this way, as the (mostly) passive instructor can spot the class’ reactions. This reminded me of the course we taught together in Oulu, northern Finland, in 2004, which ended up as Bayesian Core. We did not cover all the material we had prepared for this short course, but I think the pace was the right one. (Just tell me otherwise if you were there!) This was also the only time I had given a course wearing sunglasses, thanks to yesterday’s incident!

While waiting for a Spanish-speaking friend to kindly drive with me downtown Cancún to check whether or not an optician could make me new prescription glasses, I attended Jim Berger’s foundational lecture on frequentist properties of Bayesian procedures, but could only listen as the slides were impossible for me to read, with or without glasses. The partial overlap with his Varanasi lecture helped. I alas had to skip both Gareth Roberts’ and Sylvia Frühwirth-Schnatter’s lectures (apologies to both of them!), but the reward was to get a new pair of prescription glasses within a few hours. Perfectly suited to my vision! And to get back just in time to read the slides of Peter Müller’s lecture from the back row! Thanks to my friend Sophie for her negotiating skills! Actually, I am still amazed at getting glasses that quickly, given the time it would have taken in, e.g., France. All set for another 15 years with the same pair?! Only if I do not go swimming with them in anything but a quiet swimming pool!

The opening dinner happened to coincide with the (second) ISBA Fellow Award ceremony. Jim acted as the grand master of ceremonies and did a great job adding life and side stories to the written nominations for each and every one of the new Fellows. The Fellowships honoured Bayesian statisticians who had contributed to the field as researchers and to the society since its creation. I thus feel very honoured (and absolutely undeserving) to be included in this prestigious list, along with many friends. (But I would have loved to see two more former ISBA presidents included, esp. for their massive contribution to Bayesian theory and methodology…) And also glad to be wearing regular glasses instead of my morning sunglasses.

[My Internet connection during the meeting being abysmally poor, the posts will appear with some major delay! In particular, I cannot include new pictures at times I get a connection... Hence a picture of northern Finland instead of Cancún at the top of this post!]

Jeffreys prior with improper posterior

Posted in Books, Statistics, University life on May 12, 2014 by xi'an

In a complete coincidence with my visit to Warwick this week, I became aware of the paper “Inference in two-piece location-scale models with Jeffreys priors”, recently published in Bayesian Analysis by Francisco Rubio and Mark Steel, both from Warwick. A paper in which they exhibit a closed-form Jeffreys prior for the skewed distribution

\dfrac{2\epsilon}{\sigma_1}f(\{x-\mu\}/\sigma_1)\mathbb{I}_{x<\mu}+\dfrac{2(1-\epsilon)}{\sigma_2}f(\{x-\mu\}/\sigma_2) \mathbb{I}_{x>\mu}

where f is a symmetric density; the Jeffreys prior turns out to be

\pi(\mu,\sigma_1,\sigma_2) \propto 1 \big/ \sigma_1\sigma_2\{\sigma_1+\sigma_2\}\,,

with

\epsilon=\sigma_1/\{\sigma_1+\sigma_2\}\,.

Only to show immediately afterwards that this prior does not allow for a proper posterior, no matter what the sample size is. While the above skewed distribution can always be interpreted as a mixture, being a weighted sum of two terms, it is not strictly speaking a mixture, if only because the “component” can be identified from the observation (depending on which side of μ it stands). The likelihood is therefore a product of simple terms rather than a product of sums of two terms.
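As a quick numerical sanity check (a sketch of mine, not taken from the paper), here is the two-piece density in R with f the standard normal, σ1=1 and σ2=3, verifying that it integrates to one once ε is set to σ1/(σ1+σ2):

dtwopiece <- function(x, mu = 0, s1 = 1, s2 = 3) {
  eps <- s1 / (s1 + s2)
  ifelse(x < mu, 2 * eps * dnorm((x - mu) / s1) / s1,
                 2 * (1 - eps) * dnorm((x - mu) / s2) / s2)
}
integrate(dtwopiece, -Inf, Inf)   # returns 1 (up to numerical error)
curve(dtwopiece(x), -5, 10)       # visualising the skewed density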

As a solution to this conundrum, the authors consider the alternative of the “independent Jeffreys priors”, which are made of a product of conditional Jeffreys priors, i.e., by computing the Jeffreys prior one parameter at a time with all other parameters considered to be fixed. Which differs from the reference prior, of course, but would have been my second choice as well. Despite criticisms expressed by José Bernardo in the discussion of the paper… The difficulty (in my opinion) resides in the choice (and difficulty) of the parameterisation of the model, since those priors are not parameterisation-invariant. (Xinyi Xu makes the important comment that even those priors incorporate strong if hidden information. Which relates to our earlier discussion with Kaniav Kamari on the “dangers” of prior modelling.)

Although the outcome is puzzling, I remain just slightly sceptical about the input, namely the Jeffreys prior and the corresponding Fisher information: the fact that the density involves an indicator function, and is thus discontinuous in the location μ at the observation x, makes the likelihood function non-differentiable, and hence the derivation of the Fisher information not strictly valid, since the indicator part cannot be differentiated. Not that I see the Jeffreys prior as the ultimate grail of non-informative priors, far from it, but there is definitely something specific about the discontinuity in the density. (In connection with the latter point, Weiss and Suchard deliver a highly critical commentary on the non-need for reference priors and express a preference for a non-parametric Bayes primary analysis. Maybe making the point towards a greater convergence of the two perspectives, objective Bayes and non-parametric Bayes.)

This paper and the ensuing discussion about the properness of the Jeffreys posterior reminded me of our earliest paper on the topic with Jean Diebolt, where we used improper priors on location and scale parameters but prohibited allocations (in the Gibbs sampler) that would lead to fewer than two observations per component, thereby ensuring that the (truncated) posterior was well-defined. (This feature also remained in the Series B paper, submitted at the same time, namely mid-1990, but only published in 1994!) Larry Wasserman proved ten years later that this truncation led to consistent estimators, but I had not thought about it in a very long while. I still like this notion of forcing some (enough) datapoints into each component for an allocation (of the latent indicator variables) to be an acceptable Gibbs move. This is obviously not compatible with the iid representation of a mixture model, but it expresses the requirement that all components have a meaning in terms of the data, namely that each component contributed to generating part of the data. This translates as a form of weak prior information on how much we trust the model and on how meaningful each component is (in opposition to adding meaningless extra components with almost zero weights or almost identical parameters).
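To make the constraint concrete, here is a minimal sketch in R (assuming a two-component normal mixture with unit variances, equal weights, and flat improper priors on the means; this is an illustration, not our original implementation) where the allocation vector is redrawn until each component keeps at least two observations:

set.seed(1)
x <- c(rnorm(50, -2), rnorm(50, 3))      # toy data
K <- 2
mu <- c(-1, 1)
for (t in 1:1000) {
  repeat {                               # constrained allocation step
    w <- sapply(1:K, function(k) dnorm(x, mu[k]))
    z <- apply(w, 1, function(p) sample(1:K, 1, prob = p))
    if (min(tabulate(z, nbins = K)) >= 2) break
  }
  for (k in 1:K) {                       # flat prior, hence Gaussian conditional
    xk <- x[z == k]
    mu[k] <- rnorm(1, mean(xk), 1 / sqrt(length(xk)))
  }
}
mu                                       # one draw (of the means) from the truncated posterior

Redrawing the whole allocation vector until the constraint holds is a crude rejection step, but it does target the conditional distribution truncated to admissible allocations.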

As a marginal note, the insistence in Rubio and Steel’s paper that all observations in the sample be different also reminded me of a discussion I wrote for one of the Valencia proceedings (Valencia 6, in 1998) where Mark presented a paper with Carmen Fernández on this issue of handling duplicated observations modelled by absolutely continuous distributions. (I am afraid my discussion is not worth the $250 price tag set by Amazon!)

reading classics (#4,5,6)

Posted in Books, Kids, Statistics, University life on December 9, 2013 by xi'an

La Défense from Paris-Dauphine, Nov. 15, 2012

This week, thanks to a lack of clear instructions (from me) to my students in the Reading Classics student seminar, four students showed up with presentations! Since I had planned for two teaching blocks, three of them managed to fit within the three hours, while the fourth kindly agreed to wait until next week to present a paper by David Cox…

The first paper discussed therein was A new look at the statistical model identification, written in 1974 by Hirotugu Akaike. And presenting the AIC criterion. My student Rozan asked to give the presentation in French as he struggled with English, but it was still a challenge for him and he ended up staying too close to the paper to provide a proper perspective on why AIC is written the way it is, why it is (potentially) relevant for model selection, and why it is not such a definitive answer to the model selection problem. This is not the simplest paper in the list, to be sure, but some intuition could have been built from the linear model, rather than introducing the case of an ARMA(p,q) model without much explanation. (I actually wonder why the penalty for this model is (p+q)/T, rather than (p+q+1)/T to account for the additional variance parameter.) Or from simulations run on the performance of AIC versus other xICs…
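For the record, here is the kind of small experiment I had in mind (my own sketch, not taken from Akaike’s paper): simulate an ARMA(1,1) series and compare AIC over a grid of (p,q) orders in R.

set.seed(1)
y <- arima.sim(model = list(ar = 0.6, ma = 0.3), n = 200)   # true ARMA(1,1)
grid <- expand.grid(p = 0:2, q = 0:2)
fits <- lapply(seq_len(nrow(grid)), function(i)
  arima(y, order = c(grid$p[i], 0, grid$q[i])))
cbind(grid, AIC = sapply(fits, AIC))     # the smallest AIC usually lands near (1,1)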

The second paper was another classic, the original GLM paper by John Nelder and his coauthor Wedderburn, published in 1972 in Series A. A slightly easier paper, in that the notion of a generalised linear model is presented therein, with mathematical properties linking the (conditional) mean of the observation with the parameters, and several examples that could be discussed. Plus having the book as a backup. My student Ysé did a reasonable job presenting the concepts, but she would have benefited from this extra week to properly include the computations she ran in R around the glm() function… (The definition of the deviance was somewhat deficient, although this led to a small discussion during the class as to how the analysis of deviance was extending the then flourishing analysis of variance.) In the generic definition of generalised linear models, I was also reminded of the generality of the nuisance-parameter modelling, which makes the part of interest appear as an exponential shift on the original (nuisance) density.
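As an illustration of what such computations look like, here is the toy Poisson example from R’s own glm() help page, including the analysis-of-deviance table mentioned above:

counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome   <- gl(3, 1, 9)
treatment <- gl(3, 3)
fit <- glm(counts ~ outcome + treatment, family = poisson())
deviance(fit)                  # residual deviance
anova(fit, test = "Chisq")     # analysis of deviance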

The third paper, presented by Bong, was yet another classic, namely the FDR paper, Controlling the false discovery rate, by Benjamini and Hochberg, published in Series B (and recently promoted to the should-have-been-a-Read-Paper category by the RSS Research Committee, discussed at the Annual RSS Conference in Edinburgh four years ago, with the discussion published in Series B). This 2010 discussion would actually have been a good starting point for the class, but Bong was not aware of it and mentioned earlier papers extending the 1995 classic instead. She gave a decent presentation of the problem and of the solution of Benjamini and Hochberg, but I wonder how much of the novelty of the concept the class grasped. (I presume everyone was getting tired by then, as I was the only one asking questions.) The slides somewhat made it look too much like a simulation experiment… (Unsurprisingly, the presentation did not include any Bayesian perspective on the approach, even though such perspectives are quite natural and emerged very quickly once the paper was published. I remember for instance the Valencia 7 meeting in Tenerife where Larry Wasserman discussed the Bayesian-frequentist agreement in multiple testing.)
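As a side note, the Benjamini-Hochberg correction is a one-liner in R via p.adjust(); a tiny illustration on simulated p-values (90 true nulls, 10 signals, my own toy setting rather than anything from the presentation):

set.seed(2)
p <- c(runif(90), rbeta(10, 0.5, 10))            # 90 nulls, 10 alternatives
discoveries <- which(p.adjust(p, method = "BH") <= 0.05)
length(discoveries)                              # rejections at (nominal) FDR level 0.05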

simulating determinantal processes

Posted in Statistics, Travel on December 6, 2013 by xi'an

In the plane to Atlanta, I happened to read a paper called Efficient simulation of the Ginibre point process by Laurent Decreusefond, Ian Flint, and Anaïs Vergne (from Télécom ParisTech). “Happened to” as it was a conjunction of getting tipped off by my new Dauphine colleague (and fellow blogger!) Djalil Chafaï about the paper, having downloaded it prior to departure, and being stuck in a plane (after watching the only Chinese [somewhat] fantasy movie on board, Saving General Yang).

This is mostly a mathematics paper. While indeed a large chunk of it is concerned with the rigorous definition of this point process in an abstract space, the last part is about simulating such processes. They are called determinantal (and not detrimental, as I was tempted to interpret on my first read!) because the density of an n-set (x1, x2, …, xn) is given by a kind of generalised Vandermonde determinant

p(x_1,\ldots,x_n) = \dfrac{1}{n!} \text{det} \left( T(x_i,x_j) \right)

where T is defined in terms of an orthonormal family,

T(x,y) = \sum_{i=1}^n \psi_i(x) \overline{\psi_i(y)}.

(The number n of points can be simulated via an a.s. finite Bernoulli process.) Because of this representation, the sequence of conditional densities for the xi‘s (i.e., x1, then x2 given x1, etc.) can be found in closed form. In the special case of the Ginibre process, the ψi‘s are of the form

\psi_m(z) = z^m \exp\{-|z|^2/2\}/\sqrt{\pi\, m!}

and the process cannot be simulated as it has infinite mass, hence an a.s. infinite number of points. Somewhat surprisingly (as I thought this was the point of the paper), the authors then switch to a truncated version of the process that always has a fixed number N of points. And whose density has the closed form

p(x_1,\ldots,x_n) = \dfrac{1}{\pi^N} \prod_i \frac{1}{i!} \exp\{-|z_i|^2/2\}\prod_{i<j} |z_i-z_j|^2

It has an interestingly repulsive quality in that points cannot get close to one another. (It reminded me of the pinball sampler proposed by Kerrie Mengersen and myself at one of the Valencia meetings and not pursued since.) The conclusion (of this section) is anticlimactic, though, in that this density is known to also correspond to the distribution of the eigenvalues of a matrix with standardised complex Gaussian entries. The authors mention that the fact that the support is the whole complex space C^N is a difficulty, although I do not see why.
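Given this correspondence, a direct simulation of the truncated process is available (a sketch, assuming I read the equivalence correctly): draw an N×N matrix with iid standardised complex Gaussian entries and take its eigenvalues.

set.seed(3)
N <- 200
G <- matrix(complex(real = rnorm(N^2), imaginary = rnorm(N^2)) / sqrt(2), N, N)
ev <- eigen(G, only.values = TRUE)$values        # the N points of the (truncated) process
plot(ev, pch = 20, asp = 1, xlab = "Re(z)", ylab = "Im(z)")   # a repulsive cloud of points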

The following sections of the paper move to the Ginibre process restricted to a compact set, and then to the truncated Ginibre process restricted to a compact set, for which the authors develop corresponding simulation algorithms. There is however a drag in that the sequence of conditionals, while available in closed form, cannot be simulated efficiently and relies on a uniform accept-reject step instead. While I am certainly missing most of the points in the paper, I wonder whether a Gibbs sampler would not be an interesting alternative, given that the full (last) conditional is a Gaussian density…
