Archive for Adelaide

adjustment of bias and coverage for confidence intervals

Posted in Statistics on October 18, 2012 by xi'an

Menéndez, Fan, Garthwaite, and Sisson—whom I heard in Adelaide on that subject—posted yesterday a paper on arXiv about correcting the frequentist coverage of default intervals toward their nominal level. Given such an interval [L(x),U(x)], the correction for proper frequentist coverage is done by parametric bootstrap, i.e. by simulating n replicas of the original sample from the plug-in density f(.|θ*) and deriving the empirical cdfs of L(y)-θ* and U(y)-θ*. Under the assumption of consistency of the estimate θ*, this ensures convergence (in the original sample size) of the corrected bounds.
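
To fix ideas, here is a minimal sketch of this kind of parametric bootstrap coverage correction, not the paper's exact recipe: a deliberately too-short default interval for a normal mean is widened by shifting its bounds according to the empirical quantiles of L(y)-θ* and U(y)-θ* over the replicas (all function names and the z=1.5 miscalibration are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def default_interval(x, alpha=0.05):
    # deliberately miscalibrated default interval: z = 1.5 undercovers,
    # since a nominal 95% interval would need z ~ 1.96
    m, s = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
    z = 1.5
    return m - z * s, m + z * s

def corrected_interval(x, n_boot=400, alpha=0.05):
    theta_star = x.mean()                # plug-in estimate of the mean
    sigma_star = x.std(ddof=1)           # plug-in estimate of the scale
    L0, U0 = default_interval(x, alpha)
    dL, dU = [], []
    for _ in range(n_boot):
        # simulate a replica of the original sample from f(.|theta*)
        y = rng.normal(theta_star, sigma_star, size=len(x))
        Ly, Uy = default_interval(y, alpha)
        dL.append(Ly - theta_star)
        dU.append(Uy - theta_star)
    # shift each bound so that, under f(.|theta*), each tail of the
    # corrected interval misses theta* with probability alpha/2
    shift_L = -np.quantile(dL, 1 - alpha / 2)
    shift_U = -np.quantile(dU, alpha / 2)
    return L0 + shift_L, U0 + shift_U

x = rng.normal(0.0, 1.0, size=30)
print(default_interval(x), corrected_interval(x))
```

In this toy normal setting the shifts essentially stretch the z = 1.5 interval back toward the proper 1.96 width, which is what the coverage correction is meant to achieve; the caveat about cost below applies verbatim, since each replica requires re-deriving the bounds.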

Since ABC is based on the idea that pseudo data can be simulated from f(.|θ) for any value of θ, the concept “naturally” applies to ABC outcomes, as illustrated in the paper by a g-and-k noise MA(1) model. (As noted by the authors, there always is some uncertainty with the consistency of the ABC estimator.) However, there are a few caveats:

  • ABC usually aims at approximating the posterior distribution (given the summary statistics), of which the credible intervals are an inherent constituent. Hence, attempts at recovering a frequentist coverage seem contradictory with the original purpose of the method. Obviously, if ABC is instead seen as an inference method per se, like indirect inference, this objection does not hold.
  • Then, once the (umbilical) link with Bayesian inference is partly severed, there is no particular reason to stick to credible sets for [L(x),U(x)]. A more standard parametric bootstrap approach, based on the bootstrap distribution of θ*, should work as well. This means that a comparison with other frequentist methods like indirect inference could be relevant.
  • Lastly, as also noted by the authors, the method may prove extremely expensive. If the bounds L(x) and U(x) are obtained empirically from an ABC sample, a new ABC computation must be associated with each of the n replicas of the original sample. It would be interesting to compare the actual coverages of this ABC-corrected method with a more direct parametric bootstrap approach.

a paradox in decision-theoretic interval estimation (solved)

Posted in pictures, Statistics, Travel, University life on October 4, 2012 by xi'an

In 1993, we wrote a paper [with George Casella and Gene/Juinn Hwang] on the paradoxical consequences of using the loss function

\text{length}(C) - k \mathbb{I}_C(\theta)

(published in Statistica Sinica, 3, 141-155) since it led to the following property: for the standard normal mean estimation problem, the regular confidence interval is dominated by the modified confidence interval equal to the empty set when |x| is too large… This was first pointed out by Jim Berger and the most natural culprit is the artificial loss function, whose first part is unbounded while the second part is bounded by k. Recently, Paul Kabaila—whom I met in both Adelaide, where he quite appropriately commented on the abnormal talk at the conference, and Melbourne, where we met with his students after my seminar at the University of Melbourne—published a paper (first on arXiv, then in Statistics and Probability Letters) where he demonstrates that the mere modification of the above loss into

\dfrac{\text{length}(C)}{\sigma} - k \mathbb{I}_C(\theta)

solves the paradox! For Jeffreys’ non-informative prior, the Bayes (optimal) estimate is the regular confidence interval. Besides doing the trick, this nice resolution explains the earlier paradox as being linked to a lack of invariance in the (earlier) loss function. This is somehow satisfactory since Jeffreys’ prior is also the invariant prior in this case.

ASC 2012 (#3, also available by mind-reading)

Posted in Running, Statistics, University life on July 13, 2012 by xi'an

This final morning at the ASC 2012 conference in Adelaide, I attended a keynote lecture by Sophia Rabe-Hesketh on GLMs that I particularly appreciated, as I am quite fond of those polymorphous and highly adaptable models (witness the rich variety of applications at the INLA conference in Trondheim last month). I then gave my talk on ABC model choice, trying to cover the three episodes in the series within the allocated 40 minutes (and learned from Terry Speed the trivia that Renfrey Potts, father of the Potts model, spent most of his life in Adelaide, where he died in 2005! Terry added that Potts, a dedicated marathon runner, used to run along the Torrens river. This makes Adelaide the death place of both R.A. Fisher and R. Potts.)

Later in the morning, Christl Donnelly gave a fascinating talk on her experiences with government bodies during the BSE and foot-and-mouth epidemics in Britain in the past decades. It was followed by a frankly puzzling [keynote Ozcots] talk delivered by Jessica Utts on the issue of parapsychology tests, i.e. the analysis of experiments testing for “psychic powers”. Nothing less. Actually, I first thought this was a pedagogical trick to capture the attention of students and then debunk the claims; however, Utts’ focus on exhibiting such “powers” was definitely dead serious and she concluded that “psychic functioning appears to be a real effect”. So it came as a shock that she truly believes in psychic paranormal abilities! I had been under the wrong impression that her 2005 Statistical Science paper demonstrated the opposite, but it clearly belongs to the tradition of controversial Statistical Science papers that started with the Bible code paper… I also found it flabbergasting to learn that the U.S. Army is/was funding research in this area and is/was actually employing “psychics”, as well as that the University of Edinburgh has a parapsychology unit within the department of psychology. (But, after all, UK universities have also long had schools of Divinity, so the irrational was let in a while ago!)

ASC 2012 (#2)

Posted in pictures, Running, Statistics, Travel, University life on July 12, 2012 by xi'an

This morning, after a nice and cool run along the river Torrens amidst almost unceasing bird songs, I attended another Bayesian ASC 2012 session, with Scott Sisson presenting a simulation method aimed at correcting for biased confidence intervals and Robert Kohn giving the same talk he had given in Kyoto. Scott’s proposal, which is rather similar to parametric bootstrap bias correction, is actually more frequentist than Bayesian, as the bias is defined in terms of a correct frequentist coverage of a given confidence (or credible) interval. (Thus making the connection with Roderick Little’s calibrated Bayes talk of yesterday.) This perspective thus perceives ABC as a particular inferential method, instead of a computational approximation to the genuine Bayesian object. (We will certainly discuss the issue with Scott next week in Sydney.)

Then Peter Donnelly gave a particularly exciting and well-attended talk on the geographic classification of humans, in particular of the (early 1900s) population of the British Isles, based on a clever clustering idea derived from an earlier paper by Na Li and Matthew Stephens: using genetic sequences from a group of individuals, each individual was paired with the rest of the sample as if they descended from this population. Using an HMM model, this led to clustering the sample into about 50 groups, with a remarkable geographic homogeneity: for instance, Cornwall and Devon made two distinct groups, an English-speaking pocket of Wales (Little England) was identified as a specific group, and so on, with central, eastern and southern England constituting a homogeneous group of its own…

ASC 2012 (#1)

Posted in Statistics, Travel, University life on July 11, 2012 by xi'an

This morning I attended Alan Gelfand’s talk on directional data, i.e. on the torus (0,2π), and found his modeling via wrapped normals (i.e. normals reprojected onto the unit sphere) quite interesting, raising lots of probabilistic questions. For instance, usual moments like the mean and variance have no meaning in this space. The variance matrix of the underlying normal, as well as its mean, obviously matters. One thing I am wondering about is how restrictive the normal assumption is. Because of the projection, any random change to the scale of the normal vector does not impact this wrapped normal distribution, but there are certainly features that are not covered by this family. For instance, I suspect it can only offer at most two modes over the range (0,2π). And that it cannot be explosive at any point.
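
Reading “reprojected onto the unit sphere” as the projected-normal construction, the scale-invariance point is easy to check numerically (the mean vector, covariance matrix and rescaling factor below are illustrative values of mine, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

# draw bivariate normals and keep only their direction, i.e. the angle in [0, 2*pi)
mean = np.array([2.0, 1.0])
cov = np.array([[1.0, 0.3], [0.3, 2.0]])
z = rng.multivariate_normal(mean, cov, size=10_000)
theta = np.arctan2(z[:, 1], z[:, 0]) % (2 * np.pi)

# rescaling the normal vector by any positive factor leaves the angle unchanged,
# so the distribution on the circle only identifies (mean, covariance) up to scale
theta_scaled = np.arctan2(5.0 * z[:, 1], 5.0 * z[:, 0]) % (2 * np.pi)
```

This is exactly why the family has fewer effective parameters than the underlying normal: the angles θ and θ_scaled coincide, so an entire ray of (mean, covariance) pairs maps to the same circular distribution.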

The keynote lecture this afternoon was delivered by Roderick Little in a highly entertaining way, about calibrated Bayesian inference in official statistics. For instance, he mentioned the inferential “schizophrenia” in this field due to the divide between design-based and model-based inferences. Although he did not define what he meant by “calibrated Bayesian” in the most explicit manner, he had this nice list of eight good reasons to be Bayesian (that came close to my own list at the end of the Bayesian Choice):

  1. conceptual simplicity (Bayes is prescriptive, frequentism is not), “having a model is an advantage!”
  2. avoiding ancillarity angst (Bayes conditions on everything)
  3. avoiding confidence cons (confidence is not probability)
  4. nails nuisance parameters (frequentists are either wrong or have a really hard time)
  5. escapes from asymptotia
  6. incorporates prior information, and if there is none, weak priors work fine
  7. Bayes is useful (25 of the top 30 cited are statisticians out of which … are Bayesians)
  8. Bayesians go to Valencia! [joke! Actually it should have been Bayesian go MCMskiing!]
  9. Calibrated Bayes gets better frequentist answers

He however insisted that frequentists should be Bayesians and also that Bayesians should be frequentists, hence the calibration qualification.

After an interesting session on Bayesian statistics, with (adaptive or not) mixtures and variational Bayes tools, I actually joined the “young statistician dinner” (without any pretense at being a young statistician, obviously) and had interesting exchanges on a whole variety of topics, esp. as Kerrie Mengersen adopted (reinvented) my dinner table switch strategy (w/o my R simulated annealing code). Until jetlag caught up with me.

