Archive for likelihood ratio

dodging bullets, IEDs, and fingerprint detection at SimStat19

Posted in pictures, Statistics, University life on September 10, 2019 by xi'an

I attended a fairly interesting forensic science session at SimStat 2019 in Salzburg, as it concentrated on evidence and measures of evidence rather than on strict applications of Bayesian methodology to forensic problems, even though American agencies like the FBI or various police departments were involved. It was a highly coherent session and I had a pleasant discussion with some of the speakers afterwards. For instance, my friend Alicia Carriquiry presented an approach to determine from images of bullets whether or not they have been fired from the same gun, leading to an interesting case for a point null hypothesis where the point null makes complete sense. The work has been published in the Annals of Applied Statistics and is used in practice. The second talk, by Danica Ommen, was on fiducial forensics for IEDs, asking whether or not copper wires used in the bombs are the same, which is another point null illustration. It also raised an interesting question about the dependence of the alternative prior on the distribution of material chosen, as it is supposed to cover all possible origins for the disputed item. More interestingly, the talk launched into a discussion of making decisions based on finite samples and unknown parameters, not that specific to forensics, with a definitely surprising representation of the Bayes factor as an expected likelihood ratio, which first reminded me of Aitkin's (1991) infamous posterior likelihood (!) before it dawned on me that this was a form of bridge sampling identity where the likelihood ratio only involves parameters common to both models, making it an expression well-defined under both models. This identity could be generalised to the general case by considering a ratio of integrated likelihoods, the extreme case being the ratio equal to the Bayes factor itself. The following two talks, by Larry Tang and Christopher Saunders, also focused on the likelihood ratio and its statistical estimation, debating the coherence of using a score function and presenting a functional ABC algorithm where the prior is a (functional) Dirichlet prior. Thus a definitely relevant session from a Bayesian perspective!
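As an aside, here is the identity as I reconstructed it after the session, a minimal sketch assuming both models share the same parameter θ and the same prior π(θ), with my own notation rather than the speaker's:

```latex
B_{12}(x)
  = \frac{m_1(x)}{m_2(x)}
  = \frac{\int L_1(\theta\mid x)\,\pi(\theta)\,\mathrm d\theta}
         {\int L_2(\theta\mid x)\,\pi(\theta)\,\mathrm d\theta}
  = \int \frac{L_1(\theta\mid x)}{L_2(\theta\mid x)}\,\pi_2(\theta\mid x)\,\mathrm d\theta
  = \mathbb E_{\pi_2(\theta\mid x)}\!\left[\frac{L_1(\theta\mid x)}{L_2(\theta\mid x)}\right],
  \qquad \pi_2(\theta\mid x) = \frac{L_2(\theta\mid x)\,\pi(\theta)}{m_2(x)}.
```

The ratio inside the expectation only involves the parameter common to both models, so it is well-defined under either of them, which is exactly the basic bridge sampling identity; replacing the two likelihoods by integrated likelihoods gives the generalisation mentioned above.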

 

a paradox about likelihood ratios?

Posted in Books, pictures, Statistics, University life on January 15, 2018 by xi'an

Aware of my fascination for paradoxes (and heterodox publications), Ewan Cameron sent me the link to a recent arXival by Louis Lyons (Oxford) on different asymptotic distributions of the likelihood ratio. Which is full of approximations. The overall point of the note is hard to fathom… Unless it simply plans to illustrate Betteridge’s law of headlines, as suggested by Ewan.

For instance, the limiting distribution of the log-likelihood of an exponential sample at the true value of the parameter τ is not asymptotically Gaussian but diverges almost surely. While twice the log of the (Wilks) likelihood ratio at the true value of τ is truly (if asymptotically) a χ² variable with one degree of freedom. That it is not a Gaussian is deemed a “paradox” by the author, explained by a cancellation of first order terms… Same thing again for the common Gaussian mean problem!
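To see the χ²₁ behaviour rather than take it on faith, here is a quick simulation, not taken from Lyons' note, assuming an Exponential sample parameterised by its scale τ:

```python
# Quick check (not from Lyons' note): for an Exponential(scale=tau) sample,
# the Wilks statistic 2[l(tau_hat) - l(tau_0)] at the true tau_0 should be
# approximately chi-square with one degree of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, tau0, reps = 100, 2.0, 10_000

wilks = np.empty(reps)
for r in range(reps):
    x = rng.exponential(scale=tau0, size=n)
    tau_hat = x.mean()                                  # MLE of the scale
    loglik = lambda t: -n * np.log(t) - x.sum() / t     # exponential log-likelihood
    wilks[r] = 2 * (loglik(tau_hat) - loglik(tau0))

print("simulated quantiles:", np.quantile(wilks, [0.5, 0.9, 0.95]).round(2))
print("chi2(1) quantiles:  ", stats.chi2.ppf([0.5, 0.9, 0.95], df=1).round(2))
```

The raw log-likelihood at τ₀, by contrast, is a sum of n terms and simply drifts off with n, which is all the “paradox” amounts to.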

two, three, five, …, a million standard deviations!

Posted in Books, Statistics, University life on September 26, 2014 by xi'an

I first spotted Peter Coles' great post title “Frequentism: the art of probably answering the wrong question” (a very sensible piece by the way, mentioning a physicist's view on the Jeffreys-Lindley paradox I had intended to comment upon) and from there the following site jumping occurred:

“I confess that early in my career as a physicist I was rather cynical about sophisticated statistical tools, being of the opinion that “if any of this makes a difference, just get more data”. That is, if you do enough experiments, the confidence level will be so high that the exact statistical treatment you use to evaluate it is irrelevant.” John Butterworth, Sept. 15, 2014

After Val Johnson's suggestion to move the significance level from .05 down to .005, hence roughly from 2σ up to 3σ, John Butterworth, a physicist whose book Smashing Physics just came out, discusses in The Guardian the practice of using 5σ in Physics. It is actually induced by Louis Lyons' arXival of a recent talk with the following points (discussed below, with a quick σ-to-tail-probability conversion sketched after the list):

  1. Should we insist on the 5 sigma criterion for discovery claims?
  2. The probability of A, given B, is not the same as the probability of B, given A.
  3. The meaning of p-values.
  4. What is Wilks Theorem and when does it not apply?
  5. How should we deal with the `Look Elsewhere Effect’?
  6. Dealing with systematics such as background parametrisation.
  7. Coverage: What is it and does my method have the correct coverage?
  8. The use of p0 versus p1 plots.
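For the record, here is the σ-to-tail-probability conversion behind points 1 and 3, using one-sided standard Normal tails (the one- versus two-sided convention is my assumption, since that choice is itself part of the debate):

```python
# Translate "k sigma" discovery thresholds into one-sided Gaussian tail probabilities.
from scipy.stats import norm

for k in (2, 3, 5):
    print(f"{k} sigma  ->  one-sided p-value ~ {norm.sf(k):.2e}")
# 2 sigma -> ~2.3e-02, 3 sigma -> ~1.3e-03, 5 sigma -> ~2.9e-07
```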


who’s afraid of the big B wolf?

Posted in Books, Statistics, University life on March 13, 2013 by xi'an

Aris Spanos just published a paper entitled “Who should be afraid of the Jeffreys-Lindley paradox?” in the journal Philosophy of Science. This piece is a continuation of the debate about frequentist versus likelihoodist versus Bayesian (should it be Bayesianist?! or Laplacist?!) testing approaches, exposed in Mayo and Spanos' Error and Inference, and discussed in several posts of the 'Og. I started reading the paper in conjunction with a paper I am currently writing for a special volume in honour of Dennis Lindley, a paper that I will discuss later on the 'Og…
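Since the paradox itself is never restated above, here is a minimal numerical illustration (my own toy numbers, not Spanos'): testing a Normal mean μ=0 against μ≠0 with a N(0,1) prior on μ, keep the p-value fixed near 0.05 (i.e. |x̄|√n = 1.96) and let n grow; the Bayes factor in favour of the null then increases without bound.

```python
# Toy illustration of the Jeffreys-Lindley paradox (my numbers, not Spanos'):
# fixed p-value (z = 1.96) but Bayes factor B01 in favour of the null grows with n.
import numpy as np
from scipy.stats import norm

z = 1.96                                   # fixed two-sided p-value ~ 0.05
for n in (10, 100, 10_000, 1_000_000):
    xbar = z / np.sqrt(n)
    # marginal of xbar under H0: N(0, 1/n); under H1 with mu ~ N(0, 1): N(0, 1 + 1/n)
    b01 = norm.pdf(xbar, 0, np.sqrt(1 / n)) / norm.pdf(xbar, 0, np.sqrt(1 + 1 / n))
    print(f"n = {n:>9,d}   B01 = {b01:8.2f}")
```

The same z that a frequentist reads as significant evidence against the null turns, for large n, into overwhelming support for it, which is the clash the paper (and the whole Mayo and Spanos exchange) is about.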

“…the postdata severity evaluation (…) addresses the key problem with Fisherian p-values in the sense that the severity evaluation provides the “magnitude” of the warranted discrepancy from the null by taking into account the generic capacity of the test (that includes n) in question as it relates to the observed data” (p. 88)

First, the antagonistic style of the paper reminds me of Spanos' previous works in that it relies on repeated value judgements (such as “Bayesian charge”, “blatant misinterpretation”, “Bayesian allegations that have undermined the credibility of frequentist statistics”, “both approaches are far from immune to fallacious interpretations”, “only crude rules of thumbs”, &tc.) and rhetorical sleights of hand. (See, e.g., “In contrast, the severity account ensures learning from data by employing trustworthy evidence (…), the reliability of evidence being calibrated in terms of the relevant error probabilities” [my stress].) Connectedly, Spanos often resorts to an unusual [at least for statisticians] vocabulary that amounts to newspeak. Here are some illustrations: “summoning the generic capacity of the test”, “substantively significant”, “custom tailoring the generic capacity of the test”, “the fallacy of acceptance”, “the relevance of the generic capacity of the particular test”; yes, the term “generic capacity” occurs there with a truly high frequency.

Statistical Inference

Posted in Books, Statistics, University life on November 16, 2010 by xi'an

Following the publication of several papers on the topic of integrated evidence (about competing models), Murray Aitkin has now published a book entitled Statistical Inference and I have now finished reading it. While I appreciate the effort made by Murray Aitkin to place his theory within a coherent Bayesian framework, I remain unconvinced of the said coherence, for reasons exposed below.

The main chapters of the book are Chapter 2 about the “Integrated Bayes/likelihood approach” and Chapter 4 about the “Unified analysis of finite populations”, Chapter 7 also containing a new proposal about “Goodness of fit and model diagnostics”. Chapter 1 is a nice introduction to frequentist, likelihood and Bayesian approaches to inference and the four remaining chapters are applications of Murray Aitkin's principles to various models. The style of the book is quite pleasant, although slightly discursive, in what I (a Frenchman!) would qualify as an English style, in that it often relies on intuition to develop concepts. I also think that the argument of being close to the frequentist decision (aka the p-value) too often serves as a justification in the book (see, e.g., page 43, “the p-value has a direct interpretation as a posterior probability”). As an aside, Murray Aitkin is a strong believer in plotting cdfs rather than densities to provide information about a distribution, hence cdf plots abound throughout the book. (I counted 82 of them.) While the book contains a helpful array of examples and datasets, the captions of the (many) figures are too terse for my taste: the figures are certainly not self-contained and even with the help of the main text they do not always make complete sense.
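To make the object of the debate concrete, here is a minimal Monte Carlo sketch of how I understand the integrated (posterior) likelihood ratio to operate, on a toy Normal mean comparison; the conjugate setup and all names are mine, not taken from the book:

```python
# Toy sketch (mine, not Aitkin's code) of the "posterior likelihood ratio":
# the likelihood ratio of M1 vs M0 is treated as a random variable, with the
# parameter drawn from its posterior under M1, and its posterior distribution
# is reported instead of a single Bayes factor.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.0, size=50)                # simulated data
n, xbar = x.size, x.mean()

# M0: x_i ~ N(0, 1)     vs     M1: x_i ~ N(mu, 1) with prior mu ~ N(0, 1)
post_var = 1.0 / (n + 1.0)                       # conjugate posterior of mu under M1
post_mean = n * xbar * post_var
mu = rng.normal(post_mean, np.sqrt(post_var), size=50_000)

loglik0 = -0.5 * np.sum(x ** 2)
loglik1 = -0.5 * (np.sum(x ** 2) - 2 * mu * x.sum() + n * mu ** 2)
log_lr = loglik1 - loglik0                        # posterior sample of log L1/L0

print("posterior P(L1/L0 > 1)  :", np.mean(log_lr > 0).round(3))
print("posterior median log LR :", np.median(log_lr).round(3))
```

Whether reporting this posterior distribution of the likelihood ratio is a coherent Bayesian answer, rather than a double use of the data, is precisely the point I remain unconvinced about.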
