Archive for statistical inference

inference with Wasserstein distance

Posted in Books, Statistics, University life on January 23, 2017 by xi'an

Today, Pierre Jacob posted on arXiv a paper of ours on the use of the Wasserstein distance in statistical inference, whose main focus is exploiting this distance to create an automated measure of discrepancy for ABC. Which is why the full title is Inference in generative models using the Wasserstein distance. Generative obviously standing for the case when a model can be generated from but cannot be associated with a closed-form likelihood. We had all together discussed this notion when I visited Harvard and Pierre last March, with much excitement. (While I have not contributed much more than that round of discussions and ideas to the paper, the authors kindly included me!) The paper contains theoretical results on the consistency of statistical inference based on those distances, as well as computational results on how the computation of these distances is practically feasible and on how the Hilbert space-filling curve used in sequential quasi-Monte Carlo can help. The notion further extends to dependent data via delay reconstruction and residual reconstruction techniques (as we did for some models in our empirical likelihood BCel paper). I am quite enthusiastic about this approach and look forward to discussing it at the 17w5015 BIRS ABC workshop, next month!
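To give a flavour of why such distances are computable at all, here is a minimal sketch (mine, not taken from the paper): in one dimension and with equal sample sizes, the 1-Wasserstein distance between two empirical distributions reduces to the mean absolute difference between order statistics, so a sort is all it takes to get an ABC-style discrepancy.

    # minimal sketch: 1-Wasserstein distance between two equal-size univariate
    # samples, computed by matching order statistics (sorting both samples)
    wasserstein1 <- function(x, y) mean(abs(sort(x) - sort(y)))

    # toy ABC-flavoured use: discrepancy between observed and synthetic data
    x_obs <- rgamma(100, shape = 2)     # pretend observed data
    x_syn <- rgamma(100, shape = 2.5)   # synthetic data from a candidate parameter
    wasserstein1(x_obs, x_syn)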

running out of explanations

Posted in Books, Kids, Statistics on September 23, 2015 by xi'an

A few days ago, I answered a self-study question on Cross Validated about the convergence in probability of a/X to 1, given the convergence in probability of X to a. Until I ran out of explanations… I did not see how to detail any further the connection between the two properties! The reader (OP) started from a resolution of the corresponding exercise in Casella and Berger’s Statistical Inference and could not follow the steps, some of which were incorrect. But my attempts at making him uncover the necessary steps failed, presumably because he was sticking to this earlier resolution rather than starting from the definition of convergence in probability. And he could not get over the equality

\mathbb{P}(|a/X_{i} - 1| < \epsilon)=\mathbb{P}\left(a-\frac{a\epsilon}{1 + \epsilon} < X_{i} < a + \frac{a\epsilon}{1 - \epsilon}\right)

which is the central reason why one convergence transfers to the other… I know I know nothing, and even less about pedagogy, but it is (just so mildly!) frustrating to hit a wall beyond which no further explanation can help! Feel free to propose an alternative resolution.
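For the record, here is the algebra behind that equality, spelled out under the assumptions a>0, X_i>0 and 0<ε<1 (my own filling-in, not part of the original exchange):

|a/X_{i} - 1| < \epsilon \iff 1-\epsilon < \frac{a}{X_{i}} < 1+\epsilon \iff \frac{a}{1+\epsilon} < X_{i} < \frac{a}{1-\epsilon}

and the bounds rewrite as a/(1+\epsilon) = a - a\epsilon/(1+\epsilon) and a/(1-\epsilon) = a + a\epsilon/(1-\epsilon), which is the displayed interval.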

Update: A few days later, readers of Cross Validated pointed out that the question had been answered by whuber in a magisterial way. But I wonder if my original reader appreciated this resolution, since he did not pursue the issue.

full Bayesian significance test

Posted in Books, Statistics on December 18, 2014 by xi'an

Among the many comments (thanks!) I received when posting our Testing via mixture estimation paper came the suggestion to relate this approach to the notion of the full Bayesian significance test (FBST) developed by (Julio, not Hal) Stern and Pereira, from São Paulo, Brazil. I thus had a look at this alternative and read the Bayesian Analysis paper they published in 2008, as well as a paper recently published in the Logic Journal of the IGPL. (I could not find what IGPL stands for.) The central notion in these papers is the e-value, which provides the posterior probability that the posterior density is larger than the largest posterior density over the null set. This definition bothers me, first because the null set has measure zero under an absolutely continuous prior (BA, p.82). Hence the posterior density is defined in an arbitrary manner over the null set and the maximum is itself arbitrary. (An issue that invalidates my 1993 version of the Lindley-Jeffreys paradox!) And second because it considers the posterior probability of an event that does not exist a priori, being conditional on the data. This sounds in fact quite similar to Murray Aitkin’s (2009) book Statistical Inference, which uses a posterior distribution of the likelihood function. With the same drawback of using the data twice. And the other issues discussed in our commentary on the book. (As a much-on-the-side remark, the authors incidentally forgot me when citing our 1992 Annals of Statistics paper about decision theory on accuracy estimators…!)
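To make the notion more concrete, here is a hedged toy sketch (my own illustrative setup, not taken from those papers) of the posterior probability underlying the e-value, for a point null H0: θ=θ0 under a Normal posterior:

    # toy sketch of the quantity behind the FBST e-value, for a point null
    # H0: theta = theta0 under a N(m, s^2) posterior (my illustrative choice)
    m <- 1.2; s <- 0.5; theta0 <- 0
    theta <- rnorm(1e5, m, s)              # draws from the posterior
    dens_null <- dnorm(theta0, m, s)       # largest posterior density over {theta0}
    mean(dnorm(theta, m, s) > dens_null)   # posterior mass where density exceeds it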

Nonlinear Time Series just appeared

Posted in Books, R, Statistics, University life on February 26, 2014 by xi'an

My friends Randal Douc and Éric Moulines just published this new time series book with David Stoffer. (David also wrote Time Series Analysis and its Applications with Robert Shumway a year ago.) The book reflects well the research of Randal and Éric over the past decade, namely convergence results on Markov chains for validating both inference in nonlinear time series and algorithms applied to those objects. The latter include MCMC, pMCMC, sequential Monte Carlo, particle filters, and the EM algorithm. While I am too close to the authors to write a balanced review for CHANCE (the book is under review by another researcher, before you ask!), I think this is an important book that reflects the state of the art in the rigorous study of those models. Obviously, the mathematical rigour advocated by the authors makes Nonlinear Time Series a rather advanced book (despite the authors’ reassuring statement that “nothing excessively deep is used”), better suited to PhD students and researchers than to starting graduate students (and definitely not advised for self-study), but the availability of the R code (on the highly personal page of David Stoffer) helps balance the mathematical bent of the first and third parts of the book. A great reference book!

Do we need…yes we do (with some delay)!

Posted in Books, Statistics, University life on April 4, 2013 by xi'an

Sometimes, if not that often, I forget about submitted papers to the point of thinking they are already accepted. This happened with the critical analysis of Murray Aitkin’s book Statistical Inference, already debated on the ‘Og, written with Andrew Gelman and Judith Rousseau, and resubmitted to Statistics and Risk Modeling in November… 2011. As I had received a few months ago a response to our analysis from Murray, I was under the impression it was published or about to be published. Earlier this week I started looking for the reference in connection with the paper I was completing on the Jeffreys-Lindley paradox and could not find it. Checking emails on that topic, I then discovered that the latest one was from November 2011 and the editor, when contacted, confirmed the paper was still under review! As it got accepted only a few hours later, my impression is that it had been misfiled and forgotten at some point, an impression reinforced by an earlier experience with the previous avatar of the journal, Statistics & Decisions. In the 1990s, George Casella and I had had a paper submitted to this journal for a while, which eventually got accepted. Then nothing happened for a year and more, until we contacted the editor, who acknowledged the paper had been misfiled and forgotten! (This was before the electronic processing of papers, so it is quite plausible that the file corresponding to our accepted paper went under a drawer or into the wrong pile and that the editor was not keeping track of those accepted papers. After all, until Series B turned submission into an all-electronic experience, I was using a text file to keep track of daily submissions…) If you knew George, you can easily imagine his reaction when reading this reply… Anyway, all is well that ends well, in that our review and Murray’s reply will appear in Statistics and Risk Modeling, hopefully within a reasonable delay.

CHANCE: special issue on George Casella’s books

Posted in Books, R, Statistics, University life on February 10, 2013 by xi'an

The special issue of CHANCE on George Casella’s books has now appeared and it contains both my earlier post on George’s passing and reviews of several of his books.

Although those books appeared between five and twenty years ago, the reviews are definitely worth reading! (Disclaimer: I am the editor of the Books Review section who contacted friends of George to write the reviews, as well as the co-author of two of those books!) In my (therefore biased) opinion, they provide a worthy evaluation of the depth and impact of those major books, and they also reveal why George was a great teacher, bringing much into the classroom and to his students… (Unless I am confused, the whole series of reviews is available to all, and not only to CHANCE subscribers. Thanks, Sam!)

estimating a constant

Posted in Books, Statistics on October 3, 2012 by xi'an

Paulo (a.k.a. Zen) posted a comment on StackExchange about Larry Wasserman’s paradox of Bayesians and likelihoodists (or likelihood-wallahs, to quote Basu!) being unable to solve the problem of estimating the normalising constant c of the sample density f, known up to a constant:

f(x) = c g(x)

(Example 11.10, page 188, of All of Statistics)

My own comment is that, with all due respect to Larry!, I do not see much appeal in this example, especially as a potential criticism of Bayesians and likelihood-wallahs… The constant c is known, being equal to

1/\int_\mathcal{X} g(x)\text{d}x

If c is the only “unknown” in the picture, given a sample x1,…,xn, then there is no statistical issue whatsoever with the “problem” and I do not agree with the postulate that there exist estimators of c. Nor priors on c (other than the Dirac mass at the above value). This is not in the least a statistical problem but rather a numerical issue. That the sample x1,…,xn can be (re)used through a (frequentist) density estimate to provide a numerical approximation of c

\hat c = \hat f(x_0) \big/ g(x_0)

is a mere curiosity. Not a criticism of alternative statistical approaches: e.g., I could also use a Bayesian density estimate…
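For the curious, here is a toy sketch of that curiosity (my own example, with g the unnormalised standard normal density, so that the true c is 1/√(2π)):

    # toy sketch: recovering c = 1/int g(x) dx from a sample of f = c*g,
    # via a kernel density estimate of f evaluated at a point x0
    g <- function(x) exp(-x^2 / 2)   # unnormalised density; true c = 1/sqrt(2*pi)
    x <- rnorm(1e4)                  # sample from f, here the standard normal
    x0 <- 0
    c_hat <- approx(density(x), xout = x0)$y / g(x0)
    c(c_hat, 1 / sqrt(2 * pi))       # estimate vs. known value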

Furthermore, the estimate provided by the sample x1,…,xn is not of particular interest since its precision is imposed by the sample size n (and converges at a non-parametric rate, which is not a particularly relevant issue!), while I could use importance sampling (or even numerical integration) if I were truly interested in c; see the sketch after the list below. I however find the discussion interesting for many reasons:

  1. it somehow relates to the infamous harmonic mean estimator issue, often discussed on the ‘Og!;
  2. it sheds more light on the paradoxical differences between statistics and Monte Carlo methods, in that statistics is usually constrained by the sample while Monte Carlo methods have more freedom in generating samples (up to some budget limits). It does not make sense to speak of estimators in Monte Carlo methods because there is no parameter in the picture, only “unknown” constants. Both fields rely on samples and probability theory, and share many features, but there is nothing like a “best unbiased estimator” in Monte Carlo integration, as shown by the case of the “optimal importance function” leading to zero variance;
  3. in connection with the previous point, the fascinating Bernoulli factory problem is not a statistical problem because it requires an infinite sequence of Bernoullis to operate;
  4. the discussion induced Chris Sims to contribute to StackExchange!
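And, for completeness, here is the importance sampling route mentioned above, as a hedged sketch (the N(0,2²) proposal is my arbitrary choice, nothing canonical about it):

    # hedged sketch: importance sampling approximation of 1/c = int g(x) dx,
    # using a N(0, 2^2) proposal; c then follows by inversion
    g <- function(x) exp(-x^2 / 2)
    z <- rnorm(1e5, sd = 2)
    inv_c_hat <- mean(g(z) / dnorm(z, sd = 2))   # unbiased for the integral of g
    1 / inv_c_hat                                # compares with c = 1/sqrt(2*pi)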