Is non-informative Bayesian analysis dangerous for wildlife???

Subhash Lele recently arXived a short paper entitled “Is non-informative Bayesian analysis appropriate for wildlife management: survival of San Joaquin Kit fox and declines in amphibian populations”. (Lele has been mentioned several times on this blog in connection with his data-cloning approach that mostly clones our own SAME algorithm.)

“The most commonly used non-informative priors are either the uniform priors or the priors with very large variances spreading the probability mass almost uniformly over the entire parameter space.”

The main goal of the paper is to warn, or even better “to disabuse the ecologists of the notion that there is no difference between non-informative Bayesian inference and likelihood-based inference and that the philosophical underpinnings of statistical inference are irrelevant to practice.” The argument advanced by Lele is simply that two different parametrisations should lead to two compatible priors and that, if they do not, this exhibits an unacceptable impact of the prior modelling on the resulting inference, while likelihood-based inference [obviously] does not depend on the parametrisation.

The first example in the paper is a dynamic linear model for a fox population series, using either a uniform U(0,1) prior on the parameter b or a Ga(100,100) prior on -a/b. (The normal prior on a is the same in both cases.) I do not find the opposition between the two posteriors in the least surprising, as the modelling starts by assuming different supports for the parameter b. And both priors are highly “informative” in that there is no intrinsic constraint on b that could justify the (0,1) support, as illustrated by the second choice, under which b is unconstrained and varies over (-15,15) or (-0.0015,0.0015) depending on how the Ga(100,100) prior is parametrised.
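To make the incompatibility concrete, here is a minimal Monte Carlo sketch (in Python, mine rather than the paper's): the N(0, 5²) prior on a is a purely illustrative assumption, chosen so that the induced ranges roughly match the ones quoted above, and the two Gamma draws correspond to reading the second 100 in Ga(100,100) as a rate or as a scale.

    # sketch: priors induced on b by the two modelling choices (illustrative only)
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # choice 1: b ~ U(0,1), support restricted to the unit interval
    b_uniform = rng.uniform(0.0, 1.0, n)

    # choice 2: -a/b ~ Ga(100,100), with an assumed N(0, 5^2) prior on a
    a = rng.normal(0.0, 5.0, n)
    g_rate = rng.gamma(shape=100, scale=1 / 100, size=n)   # Ga(100, rate=100)
    g_scale = rng.gamma(shape=100, scale=100, size=n)      # Ga(100, scale=100)
    b_rate = -a / g_rate     # spreads roughly over (-15, 15)
    b_scale = -a / g_scale   # spreads roughly over (-0.0015, 0.0015)

    for name, b in [("U(0,1)", b_uniform), ("Ga rate", b_rate), ("Ga scale", b_scale)]:
        print(name, np.percentile(b, [0.5, 99.5]))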

[Figure: the uniform and logit-normal priors induced on p in Lele's second example.]

The second model is even simpler, as it involves a single Bernoulli probability p for the observations, plus a second Bernoulli driving the replicates when the first Bernoulli variate equals one, i.e.,

$$Y_i \sim \mathfrak{B}(p) \qquad O_{ij} \mid Y_i = 1 \sim \mathfrak{B}(q)$$

and the paper opposes a uniform prior on p and q to a normal N(0,10^3) prior on the logit transforms of p and q. [With an obvious typo at the top of page 10.] As shown on the above graph, the two priors on p are immensely different and should thus lead to different posteriors in a weakly informative setting such as a Bernoulli experiment, even with a few hundred individuals. A somewhat funny aspect of this study is that Lele opposes the uniform prior to the Jeffreys Be(.5,.5) prior, dismissing the latter as “nowhere close to looking like what one would consider a non-informative prior”, without noticing that the normal prior on the logit scale leads to an even more peaked prior on p…
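As a similar sketch (again Python, not the paper's code), one can simulate the prior that the N(0,10^3) prior on logit(p) induces on p and compare it with the uniform and Jeffreys Be(.5,.5) priors: the logit-normal choice puts close to 90% of its mass within 0.01 of the boundaries, far more than the Jeffreys prior Lele dismisses.

    # sketch: boundary mass of three priors on a Bernoulli probability p
    import numpy as np
    from scipy.special import expit   # inverse logit

    rng = np.random.default_rng(1)
    n = 100_000

    p_uniform = rng.uniform(size=n)
    p_jeffreys = rng.beta(0.5, 0.5, size=n)
    p_logitnormal = expit(rng.normal(0.0, np.sqrt(1e3), size=n))   # N(0,10^3) on logit(p)

    for name, p in [("uniform", p_uniform),
                    ("Jeffreys Be(.5,.5)", p_jeffreys),
                    ("logit-normal", p_logitnormal)]:
        print(f"{name:20s} P(p<0.01 or p>0.99) = {np.mean((p < 0.01) | (p > 0.99)):.3f}")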

“Even when Jeffreys prior can be computed, it will be difficult to sell this prior as an objective prior to the jurors or the senators on the committee. The construction of Jeffreys and other objective priors for multi-parameter models poses substantial mathematical difficulties.”

I find it rather surprising that a paper can be dedicated to the comparison of two arbitrary prior distributions on two fairly simplistic models towards the global conclusion that “non-informative priors neither ‘let the data speak’ nor do they correspond (even roughly) to likelihood analysis.” In this regard, the earlier critical analysis of Seaman et al., to which my PhD student Kaniav Kamary and I replied, had a broader scope.

5 Responses to “Is non-informative Bayesian analysis dangerous for wildlife???”

  1. The naivety started here: R.A. Fisher objected to the use of flat priors because of their lack of invariance under transformation. Yet there can only be finitely many different parameter values generating wildlife in this universe, and any convenient continuous prior is only an approximation. Therefore the prior and posterior probabilities we actually need/want to worry about are invariant to any one-to-one transformation. David Draper, while agreeing with this, did worry about the spacing between the parameter values (likely due to the effect of this on losses and decisions?).

    • Keith,

      Whenever Frequentists talk about Bayes, you learn a great deal about Frequentism, but nothing about Bayes. In the long, sad, sorry history of confusion and ineptitude that is classical statistics, Fisher’s objection to uninformative priors is the most gut-wrenchingly stupid. A simple example illustrates the point.

      Suppose we have a space X ={0,1,2,…, 1000000} and suppose we are ignorant as to the true value x* in this space. So we put the uniform distribution on X.

      Now imagine there is another space F = {0,1}, related to the first through a transformation function f(x) defined as:

      f(0)=0
      f(i) = 1 for i=1,2,…,1000000

      Now just because we’re ignorant about x* in X doesn’t mean we’re ignorant about f*=f(x*). In fact, odds strongly favor f*=1.

      Being ignorant about a space X doesn’t mean you’re ignorant about the transformed space F. It would be insane to require any “ignorance” prior on X to also be an “ignorance” prior on F. It’s that simple folks.
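      A quick Python sketch of that calculation (the uniform prior on X induces essentially a point mass on f* = 1):

          import numpy as np

          X = np.arange(1_000_001)                # the space {0, 1, ..., 1000000}
          prior_X = np.full(X.size, 1 / X.size)   # uniform "ignorance" prior on X
          F = (X > 0).astype(int)                 # f(0) = 0, f(i) = 1 otherwise
          print(prior_X[F == 1].sum())            # ~0.999999: ignorance on X is not ignorance on F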

      • Tee ent:

        Sorry, I should have explicitly said one-to-one invertible transformations.

      • The same thing applies, but it’s more subtle. Under the transformation the probability mass gets squished and spread out relative to what it was originally. I exaggerated the situation to make it obvious, but you definitely don’t want even one-to-one transformations preserving “uninformative” in general.

        Fisher’s claim that uninformative priors don’t exist is rather like me saying ice doesn’t exist because it melts whenever I take it out of the freezer.

  2. I read this paper when it arrived and thought it was rubbish for all the reasons you suggest above (pro tip: if you want to make a comparison between two things, it behoves you to make sure the things a) are comparable and b) correspond to what you say they are!).

    My experience with animal ecology (both Bayesian and Frequentist) is that in most cases you need to make an enormous number of assumptions to turn your data into something useful. Good statisticians and ecologists do this well. Bad ones do this poorly.
