Fisher’s claim that uninformative priors don’t exist is about like me saying ice doesn’t exist because it melts whenever I take it out of the freezer.

Sorry, I should have explicitly said one-to-one invertible transformations.

Whenever Frequentists talk about Bayes, you learn a great deal about Frequentism but nothing about Bayes. In the long, sad, sorry history of confusion and ineptitude that is classical statistics, Fisher’s objection to uninformative priors is the most gut-wrenchingly stupid. A simple example illustrates the point.

Suppose we have a space X = {0, 1, 2, …, 1000000} and suppose we are ignorant as to the true value x* in this space. So we put the uniform distribution on X.

Now imagine there is another space F = {0, 1}, related to the first through a transformation function f(x) defined as:

f(0)=0

f(i) = 1 for i=1,2,…,1000000

Now just because we’re ignorant about x* in X doesn’t mean we’re ignorant about f* = f(x*). In fact, the odds strongly favor f* = 1: under the uniform prior on X, the probability that f* = 1 is 1000000/1000001.
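The arithmetic above can be checked in a few lines (a minimal sketch; the numbers come straight from the example):

```python
# Uniform "ignorance" prior on X = {0, 1, ..., 1000000}
N = 1_000_000
p_x = 1 / (N + 1)  # each point in X gets equal mass

# Pushforward onto F = {0, 1} under f(0) = 0, f(i) = 1 for i >= 1
p_f0 = p_x        # only x = 0 maps to 0
p_f1 = N * p_x    # the other 1,000,000 points all map to 1

print(p_f0)  # ≈ 0.000001 — nowhere near the "ignorant" 0.5 on F
print(p_f1)  # ≈ 0.999999
```

So the uniform prior on X, pushed through f, is maximally opinionated about F, which is exactly the point: ignorance about X is not ignorance about every transformation of X.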

Being ignorant about a space X doesn’t mean you’re ignorant about the transformed space F. It would be insane to require any “ignorance” prior on X to also be an “ignorance” prior on F. It’s that simple folks.

My experience with animal ecology (both Bayesian and Frequentist) is that in most cases you need to make an enormous number of assumptions to turn your data into something useful. Good statisticians and ecologists do this well. Bad ones do it poorly.
