Evidence and evolution (2)

“When dealing with natural things we will, then, never derive any explanations from the purpose which God or nature may have had in view when creating them and we shall entirely banish from our philosophy the search for final causes. For we should not be so arrogant as to suppose that we can share in God’s plans.” René Descartes, Les Principes de la Philosophie, Livre I, 28

I have now read the second chapter of Evidence and Evolution: The Logic Behind the Science by Elliott Sober, the very chapter whose title is “Intelligent design”… As posted earlier, I was loath to get into this chapter for fear of being dragged into a nonsensical debate. In fact, the chapter is written from a purely philosophical/logical perspective, while I was looking for statistical arguments given the tenor of the first chapter (which reviewed the differences between Bayesians, likelihoodists (sic!), and frequentists). There is therefore very little I can contribute to the debate, being no philosopher of science. I find the introduction of the chapter interesting in that it relates the creationism/“intelligent design” thesis to a long philosophical tradition (witness the above quote from Descartes) rather than to the current political debate about “teaching” creationism in US and UK schools. The disputation of older theses like Paley’s watch however takes up most of the chapter, which is disappointing in my humble opinion. In a sense, Sober mostly states the obvious when arguing that, once gods or other supernatural beings enter the picture, they can account for any observed fact with the highest likelihood while being unable to predict any fact not yet observed. I would have preferred to see hard scientific facts and the use of statistical evidence, even of the AIC sort!
The call to Popper’s testability does not bring further arguments, because Sober also defends the thesis that even the theory of “intelligent” design is falsifiable… In Section 2.19 on model selection, the comparison between a single-parameter model and a one-million-parameter model hints at Ockham’s razor, but Sober misses a major aspect of Bayesian analysis, namely that, by virtue of hyperpriors and hyperparameters, observations about one group of parameters also bring information about another group of parameters when the two are related via a hyperprior (as in small area estimation). Since the author never discusses the use of priors over the model parameters and relies instead on plug-in estimates, he cannot take advantage of the marginal posterior dependence between the different groups of parameters.
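This borrowing of strength through a hyperprior can be illustrated with a toy small-area example (a minimal sketch with made-up numbers; the empirical-Bayes moment estimates of the hyperparameters stand in here for a full hyperprior, which is itself a plug-in shortcut of the kind criticised above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small-area setting: J groups, few observations each,
# group means tied together by a common hyperprior N(mu, tau^2).
J, n = 8, 5
true_means = rng.normal(10.0, 2.0, size=J)
data = rng.normal(true_means[:, None], 4.0, size=(J, n))

sigma2 = 16.0             # known observation variance
ybar = data.mean(axis=1)  # per-group sample means

# Crude moment estimates of the hyperparameters mu and tau^2,
# computed from ALL groups at once.
mu_hat = ybar.mean()
tau2_hat = max(ybar.var(ddof=1) - sigma2 / n, 1e-6)

# Posterior means shrink each group toward mu_hat; the amount of
# shrinkage depends on hyperparameters estimated from every group,
# so each group's data informs every other group's estimate.
w = (sigma2 / n) / (sigma2 / n + tau2_hat)
post_means = w * mu_hat + (1 - w) * ybar

print(np.round(post_means, 2))
```

Each shrunken estimate sits strictly between the raw group mean and the overall mean, which is exactly the cross-group flow of information that a collection of unrelated plug-in estimates cannot reproduce.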
