Reading Significance is always an enjoyable moment, when I can find time to skim through the articles (before my wife gets hold of it!). This time, I lost my copy between my office and home, and borrowed it from Tom Nichols at Warwick, with four mornings to read it during breakfast. This December issue is definitely interesting, as it contains several introductory articles on astro- and cosmo-statistics! One thing I had not noticed before is how large a fraction of the papers is written by authors of books, giving a quick entry or interview about their book. For instance, I found out that Roberto Trotta had written a general-public book called The Edge of the Sky (All You Need to Know About the All-There-Is), which exposes the fundamentals of cosmology through the 1000 most common words in the English language. So Universe is replaced with All-There-Is! I can understand and to some extent applaud the intention, but it nonetheless makes for a painful read, judging from the excerpt, when researcher and telescope are not part of the accepted vocabulary.

Reading the corresponding article in Significance left me a bit bemused at the reason provided for the existence of a multiverse, i.e., of multiple replicas of our universe, all with different conditions: multiplying the universes makes ours more likely, while it sounds almost impossible on its own! This sounds like a very frequentist argument… and I am not even certain it would convince a frequentist. The other articles in this special astrostatistics section were of a more statistical nature, from estimating the number of galaxies to the chances of a big asteroid impact. I also liked the graphical representation of the meteorite impacts of the past century, because of the impact drawing in the background. And when I checked the link to Carlo Zapponi’s website, I found the picture was a still of a neat animation of meteorites falling since the first report.
In a theme connected with one argument in Dawkins’ The God Delusion, The New York Times just published a piece on the 20th anniversary of the debate between Carl Sagan and Ernst Mayr about the likelihood of the emergence of intelligent life. While 20 years ago there was very little evidence, if any, of the existence of Earth-like planets, the current estimate is about 40 billion… The argument against the high likelihood of other inhabited planets is that the appearance of life on Earth is an accumulation of unlikely events. This is where the paper goes off-road and into the ditch, in my opinion, as it compares the emergence of intelligent (at the level of human) life to being “as likely as if a Powerball winner kept buying tickets and — round after round — hit a bigger jackpot each time”. The latter has a very clearly defined probability of occurring, since “the chance of winning the grand prize is about one in 175 million”. The paper does not tell where the assessment of this probability can be found for the emergence of human life, and I very much doubt it can be justified. Given the myriad of different species found throughout the history of evolution on Earth, some of which evolved and many more of which vanished, I indeed find it hard to believe that evolution towards higher intelligence is the result of a basically zero-probability event. As to conceiving that similar levels of intelligence exist on other planets, it also seems more likely than not that life took on average the same span to appear and to evolve, and thus that other inhabited planets are equally missing the means to communicate across galaxies. Or that the signals they managed to send earlier than ours have yet to reach us. Or will reach Earth long after the last form of intelligent life here has vanished…
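The Powerball analogy can at least be made numerically concrete: taking the quoted one-in-175-million chance at face value, the probability of winning n consecutive independent jackpots is pⁿ, which collapses super-exponentially. A back-of-the-envelope sketch (the per-draw probability is the only figure taken from the piece; everything else is illustrative):

```python
# Quoted single-draw grand-prize probability ("about one in 175 million")
p = 1.0 / 175_000_000

# Probability of n consecutive jackpots, assuming independent draws
for n in (1, 2, 3):
    print(f"{n} consecutive jackpot(s): {p ** n:.3e}")
```

This makes the contrast in the argument plain: the lottery side of the comparison is a well-defined product of known probabilities, whereas nothing comparable is offered for the emergence of intelligent life.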
While I thought the series run by The Stone on the philosophy [or lack thereof] of religions was over, it seems there are more entries. This week, I read with great pleasure the piece written by Tim Maudlin on the role played by recent results in (scientific) cosmology in refuting theist arguments.
“No one looking at the vast extent of the universe and the completely random location of homo sapiens within it (in both space and time) could seriously maintain that the whole thing was intentionally created for us.” T. Maudlin
What I particularly liked in his arguments is the role played by randomness, with an accumulation of evidence of the random nature and location of Earth and human beings, which and who appear more and more at the margins of the Universe rather than the main reason for its existence. And his clear rejection of the argument of fine-tuned cosmological constants as an argument in favour of the existence of a watchmaker. (Argument that was also deconstructed in Seber’s book.) And obviously his final paragraph that “Atheism is the default position in any scientific inquiry”. This may be the strongest entry in the whole series.
I was still feeling poorly this morning, with my brain in a kind of flu-induced haze, so I could not concentrate for a whole talk, which is a shame as I missed most of the contents of the astrostatistics session put together by David van Dyk… Especially the talk by Roberto Trotta I was definitely looking forward to. And the defence of nested sampling strategies for marginal likelihood approximations. Even though I spotted posterior distributions for WMAP and Planck data on the ΛCDM model that reminded me of our own work in this area… Apologies thus to all speakers for dozing in and out, it was certainly not due to a lack of interest!
Sebastian Seehars mentioned emcee (for ensemble Monte Carlo), with a corresponding software nicknamed “the MCMC hammer”, and their own CosmoHammer software. I read the paper by Goodman and Weare (2010) this afternoon during the ski break (if not on a ski lift!). Actually, I do not understand why an MCMC sampler should be affine invariant: a good adaptive MCMC sampler should anyway catch up with the right scale of the target distribution. Other than that, the ensemble sampler reminds me very much of the pinball sampler we developed with Kerrie Mengersen (1995 Valencia meeting), where the target is the product of L copies of the original target,

π(x₁) × π(x₂) × ⋯ × π(x_L),
and a Gibbs-like sampler can be constructed, moving one component (with index k, say) of the L-sample at a time. (Just as in the pinball sampler.) Rather than avoiding all other components (as in the pinball sampler), Goodman and Weare draw a single other component at random (with index j, say) and make a proposal away from it:

x′_k = x_j + ζ (x_k − x_j),
where ζ is a scale random variable with (log-)symmetry around 1. The authors claim improvement over a single-chain Metropolis algorithm, but this of course depends on the type of Metropolis algorithm that is chosen… Overall, I think the criticism of the pinball sampler also applies here: using a product of targets can only slow down convergence. Further, the affine structure of the target support is not a given: highly constrained settings would not cope well with linear transforms, and non-linear reparameterisations would be more efficient…
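The move above can be sketched in a few lines. This is a minimal illustration of the Goodman and Weare (2010) stretch move, not emcee’s actual implementation; the function and argument names (stretch_move_step, log_target, the stretch scale a) are mine, with ζ drawn on [1/a, a] with density proportional to 1/√z, which gives the (log-)symmetry around 1 mentioned above:

```python
import numpy as np

def stretch_move_step(walkers, log_target, a=2.0, rng=None):
    """One sweep of stretch moves over an ensemble of L walkers.

    walkers    : (L, d) array of current positions
    log_target : function mapping a d-vector to the log target density
    a          : stretch scale; ζ lies in [1/a, a]
    """
    rng = np.random.default_rng() if rng is None else rng
    L, d = walkers.shape
    new = walkers.copy()
    for k in range(L):
        # pick a companion walker j ≠ k uniformly at random
        j = rng.integers(L - 1)
        if j >= k:
            j += 1
        # ζ with density g(z) ∝ 1/√z on [1/a, a] (inverse-CDF sampling),
        # so that g(1/z) = z g(z): the (log-)symmetry around 1
        zeta = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
        proposal = new[j] + zeta * (new[k] - new[j])
        # accept with probability min(1, ζ^{d-1} π(y) / π(x_k)),
        # the correction given in Goodman & Weare (2010)
        log_ratio = (d - 1) * np.log(zeta) + log_target(proposal) - log_target(new[k])
        if np.log(rng.random()) < log_ratio:
            new[k] = proposal
    return new
```

Note how the proposal is a one-dimensional move along the segment joining x_k and x_j: the affine invariance comes from never referring to an external scale, only to the spread of the ensemble itself, which is precisely what an adaptive sampler would instead learn on the fly.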
I missed this astrostatistics conference announcement (and the conference itself, obviously!), occurring next door… Actually, I would have had (wee) trouble getting there as I was (and am) mostly stuck at home with a bruised knee and a doctor ban on any exercise in the coming days, thanks to a bike fall last Monday! (One of my 1991 bike pedals broke as I was climbing a steep slope and I did not react fast enough… Just at the right time to ruin my training preparation for the Argentan half-marathon. Again.) Too bad, because there were a lot of talks that were of interest to me!
In the weekend edition of Le Monde I bought when getting out of my plane back from Osaka, and ISBA 2012!, the science leaflet has a (weekly) tribune by a physicist called Marco Zito, which this time discussed the differences between frequentist and Bayesian confidence intervals. While it is nice to see this opposition debated in a general-audience daily like Le Monde, I am not sure the tribune will shed enough light to help the newcomer reach an opinion about the difference! (The previous tribune considering Bayesian statistics was certainly more to my taste!)
Since I cannot find a link to the paper, let me sum up: the core of the tribune is to wonder what the 90% in a 90% confidence interval means. The Bayesian version sounds ridiculous since “there is a single true value of [the parameter] M and it is either in the interval or not” [my translation]. The physicist then goes on to state that the probability is in fact “subjective. It measures the degree of conviction of the scientists, given the data, for M to be in the interval. If those scientists were aware of another measure, they would use another interval” [my translation]. Darn… so many misrepresentations in so few words! First, as a Bayesian, I most often consider there is a true value for the parameter associated with a dataset, but I still use a prior and a posterior that are not point masses, without being incoherent, simply because the posterior only summarizes what I know about the parameter, but is obviously not a property of the true parameter. Second, the fact that the interval changes with the measure has nothing to do with being Bayesian. A frequentist would also change her/his interval with other measures… Third, the Bayesian “confidence” interval is but a tiny (and reductive) part of the inference one can draw from the posterior distribution.
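Incidentally, the frequentist reading of that 90% is just as easy to misstate: it is a property of the procedure over repeated datasets, not of any single interval. A toy simulation makes the point (all settings here — normal data with known variance, sample size, number of replications — are illustrative choices of mine, not taken from the tribune):

```python
import numpy as np

# The TRUE mean is fixed; it is the interval that varies across repeated
# datasets. About 90% of the intervals constructed this way cover the truth.
rng = np.random.default_rng(42)
true_mu, sigma, n, reps = 1.0, 1.0, 25, 10_000
z90 = 1.6449  # standard normal quantile for a two-sided 90% interval

covered = 0
for _ in range(reps):
    x = rng.normal(true_mu, sigma, size=n)
    half = z90 * sigma / np.sqrt(n)
    covered += (x.mean() - half <= true_mu <= x.mean() + half)

print(covered / reps)  # close to 0.90
```

Each realised interval either contains the true value or it does not; the 90% is the long-run frequency with which the recipe succeeds, which is exactly the reading the tribune attributes to neither camp correctly.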
From this delicate start, things do not improve in the tribune: the frequentist approach is objective and not contested by Marco Zito, as it sounds eminently logical. Kant is associated with Bayes and Plato with the frequentist approach, and “religious wars” are mentioned, with both perspectives debating endlessly about the validity of their interpretation (is this truly the case? In the few cosmology papers I modestly contributed to, referees’ reports never objected to the Bayesian approach…) The conclusion makes one wonder what the overall point of this tribune is: superficial philosophy (“the debate keeps going on and this makes sense since it deals with the very nature of research: can we know and speak of the world per se or is it forever hidden to us? (…) This is why doubt and even distrust apply about every scientific result and also in other settings.”) or criticism of statistics (“science (or art) of interpreting results from an experiment”)? (And to preempt a foreseeable question: no, I am not writing to the journal this time!)
“Space,” it says, “is big. Really big. You just won’t believe how vastly, hugely, mindbogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space, listen…” The Hitchhiker’s Guide to the Galaxy, Douglas Adams
“There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.” The Hitchhiker’s Guide to the Galaxy, Douglas Adams
Following a link on Science Daily when looking at this 64 kcal mystery, I found an interesting announcement about the most complete simulation of the evolution of the Universe from the Big Bang till now. The cosmology research unit in charge of the project is furthermore called DEUS (for Dark Energy Universe Simulation!), mostly located at Université Paris-Diderot, and its “goal is to investigate the imprints of dark energy on cosmic structure formation through high-performance numerical simulations”. It just announced the “simulation of the full observable universe for the concordance ΛCDM model”, which allows for the comparison of several cosmological models. (Data is freely available.) Besides the sheer scientific appeal of the project, the simulation side is also fascinating, although quite remote from Monte Carlo principles, in that the approach relies on very few repetitions of the simulation. The statistics are based on a single simulation, for a completely observed (simulated) Universe.
“If life is going to exist in a Universe of this size, then the one thing it cannot afford to have is a sense of proportion…” The Hitchhiker’s Guide to the Galaxy, Douglas Adams
The amounts involved in this simulation are simply mindboggling: 92 000 CPUs, 150 PBytes of data, 2 (U.S.) quadrillion floating-point operations per second (2 PFlop/s), the equivalent of 30 million computing hours, each particle has the size of the Milky Way, and so on… Here is a videoed description of the project (make sure to turn the sound off if, like me, you simply and definitely hate Strauss’ music, and even if you like it, since the pictures do not move at the same pace as the music!):