## how can we tell someone “be natural”? [#2]

Posted in Books, Kids, pictures, University life with tags , , , , , , , , , on November 17, 2013 by xi'an

Following my earlier high school composition (or, as my daughter would stress, a first draft of vague ideas towards a composition!), I came upon an article in the Science leaflet of Le Monde (dated October 25) by the physicist Marco Zito (already commented upon on the 'Og): “How natural is Nature?“. The following is my (commented) translation of the column. I cannot say I understand more than half of the words, or hardly anything of its meaning, although checking some Wikipedia entries helped. (I wonder how many readers got to the end of this tribune.)

The above question is related to physics in that (a) the electroweak interaction scale is about the mass of the Higgs boson, at which scale [order of 100 GeV] the electromagnetic and the weak forces are of the same intensity. And (b) there exists a gravitation scale, Planck’s mass, which is the energy [about 1.2209×10¹⁹ GeV] where gravitation [general relativity] and quantum physics must be considered simultaneously. The difficulty is that this second fundamental scale differs from the first one, being larger by 17 orders of magnitude [so what?!]. The difference is puzzling, as a world with two fundamental scales that are so far apart does not sound natural [how does he define natural?]. The mass of the Higgs boson depends on the other elementary particles and on the fluctuations of the related fields. Those fluctuations can be very large, of the same order as Planck’s scale. The sum of all those terms [which terms, dude?!] has no reason to be weak. In most possible universes, the mass of this boson should thus compare with Planck’s mass, hence a contradiction [uh?!].

And then enters this apparently massive probabilistic argument:

If you ask passersby to each select a number between two large bounds, like –10000 and 10000, it is very unlikely to obtain exactly zero as the sum of those numbers. So if you observe zero as the sum, you will consider that the result is not «natural» [I'd rather say that the probabilistic model is wrong]. The physicists’ reasoning so far was «Nature cannot be unnatural. Thus the problem of the mass of Higgs’ boson must have a solution at energy scales that can be explored by CERN. We could then uncover a new and interesting physics». Sadly, CERN has not (yet?) discovered new particles or new interactions. There is therefore no «natural» solution. Some of us imagine an unknown symmetry that bounds the mass of Higgs’ boson.
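For the sceptical reader, the argument is easy to check by simulation, at least in a scaled-down version. Here is a minimal sketch of mine, with hypothetical bounds of ±100 and ten passersby rather than the column's ±10000:

```
set.seed(1)
reps = 10^6
k = 10   # number of passersby
# each replication: k uniform integers on [-100, 100], summed
sums = colSums(matrix(sample(-100:100, k*reps, replace=TRUE), nrow=k))
p_zero = mean(sums == 0)
p_zero  # a fraction of a percent: an exact zero is indeed "unnatural"
```

An exact zero occurs with probability of order 1/σ of the sum, so it only gets rarer as the bounds or the number of passersby grow.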

And a conclusion that could work for a high school philosophy homework:

This debate is typical of how science proceeds. Current theories are used to predict beyond what has been explored so far. This extrapolation works for a little while, but some facts eventually come to invalidate them [sounds like philosophy of science 101, no?!]. Hence the importance of validating our theories through experiment, so as to abstain from attributing to Nature discourses that only reflect our own prejudices.

This Le Monde Science leaflet also had a short entry on a meteorite called Hypatia, because it was found in Egypt, home to the 4th century Alexandria mathematician Hypatia. And a book review of (the French translation of) Perfect Rigor, a second-hand biography of Grigory Perelman by Masha Gessen. (Terrible cover, by the way: don’t they know at Houghton Mifflin that the integral sign is an elongated S, for sum, and not an f?! Andrew and I happened to discuss and deplore, the other day, this ridiculous tendency to mix wrong math symbols and Greek letters in the titles of general-public math books. The title itself is not much better: what is imperfect rigor?!) And the Le Monde math puzzle #838.

## Confronting intractability in Bristol

Posted in pictures, Running, Statistics, Travel, University life, Wines with tags , , , , , , , , , , , , on April 18, 2012 by xi'an

Here are the (revised) slides of my talk this afternoon at the Confronting Intractability in Statistical Inference workshop in Bristol, supported by SuSTain. The novelty is in the final part, where we managed to apply our result to a three-population genetic scenario using one versus two δμ summary statistics. This should be the central new example in the forthcoming revision of our paper for Series B.

More generally, the meeting is very interesting, with great talks and highly relevant topics: e.g., yesterday, I finally understood what transportation models meant (at the general level) and how they related to copula modelling, saw a possible connection from computer models to ABC, got inspiration to mix Gaussian processes with simulation output, and listened to the whole exposition of Simon Wood’s alternative to ABC (much more informative than the four pages of his paper in Nature!). Despite (or due to?) sampling Bath ales last night, I even woke up early enough this morning to run over and under the Clifton suspension bridge, in a slight drizzle that could not really be characterized as rain…

## not yet another musk ox in my garden!

Posted in Statistics, University life with tags , , , , , , , , on November 22, 2011 by xi'an

Here is a map of the estimated range of the musk ox at various epochs, which means one could indeed have visited my garden about 30,000 years ago… This map is extracted from a very interesting paper by Lorenzen et al. that just appeared in Nature. The main theme of the paper is to determine whether or not human intervention had an impact on the total or partial extinction of species on Earth. Their conclusion is that this only seems to be the case for European horses and bison. This came to my attention because of Scott Sisson’s tweet on the paper. Given that it involves ABC technology, in particular ABC model choice based solely on four summary statistics (nucleotide diversity, Tajima’s D, haplotypic diversity, and Fst) and posterior probabilities, I wonder if this fits our requirements for convergence.
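As a reminder of how the technique operates, ABC model choice approximates posterior model probabilities by simulating parameters and data from each candidate model and keeping the simulations whose summary statistics fall closest to the observed ones. Here is a toy sketch (entirely mine, with two hypothetical location models and two summaries, rather than the four statistics of the paper):

```
set.seed(2)
n = 50
obs = rnorm(n)                       # "observed" data, truly Normal
s_obs = c(median(obs), mad(obs))     # two summary statistics

N = 2*10^4
m = sample(1:2, N, replace=TRUE)     # uniform prior over the two models
theta = runif(N, -5, 5)              # flat prior on the location
dist = numeric(N)
for (i in 1:N) {
  # model 1 is Normal, model 2 is Cauchy, both centred at theta[i]
  z = if (m[i]==1) rnorm(n, theta[i]) else rcauchy(n, theta[i])
  dist[i] = sum((c(median(z), mad(z)) - s_obs)^2)
}
eps = quantile(dist, 0.01)           # keep the closest 1% of simulations
post1 = mean(m[dist <= eps] == 1)    # ABC estimate of P(Normal | data)
post1
```

Whether such a frequency of accepted model indices converges to the true posterior probability is exactly the question raised by the choice of summaries.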

## Testing and significance

Posted in R, Statistics, University life with tags , , , , , , , on September 13, 2011 by xi'an

Julien Cornebise pointed me to this Guardian article that itself summarises the findings of a Nature Neuroscience article I cannot access. The core of the paper is that a large portion of comparative studies conclude there is a significant difference between protocols when one protocol’s result is significantly different from zero and the other one(s) is (are) not… From a frequentist perspective (I am not even addressing the Bayesian aspects of using those tests!), under the null hypothesis that both protocols induce the same null effect, the probability of wrongly deriving a significant difference can be evaluated by

```
x=rnorm(10^6)  # protocol 1 effect estimates under the null
y=rnorm(10^6)  # protocol 2 effect estimates under the null
# x not significant, y significant, yet the difference x-y not significant:
sum((abs(x)<1.96)*(abs(y)>1.96)*(abs(x-y)<1.96*sqrt(2)))
# [1] 31805
# symmetric case, x significant and y not:
sum((abs(x)>1.96)*(abs(y)<1.96)*(abs(x-y)<1.96*sqrt(2)))
# [1] 31875
(31805+31875)/10^6
# [1] 0.06368
```

which rises to a 26% probability of error when x is drifted by 2! (The maximum error is just above 30%, when x is drifted by around 2.6…)
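The drifted case takes only a couple more lines. Here is a minimal sketch, assuming the drift means x is shifted by 2 while y stays at the null:

```
set.seed(42)
n = 10^5
x = rnorm(n, mean=2)  # protocol 1, drifted by 2
y = rnorm(n)          # protocol 2, still null
# one test significant, the other not, yet the difference test not significant:
err = (abs(x)<1.96)*(abs(y)>1.96)*(abs(x-y)<1.96*sqrt(2)) +
      (abs(x)>1.96)*(abs(y)<1.96)*(abs(x-y)<1.96*sqrt(2))
err_rate = mean(err)
err_rate  # probability of error under this drift
```

Varying the `mean=` argument traces the error as a function of the drift.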

(This post was written before Super Andrew posted his own “difference between significant and not significant“! My own of course does not add much to the debate.)

## Truly random [again]

Posted in Books, R, Statistics, University life with tags , , , , , , , , on December 10, 2010 by xi'an

“The measurement outputs contain at the 99% confidence level 42 new random bits. This is a much stronger statement than passing or not passing statistical tests, which merely indicate that no obvious non-random patterns are present.” arXiv:0911.3427

As often, I bought La Recherche at the station newsagent for the wrong reason! The cover of the December issue was about “God and Science” and I thought this issue would bring some interesting and deep arguments in connection with my math and realism post. The debate is very short, does not go into any depth, reproduces the Hawking quote that started the earlier post, and recycles the same graph about cosmology I used last summer in Vancouver! However, there are alternative interesting entries about probabilistic proof checking in Mathematics and truly random numbers… The first part is on an ACM paper on the PCP theorem by Irit Dinur, but is too terse as is (while the theory behind it presumably escapes my abilities!). The second part is about a paper in Nature published by Pironio et al. and arXived as well. It is entitled “Random numbers certified by Bell’s Theorem” and is also one of the laureates of the La Recherche prize this year. I was first annoyed by the French coverage of the paper, mentioning that “a number was random with a probability of 99%” (?!) and that “a sequence of numbers is perfectly random” (re-?!). The original paper is however stating the same thing, hence stressing the different meaning associated to randomness by those physicists, “the unpredictable character of the outcomes” and “universally-composable security”. The above “probability of randomness” is actually a p-value (associated with the null hypothesis that Bell’s inequality is not violated) that is equal to 0.00077. (So the above quote is somehow paradoxical!) The huge apparatus used to produce those random events is not very efficient: on average, seven binary random numbers are detected per hour… A far cry from the “truly random” generator produced by Intel!

P.S. As a coincidence, Julien Cornebise pointed out to me that there is a supplement in the journal about “Le Savoir du Corps” [“Knowledge of the Body”] which is in fact handled by the pharmaceutical company Servier, currently under investigation for its drug Mediator… A very annoying breach of basic journalistic ethics, in my opinion!

## Anathem

Posted in Books, University life with tags , , , , , , , , , , , on November 20, 2010 by xi'an

A colleague of mine at Dauphine gave me Anathem to read a few weeks ago. I had seen it in a bookstore once and planned to read it, so this was a perfect opportunity. I read through it slowly at first and then with more and more eagerness as the story built up, spending a fair chunk of the past evenings (and Metro rides) on finishing it. Anathem is a wonderful book, especially for mathematicians, and while it could still qualify as a science-fiction book, it blurs the frontiers between the genres of science-fiction, speculative fiction, documentary writing and epistemology… Just imagine any other sci’fi’ book being reviewed in Nature! Still, the book was awarded the 2009 Locus SF Award, so it has true sci’fi’ characteristics, including Clarke-ian bouts of space opera with a Rama-like vessel popping out of nowhere. But this is not the main feature that makes Anathem so unique and fascinating.

“The Adrakhonic theorem, which stated that the square of a right triangle hypotenuse was equal to the sum of the squares of the other two sides…” (p. 128)