Through Andrew Jaffe's blog, Leaves on the Line, I became aware of John Gray's column in The Guardian, "What scares the new atheists". Gray's central points against "campaigning" or "evangelical" atheists are that their claim to scientific backing is baseless, that they mostly express a fear of the diminishing influence of the liberal West, and that they cannot produce an alternative form of morality. The title already put me off and the beginning of the column just got worse, as it goes on and on about the eugenic tendencies of some 1930s atheists and about how they influenced Nazi ideology. It is never a good sign in a debate when the speaker strives to link the opposite side with National Socialist ideas and deeds. Even less so in a supposedly philosophical column! (To add insult to injury, Gray also brings Karl Marx into the picture with a similar blame for ethnocentrism…)
The University of Warwick is one of the five UK universities (Cambridge, Edinburgh, Oxford, Warwick, and UCL) to be part of the new Alan Turing Institute. To quote from the University press release, "The Institute will build on the UK's existing academic strengths and help position the country as a world leader in the analysis and application of big data and algorithm research. Its headquarters will be based at the British Library at the centre of London's Knowledge Quarter." The Institute will gather researchers from mathematics, statistics, computer science, and connected fields towards collegial and focussed research, which means in particular that it will hire a fairly large number of researchers in statistics and machine learning in the coming months. The Department of Statistics at Warwick was strongly involved in answering the call for the Institute and my friend and colleague Mark Girolami will be the University's leading figure at the Institute, alas meaning that we will meet even less frequently! Note that the call for the Chair of the Alan Turing Institute is now open, with a deadline of March 15. [As a personal aside, I find that the recognition that

"Alan Turing's genius played a pivotal role in cracking the codes that helped us win the Second World War. It is therefore only right that our country's top universities are chosen to lead this new institute named in his honour."

by the Business Secretary does not absolve the legal system that drove Turing to suicide…]
Even though this is the fourth volume in the Peter Grant series, I did read it first [due to my leaving volume one in my office in Coventry and coming across this one in an airport bookstore in Düsseldorf], an experiment I do not advise anyone to repeat, as it kills some of the magic in Rivers of London [renamed Midnight Riot on the US market, for an incomprehensible reason!, with the series itself still called Rivers of London, but at least they kept the genuine and perfect covers…, unlike some of the other foreign editions!] and makes reading Broken Homes an exercise in guessing. [Note for 'Og's readers suffering from Peter Grant fatigue: the next instalment, taking the seemingly compulsory trip Outside!—witness the Bartholomew series—, is waiting for me in Warwick, so I will not read it before the end of January!]
"I nodded sagely. 'You're right,' I said. 'We need a control.'

'Otherwise, how do you know the variable you've changed is the one having the effect?' I said."
Now, despite this inauspicious entry, I did enjoy Broken Homes [almost!] as much as the other volumes in the series. It mostly takes place in a less familiar [for a French tourist like me] part of London, but remains nonetheless true to its spirit of depicting London as a living organism! Most characters come from the earlier novels, but the core of the story is an infamous housing estate built by a mad architect in Elephant and Castle, not that far from Waterloo [Station], yet sounding almost like a suburb in Aaronovitch's depiction! The author has since added a Google map of the novel's locations on his blog; I wish I had had it at the time [kind of difficult to get on a plane!].
“Search as I might, nobody else was offering free [wifi] connections to the good people of Elephant and Castle.”
The plot itself is centred on this estate [not really a spoiler, is it?] and the end is outstanding in that it is nothing like what one would expect, with or without having read the other volumes. I still had trouble understanding the grand scheme of the main villain, and I have now entirely forgotten the reasons behind the crime scene at the very beginning of Broken Homes. Rereading the pages where the driver, Robert Weil, appears did not help. What was his part in the story?! Despite this [maybe entirely personal] gap, the story holds well together, somewhat cemented by the characters populating the estate, who are endowed with enough depth to make them truly part of the story, even when they last only a few pages [spoiler!]. And, as usual, style, grammar, and humour are at their best!
As posted a few days ago, Mathieu Gerber and Nicolas Chopin will present this afternoon their Read Paper to the Royal Statistical Society on sequential quasi-Monte Carlo sampling. Here are some comments on the paper, preliminary to my written discussion (to be sent before the slightly awkward deadline of January 2, 2015).
Quasi-Monte Carlo methods are definitely not popular within the (mainstream) statistical community, despite regular attempts by respected researchers like Art Owen and Pierre L'Écuyer to induce more use of those methods. It is thus to be hoped that the current attempt will be more successful, as being read to the Royal Statistical Society is a major step towards a wide diffusion. I am looking forward to the collection of discussions that will result from the coming afternoon (and bemoan once again having to miss it!).
“It is also the resampling step that makes the introduction of QMC into SMC sampling non-trivial.” (p.3)
At a mathematical level, the fact that randomised low-discrepancy sequences produce both unbiased estimators and error rates of order

$o(N^{-1/2})$

means that randomised quasi-Monte Carlo methods should always be used instead of regular Monte Carlo methods! So why are they not always used?! The difficulty lies [I think] in expressing the Monte Carlo estimators as a deterministic function of a fixed number of uniforms (and possibly of past simulated values). At least this is why I never attempted to cross the Rubicon into the quasi-Monte Carlo realm… And maybe also why the step had to appear in connection with particle filters, which can be seen as dynamic importance sampling methods and hence enjoy a local iid-ness that relates better to quasi-Monte Carlo integrators than single-chain MCMC algorithms. For instance, each resampling step in a particle filter consists of a repeated multinomial generation, hence could have been turned into quasi-Monte Carlo ages ago. (However, rather than the basic solution drafted in Table 2, lower-variance solutions like systematic and residual sampling have been proposed in the particle literature and I wonder if any of these is a special form of quasi-Monte Carlo.) In the present setting, the authors move further and apply quasi-Monte Carlo to the particles themselves. However, they still assume the deterministic transform
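To make the resampling aside concrete, here is a minimal Python sketch of mine (not from the paper) contrasting plain multinomial resampling with systematic resampling on a toy weight vector: the offspring count of the heaviest particle has binomial variance under the former, while the stratified grid of systematic resampling drives it (here) to zero.

```python
import random

def multinomial_resample(weights, rng):
    """Draw N ancestor indices iid from the weight distribution,
    by inverting the empirical cdf at iid uniforms."""
    n = len(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc)
    idx = []
    for _ in range(n):
        u = rng.random()
        i = 0
        while i < n - 1 and cdf[i] < u:
            i += 1
        idx.append(i)
    return idx

def systematic_resample(weights, rng):
    """A single uniform drives all N draws through a stratified grid
    u0 + k/N, one of the lower-variance schemes mentioned above."""
    n = len(weights)
    u0 = rng.random() / n
    idx, acc, i = [], weights[0], 0
    for k in range(n):
        u = u0 + k / n
        while acc < u:
            i += 1
            acc += weights[i]
        idx.append(i)
    return idx

rng = random.Random(42)
w = [0.5, 0.3, 0.15, 0.05]

def offspring_var(resampler, trials=2000):
    """Monte Carlo estimate of the variance of particle 0's offspring count."""
    counts = [resampler(w, rng).count(0) for _ in range(trials)]
    mean = sum(counts) / trials
    return sum((c - mean) ** 2 for c in counts) / trials

print(offspring_var(multinomial_resample))  # close to N w0 (1 - w0) = 1.0
print(offspring_var(systematic_resample))   # 0.0 here, since N w0 is an integer
```

One can check by hand that with $Nw_0=2$ the systematic scheme always returns exactly two copies of particle 0, whatever the uniform, which is why its variance collapses.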
$x^n_t = \Gamma_t(x^n_{t-1}, u^n_t), \qquad u^n_t \sim \mathcal{U}([0,1]^d),$

which is the very block on which I stumbled each time I contemplated quasi-Monte Carlo… So the fundamental difficulty with the whole proposal is that the generation from the Markov proposal
$m_t(x^n_{t-1}, \mathrm{d}x_t)$

has to be of the above form. Is the strength of this assumption discussed anywhere in the paper? All baseline distributions there are normal. And in the case it does not easily apply, what would the gain be in only using the second step (i.e., quasi-Monte Carlo-ing the multinomial simulation from the empirical cdf)? In a sequential setting with unknown parameters θ, the transform is modified each time θ is modified, and I wonder about the impact on computing cost if the inverse cdf is not available analytically. And I presume simulating the θ's cannot benefit from quasi-Monte Carlo improvements.
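For illustration, here is a hypothetical sketch of mine (not taken from the paper) of a Gaussian random-walk proposal written in the required deterministic form, with the inverse normal cdf playing the role of Γ_t:

```python
import random
from statistics import NormalDist

STD_NORMAL = NormalDist()  # standard normal, with a built-in inverse cdf

def gamma_t(x_prev, u, sigma=1.0):
    """Deterministic transform: maps the previous state and one uniform
    u in (0,1) to a draw from the proposal N(x_prev, sigma^2)."""
    return x_prev + sigma * STD_NORMAL.inv_cdf(u)

# feeding iid uniforms in u recovers plain Monte Carlo;
# feeding a randomised low-discrepancy sequence yields the QMC version
rng = random.Random(0)
x = 0.0
for _ in range(5):
    x = gamma_t(x, rng.random())
print(x)
```

The point of the sketch is that the uniforms, not the Gaussian draws, are the primitive inputs, so the same code runs unchanged on quasi-random inputs; when the inverse cdf has no closed or cheap numerical form, this is exactly where the construction gets costly.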
The paper obviously cannot get into every detail, but I would also welcome indications on the cost of deriving the Hilbert curve, in particular in connection with the dimension d, as the curve has to separate all of the N particles, and on the stopping rule on m that means only Hᵐ is used.
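For what it is worth, the standard bit-twiddling construction of the Hilbert index in dimension d=2 (a generic sketch, not the authors' implementation) costs O(m) operations per point, with m the number of bits per coordinate, and O(dm) per point in general dimension; sorting the N particles along the curve then adds the usual O(N log N):

```python
def hilbert_index(order, x, y):
    """Position of the integer grid point (x, y), with 0 <= x, y < 2**order,
    along the 2-d Hilbert curve of the given order."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/reflect the quadrant so the recursive pattern repeats
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s >>= 1
    return d

# the four cells of the 2x2 grid, listed in curve order
print([hilbert_index(1, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]])
# → [0, 1, 2, 3]
```

Sorting particles by the Hilbert index of their (discretised) coordinates is what reduces the multidimensional resampling step to inverting a one-dimensional empirical cdf.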
Another question concerns the multiplicity of low-discrepancy sequences and their impact on the overall convergence. If Art Owen's (1997) nested scrambling leads to the best rate, as implied by Theorem 7, why should we ever consider another choice?
In connection with Lemma 1 and the sequential quasi-Monte Carlo approximation of the evidence, I wonder at any possible Rao-Blackwellisation using all proposed moves rather than only those accepted. I mean, from a quasi-Monte Carlo viewpoint, is Rao-Blackwellisation easier and is it of any significant interest?
What are the computing costs and gains for forward and backward sampling? They are not discussed in the paper. I also fail to understand the trick at the end of Section 4.2.1, using SQMC on a single vector instead of (t+1) of them. Again assuming inverse cdfs are available? Any connection with Polson et al.'s particle learning literature?
Last questions: what is the (learning) effort for lazy me to move to SQMC? Any hope of stepping outside particle filtering?
“Dr. Walid said that normal human variations were wide enough that you’d need samples of hundreds of subjects to test that. Thousands if you wanted a statistically significant answer.
Low sample size—one of the reasons why magic and science are hard to reconcile.”
This is the third volume in the Rivers of London series, brought back from Gainesville, and possibly the least successful (in my opinion). It indeed takes place underground, and not only in the Underground and the underground sewers of London. This is the literary trick that always irks me in fantasy novels, namely the sudden appearance of a massive underground complex with unsuspected societies large and evolved enough to have reached the Industrial Age. (Sorry if this is too much of a spoiler!)
“It was the various probability calculations that stuffed me—they always do. I’d have been a bad scientist.”
Not that everything is bad in this novel: I still like the massive infodump about London, the style and humour, the return of PC Lesley trying to get over the (literal) loss of her face, and the appearance of new characters. But the story itself, revolving around a murder investigation, is rather shallow, and the (compulsory?) English policeman versus American cop competition is too contrived to be funny. Most of the major plot arc is hidden from this volume, unless there are clues I missed. (For instance, one death from a previous volume that seemed to be ignored at the time is finally explained here.) Definitely not a book to read on its own, as it still relates to and borrows much from the previous volumes, but presumably one to read nonetheless before the next instalment, Broken Homes.
A book from the pile I brought back from Gainesville. And the first I read, mostly during the trip back to Paris. Both because I was eager to see the sequel to Rivers of London and because it was short and easy to carry in a pocket.
“From the figures I have, I believe that two to three jazz musicians have died within twenty-four hours of playing a gig in the Greater London area in the last year.”
“I take it that’s statistically significant?”
Moon Over Soho is the second instalment in the Peter Grant series by Ben Aaronovitch. It would not read well on its own, as it takes over where Rivers of London stopped, even though it reintroduces most of the rules of this magical universe. Most characters are back (except for Beverley, held hostage) and they are trying to cope with what happened in the first instalment. The story is even more centred on jazz than the first volume, with, as a corollary, Peter Grant's parents taking a more important part in the book. The recovering Lesley is hardly seen (for obvious reasons) and barely heard, which leaves a convenient hole in Grant's sentimental life! The book also introduces a major magical villain who will undoubtedly figure in the coming books. Another great story, even though the central plot has a highly predictable ending, and an even more predictable end to the ending, and some parts sound like repetitions of similar parts in the first volume. But the tone, the pace, the style, the humour, the luv' of Lundun, all are there, and that is all that matters! (I again bemoan the missing map of London!)
On December 10, I will alas not travel to London to attend the Read Paper on sequential quasi-Monte Carlo presented by Mathieu Gerber and Nicolas Chopin to the Royal Statistical Society, as I fly instead to Montréal for the NIPS workshops… I am quite sorry to miss this event, as this is a major paper that brings quasi-Monte Carlo methods into mainstream statistics. I will most certainly write a discussion and remind 'Og's readers that contributed discussions (800 words) are welcome from everyone, the deadline for submission being January 2.