Archive for Banff

Rundlestone Session

Posted in Mountains, pictures, Travel, Wines on May 20, 2017 by xi'an

Pu’erh stuffed tangerine

Posted in Statistics on April 29, 2017 by xi'an

The Hanging Tree

Posted in Books, Kids, Travel on March 25, 2017 by xi'an

This is the sixth volume of Ben Aaronovitch's Rivers of London series, which features PC Peter Grant of London's Metropolitan Police, specialising in paranormal crime, joining a line of magicians started by Isaac Newton, and aided by water deities. Although this English magic sleuthing series does not compare with the superlative Jonathan Strange & Mr Norrell single book, The Hanging Tree remains highly enjoyable, maybe more for its style and vocabulary than for the detective story itself, which does not sound completely coherent (unless I read it too quickly during the wee hours in Banff last week). And it does not reveal much about this part of London. Still a pleasure to read, as the long term pattern of Aaronovitch's universe slowly unfolds and some characters gain more substance and depth.

Jonathan Strange & Mr Norrell [BBC One]

Posted in Books, pictures, Travel on March 18, 2017 by xi'an

After discussing Jonathan Strange & Mr Norrell with David Frazier in Banff, where I spotted him reading this fabulous book, I went for a look at the series BBC One made out of this great novel. And got so hooked on it that I binge-watched the whole series of 7 episodes over three days..! I am utterly impressed by the BBC investing so much into this show, rendering most of the spirit of the book and not only the magical theatrics. The complex [and nasty] personality of Mr Norrell and his petit-bourgeois quest for respectability is beautifully exposed, leading him to lie and steal and come close to murder [directly or by proxy], in a pre-Victorian and anti-Romantic urge to get away from magical things of the past, “more than 300 years ago”. Jonathan Strange's own Romantic inclinations are obvious meanwhile, including the compulsory travel to Venezia [even though the BBC could only afford Croatia, it seems!]. The series actually made clear some points I had missed in the novel, presumably by rushing through it, like the substitution of Strange's wife by the moss-oak doppelganger created by the fairy king. The enslavement of Stephen, servant of Lord Pole and once and future king, by the same fairy is also superbly rendered.

While not everything in the series is perfect, with in particular the large scale outdoor scenes coming too close to a video-game rendering (as in the battle of Waterloo, which boils down to a backyard brawl!), the overall quality of the show [the Frenchmen there actually speak French, with no accent!] and its adherence to the spirit of Susanna Clarke's novel make it an example of the BBC's tradition of excellence. (I just wonder at the perspective of a newcomer who would watch the series with no prior exposure to the book!)

CORE talk at Louvain-la-Neuve

Posted in Statistics on March 16, 2017 by xi'an

Tomorrow, I will give a talk at the econometrics and finance seminar of CORE, in Louvain-la-Neuve, Belgium. Here are my slides, recycled from several earlier talks and from Judith's slides in Banff:


Mnt Rundle [jatp]

Posted in Statistics on March 3, 2017 by xi'an

machine learning-based approach to likelihood-free inference

Posted in Statistics on March 3, 2017 by xi'an

[polyptych painting within the TransCanada Pipeline Pavilion, Banff Centre, Banff, March 21, 2012]

At ABC'ory last week, Kyle Cranmer gave an extended talk on estimating the likelihood ratio by classification tools, connected with a 2015 arXival. The idea is that the likelihood ratio is invariant by a transform s(·) that is monotonic with the likelihood ratio itself. It took me a few minutes (after the talk) to understand what this meant, because it is a transform that actually depends on the parameter values in the denominator and the numerator of the ratio. For instance, the ratio itself is a proper transform, in the sense that the likelihood ratio based on the distribution of the likelihood ratio under both parameter values is the same as the original likelihood ratio. Or the (naïve Bayes) probability version of the likelihood ratio. Which reminds me of the invariance in Fearnhead and Prangle (2012) of the Bayes estimate given x and of the Bayes estimate given the Bayes estimate. I also feel there is a connection with Geyer's logistic regression estimate of normalising constants, mentioned several times on the 'Og. (The paper mentions this connection in its conclusion.)
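To make the classification trick concrete, here is a minimal toy sketch, mine rather than the authors': with balanced samples simulated from both parameter values, the classifier score s(x) approximates p(x|θ¹)/[p(x|θ¹)+p(x|θ²)], so that s/(1−s) recovers the likelihood ratio. The Gaussian simulator and all names are hypothetical stand-ins, not anything from the paper.

```python
# toy illustration of classifier-based likelihood ratio estimation
# (a hypothetical sketch, not the implementation in the paper)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
theta1, theta2 = 0.0, 1.0   # the fixed pair of parameter values
n = 10_000

# simulate balanced samples from both models (unit-variance Gaussians here)
x1 = rng.normal(theta1, 1.0, n)
x2 = rng.normal(theta2, 1.0, n)
X = np.concatenate([x1, x2]).reshape(-1, 1)
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 means "from theta1"

clf = LogisticRegression().fit(X, y)

def ratio_hat(x):
    # s/(1-s) approximates p(x|theta1)/p(x|theta2) under balanced sampling
    s = clf.predict_proba(np.atleast_2d(x).T)[:, 1]
    return s / (1.0 - s)

# sanity check against the exact Gaussian likelihood ratio
x0 = np.array([0.5])
exact = np.exp(-0.5 * (x0 - theta1) ** 2 + 0.5 * (x0 - theta2) ** 2)
print(ratio_hat(x0), exact)   # both close to 1 at x = 0.5
```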

Now, back to the paper (which I read the night after the talk to get a global perspective on the approach): the ratio is of course unknown and the implementation therein is to estimate it by a classification method, thus estimating the probability for a given x to be from one versus the other distribution. Once this estimate is produced, its distributions under both values of the parameter can be estimated by density estimation, hence an estimated likelihood ratio can be produced, with better prospects since this is a one-dimensional quantity. An objection to this derivation is that it intrinsically depends on the pair of parameters θ¹ and θ² used therein: changing to another pair requires a new ratio, new simulations, and new density estimations. When moving to a continuous collection of parameter values, in a classical setting, the likelihood ratio involves two maxima, which can be formally represented in (3.3) as a maximum over a likelihood ratio based on the estimated densities of likelihood ratios, except that each evaluation of this ratio seems to require another simulation. (Which makes the comparison with ABC more complex than presented in the paper [p.18], since ABC's major computational hurdle lies in the production of the reference table and, to a lesser degree, of the local regression, both items that can be recycled for any new dataset.) A smoothing step is then to include the pair of parameters θ¹ and θ² as further inputs of the classifier. There still remains the computational burden of simulating enough values of s(x) towards estimating its density for every new value of θ¹ and θ². And while the projection from x to s(x) does effectively reduce the dimension of the problem to one, the method still aims at estimating the density of x with some degree of precision, so it cannot escape the curse of dimensionality. The sleight of hand resides in the classification step, since it is equivalent to estimating the likelihood ratio. I thus fail to understand how and why a poor classifier can then lead to a good approximation of the likelihood ratio “obtained by calibrating s(x)” (p.16), where calibrating means estimating the density.
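As a complement, here is my own hypothetical rendering of this calibration step, continuing the toy sketch above: the densities of the classifier score s(x) under both parameter values are estimated by (one-dimensional) kernel density estimation, and the calibrated likelihood ratio is the ratio of these two estimates. New values of θ¹ and θ² would indeed call for fresh simulations and fresh density estimates.

```python
# calibration of s(x): estimate the one-dimensional densities of the
# classifier score under both parameter values and take their ratio
# (continues the toy sketch above; still a hypothetical illustration)
from scipy.stats import gaussian_kde

# scores of fresh simulations under each parameter value
s1 = clf.predict_proba(rng.normal(theta1, 1.0, n).reshape(-1, 1))[:, 1]
s2 = clf.predict_proba(rng.normal(theta2, 1.0, n).reshape(-1, 1))[:, 1]
kde1, kde2 = gaussian_kde(s1), gaussian_kde(s2)

def calibrated_ratio(x):
    # p(s(x) | theta1) / p(s(x) | theta2), via the estimated score densities
    s = clf.predict_proba(np.atleast_2d(x).T)[:, 1]
    return kde1(s) / kde2(s)

print(calibrated_ratio(np.array([0.5])))   # again close to 1
```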