Archive for Sherlock Holmes

journal of the [second] plague year [cont'd]

Posted in Books, Kids, Mountains, pictures, Travel, University life on April 24, 2021 by xi'an

Read The Office of Gardens and Ponds (in French), by Didier Decoin [whose John l'Enfer I read more than forty years ago, with no lasting memories!], another random book found in the exchange section of our library! While a pastiche of Japanese travel novels, the book is quite enjoyable and reminded me of our hike on the Kumano Kodō routes, two years ago. The tale takes place in 12th century Japan and tells of the epic travel of a widow to the capital, Kyoto, carrying live carps for the gardens of the emperor. While some sections are somewhat encyclopedic on the culture of medieval Japan [and I thus wonder how Japanese readers have reacted to this pastiche], the scenario is rather subtle and the characters have depth, including the dead husband. The scene of the perfume competition is particularly well-imagined and worth reading on its own. I figure I will not bring the book back. (Warning: this book was voted a 2019 winner of the Bad Sex Award!)

Also read Patti Smith's Devotion, which was one of my Xmas presents. I had never read anything of Smith's but her songs, going back to 1976 (!) and Horses, and I narrowly missed some of her concerts, as on the week I was in Rimini… The book is quite light, and not only length-wise, made of two travel diaries in (to?) Paris and in (to?) Southern France, where she visits Camus' house, and of a short story she writes on the train. While the diaries are mildly interesting, if a bit American-tourist-in-Paris cliché (like this insistence on finding glamour in having breakfast at Café de Flore!), the story comes as a disappointment, both for being unrealistic [in the negative sense] and for reproducing the old trope of the young orphan girl becoming the mistress of a much older man [to continue skating]. The connection with Estonia reminded me of Purge, by Sofi Oksanen, a powerful novel about the occupations of Estonia by Nazi and Soviet troops, a haunting novel of a different magnitude…

Made soba noodles with the machine, resulting in shorter-than-life noodles, due to the high percentage of buckwheat flour in the dough, still quite enjoyable in a cold salad. Also cooked a rogan josh lamb shank, along with chapatis flavoured with radish leaves [no fire alarm this time] and a vegetable dahl whose recipe I had found in Le Monde the same morn. And took advantage of the few weeks when fresh and tender asparagus is sold at the local market to make salads.

Watched a few episodes of Better than Us (Лучше, чем люди), a Russian science-fiction series set in a near future where humanoid robots replace menial workers, until one rogue version turns uncontrollable, à la Blade Runner. There are appealing aspects to the story, besides the peep into a Russian series and the pleasure of listening to Russian, notably the porous frontier between human and artificial intelligence. The scenario however quickly turns into a predictable loop and I eventually lost interest. This happened even faster with The Irregulars, a Baker Street horror series that I simply could not stand any longer (and whose connection with Holmes and Watson is most tenuous).

Having registered for a vaccination at the local pharmacy, I was most surprisingly called a few days later, mid-afternoon, to come at once for a shot of AstraZeneca, as they had a leftover dose. And a rising share of reluctant candidates for the vaccine!, despite David's reassurances. I am unsure this shot came early enough to allow going abroad for conferences or vacations in July, but it is one thing done anyway. With no side effects so far.

Holmes alone

Posted in Books, Kids, pictures on November 29, 2020 by xi'an

From reading a rather positive review in The New York Times (if less so in The Guardian, which states that it all rattles along amiably enough!), and given my love of everything Holmes (if not as much as George Casella's!), I watched Enola Holmes almost as soon as it came out on Netflix. While the film was overall pleasant, with great acting from the lead actress, Millie Bobby Brown, I found it quite light, lacking in scenario (mystery? sleuthing? forking paths?) and in suspense. Which is not that surprising given that it is adapted from a young-adult book. (Making me laugh at the PG-13 label!) And rather anachronistic in depicting the free-spirited Enola roving Victorian London like a modern teenager, masquerading like her famous older brother, and mastering jiu-jitsu. I was also disappointed by the low-key appearance of Helena Bonham Carter as an (obviously) unconventional mother and a bomb-throwing suffragette… Nothing to compare with the superlative reworkings of Doyle's stories by Benedict Cumberbatch. As a side anecdote, I read a few days later that the Arthur Conan Doyle estate is suing the film-makers for presenting an emotional Sherlock!

simulating a sum of Uniforms

Posted in Statistics on May 1, 2020 by xi'an

When considering the distribution of the sum (or average) of N Uniform variates, called Irwin-Hall for the sum and Bates for the average, simulating the N Uniforms and then adding them up incurs a cost linear in N. The density of the resulting variate is well-known,

f_X(x;N)=\dfrac{1}{2(N-1)!}\sum_{k=0}^N (-1)^k{N \choose k} (x-k)^{N-1}\text{sign}(x-k)

but evaluating it is similarly of order N. Furthermore, controlling the terms in the alternating sum may prove delicate, as shown by the R function unifed::dirwin.hall(), whose core loop

# alternating sum over k, with huge binomial coefficients for large n
for (k in 0:floor(x))
  ret1 <- ret1 + (-1)^k * choose(n, k) * (x - k)^(n - 1)

quickly becomes unreliable (although I managed an easy fix by using logs and a reference value for the magnitude of the terms in the summation, as sketched at the end of this post). There is however a quick solution provided by [of course!] Devroye (NURVG, Section XIV.3, p.708), using the fact that the characteristic function of the Irwin-Hall distribution [for Uniforms over (-1,1)] is quite straightforward,

\Phi_N(t) = [\sin(t)/t]^N

which means the density can be bounded from above, resulting in an algorithm (NURVG, Section XIV.3, p.714) with complexity at most N to the power 5/8, even if this is not clearly spelled out in the book. Obviously, it can be objected that for N large enough, like N=20, the difference between the true distribution and the CLT approximation is quite negligible (reminding me of my early simulating days, when generating a Normal variate was done by averaging a dozen Uniforms and properly rescaling!). But this is then not an exact approach, and the exact correction proves too costly, as shown by Section XIV.4 on the simulation of sums in NURVG. So… the game is afoot!
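
For the record, here is a minimal sketch of the log-scale fix mentioned above (my own quick patch, under a hypothetical name, not code from the unifed package), valid for non-integer x in (0,N):

# evaluate the Irwin-Hall density by computing the magnitude of each
# term on the log scale and factoring out the largest one before summing
dirwin_hall_log <- function(x, n) {
  k <- 0:floor(x)
  lmag <- lchoose(n, k) + (n - 1) * log(x - k)  # log|term_k|
  m <- max(lmag)                                # reference magnitude
  s <- sum((-1)^k * exp(lmag - m))              # rescaled alternating sum
  exp(m + log(s) - lfactorial(n - 1))           # density of a sum of n U(0,1)
}

# quick Monte Carlo check, simulating the sum at linear cost in n
n <- 20; x <- 8.3
mean(abs(rowSums(matrix(runif(1e5 * n), ncol = n)) - x) < .05) / .1
dirwin_hall_log(x, n)

with the last two lines returning near-identical values (up to Monte Carlo noise).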

a computational approach to statistical learning [book review]

Posted in Books, R, Statistics, University life on April 15, 2020 by xi'an

This book was sent to me by CRC Press for review for CHANCE. I read it over a few mornings while [confined] at home and found it much more computational than statistical, in the sense that the authors go quite thoroughly into the construction of standard learning procedures, including home-made R code that obviously helps in understanding the nitty-gritty of these procedures, what they call try and tell, while the statistical meaning and uncertainty of these procedures remain barely touched upon by the book. This is not uncommon in the machine-learning literature, where prediction error on the testing data often appears to be the final goal, but it is not traditionally statistical. The authors introduce their work as a (computational?) supplement to The Elements of Statistical Learning, although I would find it hard to either squeeze both books into one semester or dedicate two semesters to the topic, especially at the undergraduate level.

Each chapter includes an extended analysis of a specific dataset, and this is an asset of the book, even if it sometimes over-reaches in selling the predictive power of the procedures. The extensive printed R scripts may prove tiresome in the long run, at least to me, but this may simply be a generational gap! And the learning models are mostly unidimensional, see e.g. the chapter on linear smoothers, with imho a profusion of methods. (Could someone please explain the point of Figure 4.9 to me?) The chapter on neural networks has a fairly intuitive introduction that should reach fresh readers, although meeting the handwritten digit data made me drift back to the late 1980's, when my wife was working on automatic character recognition. But I found the visualisation of the learning weights for character classification, hinting at their shape (p.254), most alluring!

Among the things I missed when reading through this book: a life-line on the meaning of a statistical model beyond prediction; attention to misspecification, uncertainty, and variability, especially when reaching outside the range of the learning data, and even more so when returning regression outputs with significance stars; discussions of the assessment tools, like the distance used in the objective function (for instance lacking in scale invariance when adding errors on the regression coefficients), or of the unprincipled multiplication of calibration parameters; some asymptotics; at least one remark on the information loss due to splitting the data into chunks; some (asymptotic) substance behind the use of “consistent”; and a mention of “data quality issues” before the single page 319 where they appear. While the methodology is defended by algebraic and calculus arguments, there is very little on the probability side, which explains why the authors consider that the students need “be familiar with the concepts of expectation, bias and variance”. And only that. The few paragraphs on the Bayesian approach do more harm than good, especially with so little background in probability and statistics.

The book possibly contains the most unusual introduction to the linear model I can remember reading: coefficients as derivatives… followed by a very detailed coverage of matrix inversion and singular value decomposition. (This would not sound like the #1 priority were I to give such a course.)

The inevitable typo “the the” was found on page 37! A less common typo was Jensen's inequality spelled as “Jenson's inequality”, both in the text (p.157) and in the index, followed by a repetition of the same formula in (6.8) and (6.9). A “stwart” (p.179) made me search a while for this unknown verb. Another typo occurs in the Nadaraya-Watson kernel regression, where the bandwidth h suddenly turns into n (and I had to check twice because of my poor eyesight!). An unusual use of partition, where the sets in the partition are themselves called partitions. Similarly, a fluctuating use of dots for products in dimension one, including a form of ⊗ for matrix product in equation (8.25), followed on the next page by the ⊙ notation for the Hadamard product. I also suspect the matrix K in (8.68) is missing 1's, or I am missing the point, since K is the number of kernels on the next page, just after a picture of the Eiffel Tower… A surprising number of references for an undergraduate textbook, with authors sometimes cited by full name and sometimes by last name only. And technical reports that do not belong at this level of book. Let me add the pedantic remark that Conan Doyle wrote more novels that do not include his character Sherlock Holmes than novels that do.

[Disclaimer about potential self-plagiarism: this post or an edited version will eventually appear in my Books Review section in CHANCE.]

comments on Watson and Holmes

Posted in Books, pictures, Statistics, Travel on April 1, 2016 by xi'an

“The world is full of obvious things which nobody by any chance ever observes.” The Hound of the Baskervilles

In connection with the upcoming publication of James Watson's and Chris Holmes' Approximating models and robust decisions in Statistical Science, Judith Rousseau and I wrote a discussion of the paper that was arXived yesterday.

“Overall, we consider that the calibration of the Kullback-Leibler divergence remains an open problem.” (p.18)

While the paper connects with earlier ones by Chris and coauthors, and possibly despite the overall critical tone of our comments!, I really appreciate the renewed interest in robustness advocated in this paper. I was going to write Bayesian robustness, but to mark the difference from the perspective adopted in the 90's, when robustness was mostly about the prior, I would rather say this is a Bayesian approach to model robustness from a decisional perspective. With definite innovations, like considering the impact of posterior uncertainty over the decision space, uncertainty being defined e.g. in terms of Kullback-Leibler neighbourhoods, or via a Dirichlet process distribution on the posterior. This may step outside the standard Bayesian approach, but it remains of definite interest! (And note that this discussion of ours [reluctantly!] refrained from capitalising on the names of the authors to build easy puns linked with the most Bayesian of all detectives!)
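
For readers who have not yet seen the paper, the central object can be condensed as follows [in my own notation, hence only an approximation of the authors' formulation]: given a loss ℓ and a reference posterior π(·|x), the robust decision minimises the worst-case expected loss over a Kullback-Leibler neighbourhood of that posterior,

\hat{a}_\epsilon = \arg\min_a\ \sup_{\nu\,:\,\text{KL}(\nu\,\|\,\pi(\cdot\mid x))\,\le\,\epsilon} \mathbb{E}_\nu[\ell(a,\theta)]

with the calibration of the radius ε, that is, of the Kullback-Leibler divergence itself, being the open problem pointed out in the above quote.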
