I am off to New York City for two days, giving a seminar at Columbia tomorrow and visiting Andrew Gelman there. My talk will be about testing as mixture estimation, with slides similar to the Nice ones below, though slightly upgraded and augmented during the flight to JFK. Looking at the list of past seminar speakers, I noticed there were three speakers from Paris in the last fortnight, with Ismael Castillo and Paul Doukhan (in the Applied Probability seminar) preceding me. Is there a significant bias there?!
Archive for New York City
“I’m not sure how we found Domaine de Mortiès, an organic winery at the foothills of Pic St. Loup, but it was the kind of unplanned, delightful discovery our previous trips to Montpellier never allowed.”
Last year, I had the opportunity to visit and sample (!) from Domaine de Mortiès, an organic Pic Saint-Loup vineyard and winemaker. I have not yet opened the bottle of Jamais Content I bought then. Today I spotted in The New York Times a travel article, “A Visit to the In-Laws in Montpellier”, that takes the author to Domaine de Mortiès, Pic Saint-Loup, Saint-Guilhem-du-Désert and other nice places, away from the overcrowded centre of town and the rather bland beach-town of Carnon, where she usually stays when visiting. And where we almost finished our Bayesian Essentials with R! To quote from the article, “Montpellier, France’s eighth-largest city, is blessed with a Mediterranean sun and a beautiful, walkable historic centre, a tourist destination in its own right, but because it is my husband’s home city, a trip there never felt like a vacation to me.” And when the author mentions the owner of Domaine de Mortiès, she states that “Mme. Moustiés looked about as enthused as a teenager working the checkout at Rite Aid”, which is not how I remember her from last year. Anyway, it is fun to see that visitors from New York City can unexpectedly come upon this excellent vineyard!
I very recently received this book, Guesstimation 2.0, written by Lawrence Weinstein and published by Princeton University Press, for review in CHANCE, and decided to check the first (2008) volume, Guesstimation, co-written by Lawrence Weinstein and John A. Adam. (Discovering in the process that they both had a daughter named Rachel, like my daughter!)
The title may be deemed very misleading by (unsuspecting) statisticians as, on the one hand, the book does not deal at all with estimation in our sense but with approximation of an unknown quantity to the right order of magnitude. It is thus closer to Innumeracy than to Statistics for Dummies, in that it tries to induce people to take the extra step of evaluating, even roughly, numerical amounts (rather than shying away from them or, worse, blindly trusting the experts!). For instance, how much area could we cover with the pizza boxes Americans use every year? About the area of New York City. (On the other hand, because Guesstimation forces readers to quantify their guesses about a given quantity, it has a flavour of prior elicitation, and this guesstimation could thus well pass for prior estimation!)
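The pizza-box question can be redone in a few lines in the book’s spirit. To be clear, the input figures below are my own rough assumptions for illustration, not numbers taken from Guesstimation; only the order of magnitude matters.

```python
# Back-of-the-envelope "guesstimate": area of all U.S. pizza boxes in a year.
# All inputs are rough assumptions chosen for their order of magnitude,
# not figures from the book.

pizzas_per_year = 3e9          # assumed: a few billion pizzas sold annually
box_side_m = 0.45              # assumed: a 45 cm square box
box_area_m2 = box_side_m ** 2  # about 0.2 m^2 per box

total_area_km2 = pizzas_per_year * box_area_m2 / 1e6  # m^2 -> km^2
nyc_area_km2 = 780             # land area of New York City, roughly

print(f"boxes cover ~{total_area_km2:.0f} km^2, NYC is ~{nyc_area_km2} km^2")
```

With these guesses the boxes cover a few hundred square kilometers, the same order of magnitude as the city, which is all a guesstimate claims.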
In about 80 questions, Lawrence Weinstein [with John A. Adam in Guesstimation] explains how to roughly “estimate”, i.e. guess, quantities that seem beyond a layman’s reach. Not all questions are interesting; in fact, I would argue they are mostly uninteresting per se (e.g., what is the surface of toilet paper used in the U.S.A. over one year? how much could a 1km meteorite impacting the Earth change the length of the day? how many cosmic rays would have passed through a 30-million-year-old bacterium?). They are also very much centred on U.S. idiosyncrasies (i.e., money, food, cars, and cataclysms), and some clearly require more background in physics or mechanics than you could expect from the layman (e.g., the energy of the Sun or of a photon, P=mgh/t, L=mvr (angular momentum), neutrino energy depletion, microwave wavelength, etc. At least the book does not shy away from formulas!). So Guesstimation and Guesstimation 2.0 do not make for a good bedtime read or even for a pleasant linear read. Except between two metro stations. Or when flying to Des Moines next to a drunk woman… However, they provide a large source of diverse examples, useful when you teach your kids about sizes and magnitudes (it took me years to convince Rachel that 1 cubic meter is the same as 1000 liters! She now keeps a post-it over her desk with this equation!), your students about quick-and-dirty computing, or anyone about their ability to look critically at figures provided in the news, by the local journal, or by the global politician. Or when you suddenly wonder about the energy produced by a Sun made of… gerbils! (This is Problem 8.5 in Guesstimation and the answer is as mind-boggling as the question!)
I bought this book in the Princeton bookstore mostly because it was such a beautiful object! I had never heard of Nathan Larson or of the Dewey Decimal System when I grabbed the book and felt the compulsion to buy it!
The book published by Akashic Books is indeed a beautiful book: the paper is high quality, a warm crème colour, the cover has inside flaps, the printing makes reading very enjoyable, the pages are cut in such a way that looking at the book from the fore edge makes it look like a Manhattan skyline… Truly a beautiful thing!!!
Once I had opened the book, I also got trapped by the story, an unusual style along with a great post-apocalyptic plot (not The Road, of course!, but what can compare with The Road?!) and a love of New York City that permeates the pages for sure! A magisterial début for a new author. While the action takes place in an unpleasant future New York City, with disease and ruin on every street corner, slowly recovering from a mega 9/11-style attack, the central character relates very much to Chandler‘s private detectives, but also, as mentioned in another review, to Jerome Charyn’s Isaac Sidel! The main character, only known as Dewey Decimal for his maniacal idée fixe of ordering the books in the New York Library where he lives, borders on the insane and his moral code is rather heavily warped, witness several rather gratuitous murders in the book, but the whole city seems to have fallen very low in terms of this same moral code… As well as being under the rule of Eastern European thugs (to the point of the hero speaking Russian and Ukrainian). The blonde fatale found in every roman noir is slightly caricatural (“plastic surgery in any amount just makes me want to puke. Call me judgmental, but it indicates a certain set of accompanying goals, fashion choices and behaviors. It’s trashy and it means you don’t like yourself.“), with whiffs of ethnic-cleansing activities in Serbia, and she remains a mystery till the end of the novel. As do most other characters, in fact. This may be the weak point of the book, that everything is perceived through Dewey’s eyes to the point of making others one-dimensional and hard to fathom… But the overall scheme of following this partly insane detective throughout New York City makes The Dewey Decimal System quite an unconventional pleasure to read and I am looking forward to the next story in the series.
At last, we have completed, arXived, and submitted our paper on the evaluation of summary statistics for Bayesian model choice! (I had presented preliminary versions at the recent workshops in New York and Zürich.) While broader in scope, the results obtained by Judith Rousseau, Jean-Michel Marin, Natesh Pillai, and myself bring an answer to the question raised by our PNAS paper on ABC model choice. Almost as soon as we realised the problem, that is, during MCMC’Ski in Utah, I talked with Judith about a possible classification of statistics in terms of their Bayes factor performances and we started working on that… While the idea of separating the mean behaviour of the statistics under both models came rather early, establishing a complete theoretical framework that validated this intuition took quite a while, and the assumptions changed a few times around the summer. The simulations associated with the paper were straightforward in that (a) the setup had been suggested to us by a referee of our PNAS paper: compare normal and Laplace distributions with different summary statistics (including the median absolute deviation), (b) the theoretical results told us what to look for, and (c) they did very clearly exhibit the consistency and inconsistency of the Bayes factor/posterior probability predicted by the theory. Both boxplots shown here exhibit this agreement: when using (empirical) mean, median, and variance to compare normal and Laplace models, the posterior probabilities do not select the “true” model but instead aggregate near a fixed value. When using instead the median absolute deviation as summary statistic, the posterior probabilities concentrate near one or zero depending on whether or not the normal model is the true model.
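The intuition behind the median absolute deviation succeeding where the variance fails can be checked in a few lines. This is only a minimal sketch of mine, not the ABC experiment from the paper: with variances matched, the two models still imply different “true” MAD values, so this summary statistic has distinct asymptotic means under the two models, which is exactly what the consistency result requires.

```python
# Minimal illustration (not the paper's ABC experiment): match the variances
# of a normal and a Laplace model, then compare the median absolute
# deviation (MAD) under each. The variance no longer discriminates,
# but the MAD still does, since its large-sample value differs:
# about 0.6745 for N(0,1) versus (1/sqrt(2))*ln(2) ~ 0.490 for the
# variance-one Laplace.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x_norm = rng.normal(0.0, 1.0, n)                  # N(0,1), variance 1
x_lap = rng.laplace(0.0, 1.0 / np.sqrt(2.0), n)   # Laplace, variance 1

def mad(x):
    """Median absolute deviation about the median."""
    return np.median(np.abs(x - np.median(x)))

print("variances:", x_norm.var(), x_lap.var())    # both close to 1
print("MADs:     ", mad(x_norm), mad(x_lap))      # clearly separated
```

The gap between the two empirical MADs does not vanish as n grows, which is the feature that lets the Bayes factor concentrate on the true model.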
The main result states that, under some “heavy-duty” assumptions: (a) if the “true” mean of the summary statistic can be recovered under both models being compared, then the Bayes factor has the same asymptotic behaviour as n to the power -(d1-d2)/2, irrespective of which model is the true one (the dimensions d1 and d2 being the effective dimensions of the asymptotic means of the summary statistic under the two models). Therefore, the Bayes factor always asymptotically selects the model with the smallest effective dimension and cannot be consistent. (b) If, instead, the “true” mean of the summary statistic cannot be represented within the other model, then the Bayes factor is consistent. This means that, somehow, the best statistics to use in an ABC approximation to a Bayes factor are ancillary statistics with different mean values under the two models. Otherwise, the summary statistic must have enough components to prevent a parameter under the “wrong” model from matching the “true” mean of the summary statistic.
(As a striking coincidence, Hélène Massam and Gérard Letac [re]posted today on arXiv a paper about the behaviour of the Bayes factor for contingency tables when the hyperparameter goes to zero, where they establish the consistency of the said Bayes factor under the sparser model. No Jeffreys-Lindley paradox in that case.)
It seems that every Sunday I run in Central Park, I am doomed to hit a race! This time it was not the NYC half-marathon (and I did not see Paula Radcliffe, as she was in Berlin) but an 18-mile race in preparation for the NYC marathon. I had completed my fartlek training of 6×4mn and was recovering from an anaerobic last round when I saw some runners coming, so I went with them as a recuperation jog for a mile or so. They had done the first 4 miles in 27’28”, which corresponds to a pace of 4’16” per kilometer, so I must have missed the top runners. Actually, I think the first runners were at least 4 minutes faster, as they were coming when I left for the last 4mn. (But it was good for recovery!) Checking the webpage of the race, the winner finished in 1:37’45”, which gives a marathon time of 2:21’40”, unless I am confused.
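The two conversions above are easy to redo; this is just a sanity check of the arithmetic, with the marathon figure being a crude linear extrapolation that ignores any slowdown over the longer distance.

```python
# Checking the pace arithmetic: 4 miles in 27'28" as a per-km pace, and the
# winner's 18-mile time extrapolated linearly to marathon distance.
MILE_KM = 1.609344
MARATHON_KM = 42.195

def to_seconds(h=0, m=0, s=0):
    return 3600 * h + 60 * m + s

def fmt(sec):
    sec = round(sec)
    return f"{sec // 3600}:{sec % 3600 // 60:02d}'{sec % 60:02d}\""

# 4 miles in 27'28" -> seconds per kilometer
pace_s_per_km = to_seconds(m=27, s=28) / (4 * MILE_KM)
print(fmt(pace_s_per_km))   # about 4'16" per km, as stated

# winner: 18 miles in 1:37'45", assumed constant pace over 42.195 km
marathon_s = to_seconds(1, 37, 45) * MARATHON_KM / (18 * MILE_KM)
print(fmt(marathon_s))      # about 2:22'23" under this crude extrapolation
```

So the 4’16”-per-km pace checks out exactly, and the linear projection lands within a minute of the figure quoted in the post.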