Archive for New York city

guesstimation (1+2)

Posted in Books, Statistics on November 9, 2012 by xi'an

I very recently received this book, Guesstimation 2.0, written by Lawrence Weinstein and published by Princeton University Press, for review in CHANCE, and decided to check the first (2008) volume, Guesstimation, co-written by Lawrence Weinstein and John A. Adam. (Discovering in the process that they both had a daughter named Rachel, like my daughter!)

The title may be deemed very misleading by (unsuspecting) statisticians as, on the one hand, the book does not deal at all with estimation in our sense but with approximating an unknown quantity to the right order of magnitude. It is thus closer to Innumeracy than to Statistics for Dummies, in that it tries to induce people to take the extra step of evaluating, even roughly, numerical amounts (rather than shying away from them or, worse, blindly trusting the experts!). For instance, how much area could we cover with the pizza boxes Americans use every year? About the area of New York City. (On the other hand, because Guesstimation forces readers to quantify their guesses about a given quantity, it has a flavour of prior elicitation, and this guesstimation could thus well pass for prior estimation!)
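
To give the flavour of the approach, here is a minimal sketch of the pizza-box computation in Python; every input below is my own rough guess in the Fermi spirit of the book, not a figure taken from it:

```python
# A Fermi-estimate sketch of the pizza-box question: all inputs are
# order-of-magnitude assumptions of mine, not sourced figures.

us_population   = 3e8    # assumed: ~300 million Americans (circa 2008)
pizzas_per_year = 10     # assumed: ~10 boxed pizzas per person per year
box_side_m      = 0.4    # assumed: a typical box is ~40 cm on a side

box_area_m2    = box_side_m ** 2                         # ~0.16 m^2 per box
total_area_m2  = us_population * pizzas_per_year * box_area_m2
total_area_km2 = total_area_m2 / 1e6

print(f"total box area: ~{total_area_km2:.0f} km^2")     # ~480 km^2
print("NYC land area : ~780 km^2 -> same order of magnitude")
```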

In about 80 questions, Lawrence Weinstein [with John A. Adam in Guesstimation] explains how to roughly “estimate”, i.e. guess, quantities that seem beyond a layman’s reach. Not all questions are interesting; in fact I would argue they are mostly uninteresting per se (e.g., what is the surface of toilet paper used in the U.S.A. over one year? how much could a 1km meteorite impacting the Earth change the length of the day? how many cosmic rays would have passed through a 30-million-year-old bacterium?), as well as very much centred on U.S. idiosyncrasies (i.e., money, food, cars, and cataclysms), and some clearly require more background in physics or mechanics than you could expect from the layman (e.g., the energy of the Sun or of a photon, P=mgh/t, L=mvr (angular momentum), neutrino energy depletion, microwave wavelength, etc. At least the book does not shy away from formulas!). So Guesstimation and Guesstimation 2.0 do not make for a good bedtime read or even for a pleasant linear read. Except between two metro stations. Or when flying to Des Moines next to a drunk woman… However, they provide a large source of diverse examples, useful when you teach your kids about sizes and magnitudes (it took me years to convince Rachel that 1 cubic meter was the same as 1000 liters! she now keeps a post-it with this equation over her desk), your students about quick-and-dirty computing, or anyone about their ability to look critically at figures provided by the news, the local journal, or the global politician. Or when you suddenly wonder about the energy produced by a Sun made of… gerbils! (This is Problem 8.5 in Guesstimation and the answer is as mind-boggling as the question!)

the Dewey decimal system

Posted in Books, Travel on May 13, 2012 by xi'an

I bought this book at the Princeton bookstore mostly because it was such a beautiful object! I had never heard of Nathan Larson nor of the Dewey Decimal System when I grabbed the book and felt the compulsion to buy it!

The book, published by Akashic Books, is indeed a beautiful object: the paper is high quality, a warm crème colour; the cover has inside flaps; the printing makes reading very enjoyable; and the pages are cut in such a way that, looked at from the fore edge, the book resembles a Manhattan skyline… Truly a beautiful thing!!!

Once I had opened the book, I also got trapped by the story, an unusual style along with a great post-apocalyptic plot (not The Road, of course!, but what can compare with The Road?!) and a love of New York City that permeates the pages for sure! A masterful début for a new author. While the action takes place in an unpleasant future New York City, with disease and ruin on every street corner, slowly recovering from a mega 9/11-style attack, the central character relates very much to Chandler‘s private detectives, but also, as mentioned in another review, to Jerome Charyn’s Isaac Sidel! The main character, known only as Dewey Decimal for his maniacal idée fixe of ordering the books in the New York Library where he lives, borders on the insane and his moral code is rather heavily warped, witness several rather gratuitous murders in the book, but the whole city seems to have fallen very low in terms of this same moral code… As well as being under the rule of Eastern European thugs (to the point of the hero speaking Russian and Ukrainian). The blonde fatale found in every roman noir is slightly caricatural (“plastic surgery in any amount just makes me want to puke. Call me judgmental, but it indicates a certain set of accompanying goals, fashion choices and behaviors. It’s trashy and it means you don’t like yourself.“), with whiffs of ethnic-cleansing activities in Serbia, and she remains a mystery till the end of the novel. As do most other characters, in fact. This may be the low point of the book: everything is perceived through Dewey’s eyes, to the point of making the others one-dimensional and hard to fathom… But the overall scheme of following this partly insane detective throughout New York City makes The Dewey Decimal System quite an unconventional pleasure to read and I am looking forward to the next story in the series.

Trinity

Posted in Wines on November 20, 2011 by xi'an

Selecting statistics for [ABC] Bayesian model choice

Posted in Statistics, University life on October 25, 2011 by xi'an

At last, we have completed, arXived, and submitted our paper on the evaluation of summary statistics for Bayesian model choice! (I had presented preliminary versions at the recent workshops in New York and Zürich.) While broader in scope, the results obtained by Judith Rousseau, Jean-Michel Marin, Natesh Pillai, and myself bring an answer to the question raised by our PNAS paper on ABC model choice. Almost as soon as we realised the problem, that is, during MCMC’Ski in Utah, I talked with Judith about a possible classification of statistics in terms of their Bayes factor performances and we started working on that… While the idea of separating the mean behaviour of the statistics under both models came rather early, establishing a complete theoretical framework that validated this intuition took quite a while, and the assumptions changed a few times around the summer. The simulations associated with the paper were straightforward in that (a) the setup had been suggested to us by a referee of our PNAS paper: compare normal and Laplace distributions with different summary statistics (incl. the median absolute deviation), (b) the theoretical results told us what to look for, and (c) they very clearly exhibited the consistency and inconsistency of the Bayes factor/posterior probability predicted by the theory. Both boxplots shown here exhibit this agreement: when using the (empirical) mean, median, and variance to compare normal and Laplace models, the posterior probabilities do not select the “true” model but instead aggregate near a fixed value. When using instead the median absolute deviation as summary statistic, the posterior probabilities concentrate near one or zero depending on whether or not the normal model is the true model.
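
For readers who want to see the phenomenon without opening the paper, here is a minimal numpy sketch of a normal-vs-Laplace ABC model choice experiment in the spirit described above; the sample size, the N(0,4) prior on the location, the simulation budget, and the nearest-draws acceptance rule are all placeholder choices of mine, not the settings of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sims, keep = 100, 20_000, 500   # assumed: sample size, ABC budget, accepted draws

def simulate(model, theta, n):
    """Model 1: N(theta, 1); Model 2: Laplace(theta, 1/sqrt(2)), same variance."""
    if model == 1:
        return rng.normal(theta, 1.0, n)
    return rng.laplace(theta, 1.0 / np.sqrt(2.0), n)

def mad(x):
    """Median absolute deviation, the discriminating statistic of the paper."""
    return np.median(np.abs(x - np.median(x)))

def abc_model_prob(y_obs, stat):
    """ABC posterior probability of model 1, keeping the `keep` closest draws."""
    s_obs = stat(y_obs)
    models = rng.integers(1, 3, n_sims)        # uniform prior on {1, 2}
    thetas = rng.normal(0.0, 2.0, n_sims)      # assumed prior: theta ~ N(0, 4)
    dists = np.array([abs(stat(simulate(m, t, n)) - s_obs)
                      for m, t in zip(models, thetas)])
    accepted = models[np.argsort(dists)[:keep]]
    return np.mean(accepted == 1)

y_obs = simulate(1, 0.0, n)                    # data truly from the normal model
print("P(normal | mean) ~", abc_model_prob(y_obs, np.mean))  # stalls at a fixed value
print("P(normal | MAD ) ~", abc_model_prob(y_obs, mad))      # drifts towards one
```

With the mean as summary, both models can reproduce the observed statistic equally well, so the ABC posterior probability stalls at an intermediate value; with the MAD, whose asymptotic mean differs under the two models, it moves towards one when the data are truly normal.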

The main result states that, under some “heavy-duty” assumptions, (a) if the “true” mean of the summary statistic can be recovered for both models under comparison, then the Bayes factor has the same asymptotic behaviour as n to the power -(d1 - d2)/2, irrespective of which model is the true one. (The dimensions d1 and d2 are the effective dimensions of the asymptotic means of the summary statistic under both models.) Therefore, the Bayes factor always asymptotically selects the model with the smallest effective dimension and cannot be consistent. (b) If, instead, the “true” mean of the summary statistic cannot be represented in the other model, then the Bayes factor is consistent. This means that, somehow, the best statistics to use in an ABC approximation to a Bayes factor are ancillary statistics with different mean values under the two models. Otherwise, the summary statistic must have enough components to prevent a parameter under the “wrong” model from matching the “true” mean of the summary statistic.
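
In symbols (my own shorthand, not necessarily the paper’s notation), writing B12(n) for the Bayes factor based on the summary statistic and d1, d2 for the effective dimensions above, case (a) reads:

```latex
% case (a): the asymptotic mean of the summary statistic is attainable
% under both models; the Bayes factor then only reflects the dimensions
B_{12}(n) \asymp n^{-(d_1 - d_2)/2} \qquad (n \to \infty),
% so it asymptotically favours the model with the smaller effective
% dimension, whichever model is true, hence cannot be consistent.
```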

(As a striking coincidence, Hélène Massam and Gérard Letac [re]posted today on arXiv a paper about the behaviour of the Bayes factor for contingency tables when the hyperparameter goes to zero, where they establish the consistency of the said Bayes factor under the sparser model. No Jeffreys-Lindley paradox in that case.)

another road-race in Central Park

Posted in Running, Travel on September 30, 2011 by xi'an

It seems that every Sunday I run in Central Park, I am doomed to hit a race! This time it was not the NYC half-marathon (and I did not see Paula Radcliffe as she was in Berlin) but an 18-mile race in preparation for the NYC marathon. I had completed my fartlek training of 6x4mn and was recovering from an anaerobic last round when I saw some runners coming, so went with them as a recuperation jog for a mile or so. They had done the first 4 miles in 27’28”, which corresponds to a 4’16” pace per kilometer, so I must have missed the top runners. Actually, I think the first runners were at least 4 minutes faster, as they were coming when I left for the last 4mn. (But it was good for recovery!) Checking the webpage of the race, the winner finished in 1:37’45”, which extrapolates at constant pace to a marathon time of about 2:22’20” (see the quick check below).
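
As a quick sanity check on both computations, here are a few lines of Python (the splits and distances are those quoted above; everything assumes constant pace):

```python
MILE_KM, MARATHON_KM = 1.609344, 42.195

# first 4 miles in 27'28" -> pace per km
split_s = 27 * 60 + 28
pace_s_per_km = split_s / (4 * MILE_KM)            # ~256 s, i.e. 4'16" per km

# winner's 18 miles in 1:37'45" -> constant-pace marathon extrapolation
win_s = 1 * 3600 + 37 * 60 + 45
marathon_s = win_s * MARATHON_KM / (18 * MILE_KM)  # ~8543 s, i.e. ~2:22'23"

fmt = lambda s: f"{int(s // 3600)}:{int(s % 3600 // 60):02d}'{int(s % 60):02d}\""
print(f"pace: {int(pace_s_per_km // 60)}'{int(pace_s_per_km % 60):02d}\" per km")
print("marathon extrapolation:", fmt(marathon_s))
```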

workshop in Columbia [talk]

Posted in Statistics, Travel, University life on September 25, 2011 by xi'an

Here are the slides of my talk yesterday at the Computational Methods in Applied Sciences workshop at Columbia.

The last section of the talk covers our new results with Jean-Michel Marin, Natesh Pillai and Judith Rousseau on the necessary and sufficient conditions for a summary statistic to be used in ABC model choice. (The paper is about to be completed.) This obviously comes as the continuation of our reflexions on ABC model choice started last January. The major message of the paper is that the statistics used for running model choice cannot have a mean value common to both models, which strongly implies using ancillary statistics with different means under each model. (I am afraid that, thanks to the mixture of jetlag fatigue, slide inflation [95 slides vs. a 40mn slot], and asymptotics technicalities in the last part, the talk was far from comprehensible. I started on the wrong foot by not getting XL’s [Xiao-Li's] comment on the measure-theoretic problem with the limit in ε going to zero. A pity, given the great debate we had in Banff with Jean-Michel, David Balding, and Mark Beaumont, years ago. And our more recent paper about the arbitrariness of the density value in the Savage-Dickey paradox. I then compounded the confusion by stating that the empirical mean was sufficient in the Laplace case… which is not even an exponential family. I hope I will be more articulate next week in Zürich, where at least I will not speak past my bedtime!)

workshop in Columbia

Posted in Statistics, Travel, University life on September 24, 2011 by xi'an

The workshop at Columbia University on Computational Methods in Applied Sciences is quite diverse in its topics, reminding me of the conference on Efficient Monte Carlo at Sandbjerg Estate, Sønderborg, in 2008, celebrating the 70th birthday of Reuven Rubinstein, incl. some colleagues I had not met since that meeting. Yesterday I thus heard (quite interesting) talks on domains somehow far from my own, from Robert Adler on cohomology (giving a second look at the thing after the talk I heard at Wharton last year) to José Blanchet on simulation for infinite-server queues (with a link to perfect sampling I could not exactly trace but that was certainly there). Several of the talks made me think of our Brownian motion confidence band paper with Wilfrid Kendall and Jean-Michel Marin, esp. Gennady Samorodnitsky’s on the maximum of stochastic processes (and wonder whether we could have gone further in that direction). Pierre Del Moral presented a broad overview of Feynman-Kac approaches to particle methods, in particular particle MCMC, with applications to some financial objects. Paul Glasserman talked about robust MCMC, which I found quite an appealing concept in that it includes uncertainties about the model itself, and links with minimax concepts. And Paul Dupuis exposed a parallel tempering method linked with large deviations, whose paper I am definitely looking forward to. Now it is more than time to work on my own talk! (On a very personal basis, I sadly lost my sturdy Canon camera in the taxi from the airport! Will need a new one for the ‘Og!)
