Archive for RER B

me no savi [travel madness]

Posted in Statistics on June 1, 2022 by xi'an

Today, I left home in the wee hours, after watering my tomatoes!, quite excited to join the Safe, Anytime-Valid Inference (SAVI) workshop in Eindhoven, which was taking place after two years of postponement. I alas did not check the state of the train traffic beforehand and, when I reached the train station, I found that part of the line to De Gaulle airport was closed, due to some control cables having been stolen during the night. Things quickly deteriorated as the train management in Gare du Nord was pretty inefficient, meaning that the trains would stop for five minutes at each station and that there was no rail alternative to reach Roissy. The taxi stand was a complete mess, with no queue whatsoever, and the Parisian taxis kept true to their reputation by refusing to take people to the airport, asking for outrageous prices (60 euros per passenger), and stopping anywhere. I almost managed to get one, but the driver refused to take me in addition to the Swedish family I had directed to this stand from the RER train, and this was simply my last opportunity. Über taxis were nowhere to be seen and I soon realised I could not catch my flight. Later flights were outrageously expensive and there was no train seat whatsoever until the day after, so I gave up and returned home from this trip to nowhere…

approximate Bayesian inference under informative sampling

Posted in Books, Statistics, Travel, University life on March 30, 2018 by xi'an

In the first issue of this year's Biometrika, I spotted a paper with the above title, written by Wang, Kim, and Yang, and thought it was a particular case of ABC. However, when I read it on a rare metro ride to Dauphine, thanks to my hurting knee!, I got increasingly disappointed as the contents had nothing to do with ABC. The purpose of the paper is to derive a consistent and convergent posterior distribution based on an estimator of the parameter θ that is… consistent and convergent under informative sampling, using for instance a Normal approximation to the sampling distribution of this estimator, or to the sampling distribution of the pseudo-score function S(θ) [whose pseudo-normality reminded me of Ron Gallant's approximations and of my comments on them]. The paper then considers a generalisation to the case of estimating equations, U(θ), which may again enjoy a Normal asymptotic distribution, involving an object that does not make direct Bayesian sense, namely the posterior of the parameter θ given U(θ)… (The algorithm proposed to generate from this posterior (8) is also a mystery.) Since the approach requires consistent estimators to start with and aims at reproducing frequentist coverage properties, I am thus at a loss as to why this pseudo-Bayesian framework is adopted.
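For concreteness, here is my own minimal sketch of the generic construction as I read it, not the authors' algorithm (8): a pseudo-posterior for a scalar θ obtained by treating a design-consistent estimator θ̂, with estimated variance v̂, as approximately Normal around θ and combining this approximation with a prior. The numbers and names below are mine, purely for illustration.

import numpy as np

# sketch of a pseudo-posterior built from a Normal approximation
# theta_hat | theta ~ N(theta, v_hat) to a consistent survey estimator
rng = np.random.default_rng(0)

theta_hat, v_hat = 1.3, 0.04        # hypothetical survey estimates

def log_prior(theta):               # flat prior on the real line, for illustration
    return 0.0

def log_pseudo_lik(theta):          # Normal approximation to the estimator's distribution
    return -0.5 * (theta_hat - theta) ** 2 / v_hat

# random-walk Metropolis on the resulting pseudo-posterior
draws, theta = [], theta_hat
for _ in range(10_000):
    prop = theta + 0.3 * rng.standard_normal()
    log_ratio = (log_prior(prop) + log_pseudo_lik(prop)
                 - log_prior(theta) - log_pseudo_lik(theta))
    if np.log(rng.uniform()) < log_ratio:
        theta = prop
    draws.append(theta)

print(np.mean(draws), np.std(draws))  # close to theta_hat and sqrt(v_hat)

Under the flat prior the pseudo-posterior is simply N(θ̂, v̂), which is precisely why the construction strikes me as frequentist coverage in Bayesian clothing.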

simulation in Gare du Nord [jatp]

Posted in Statistics on January 30, 2018 by xi'an

O’Bayes in action

Posted in Books, Kids, Statistics, University life on November 7, 2017 by xi'an

My next-door colleague [at Dauphine] François Simenhaus shared a paradox [to be developed in an upcoming test!] with Julien Stoehr and me last week, namely that, when selecting the largest number between a [observed] and b [unobserved], drawing a random boundary on a [meaning that a is chosen iff a is larger than this boundary] increases the probability of picking the largest number above ½…

When thinking about it in the wretched RER train [a train that got immobilised for at least two hours just a few minutes after I went through!, good luck to the passengers travelling to the airport…] to De Gaulle airport, I lost the argument: if a<b, the probability [for this random bound, with cdf Φ] of being larger than a, and hence of selecting b, is 1-Φ(a), while, if a>b, the probability [of winning] is Φ(a). Hence the only case when the probability is ½ is when a is the median of this random variable. But, when discussing the issue further with Julien, I came up with an interesting non-informative prior characterisation. Namely, if I assume a,b to be iid U(0,M) and set an improper prior 1/M on M, the conditional probability that b>a given a is ½. Furthermore, the posterior probability of picking the right [largest] number with François's randomised rule is also ½, no matter what the distribution of the random boundary is. Now, the most surprising feature of this coffee-room derivation is that these properties only hold for the prior 1/M. Any other power of M will induce an asymmetry between a and b. (The same properties hold when a,b are iid Exp(M).) Of course, this is not absolutely unexpected since 1/M is the invariant prior and since the "intuitive" symmetry only holds under this prior. Power to O'Bayes!
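To spell out the coffee-room computation [my own quick check, written out after the fact]: integrating M out of the joint density under the prior π(M) ∝ 1/M gives

p(a,b) \;=\; \int_{\max(a,b)}^{\infty} \frac{1}{M^{2}}\,\frac{\mathrm{d}M}{M} \;=\; \frac{1}{2\,\max(a,b)^{2}},

so that

\mathbb{P}(b>a\mid a) \;=\; \frac{\int_{a}^{\infty} (2b^{2})^{-1}\,\mathrm{d}b}{\int_{0}^{a} (2a^{2})^{-1}\,\mathrm{d}b \;+\; \int_{a}^{\infty} (2b^{2})^{-1}\,\mathrm{d}b} \;=\; \frac{(2a)^{-1}}{(2a)^{-1}+(2a)^{-1}} \;=\; \frac{1}{2}.

Replacing 1/M with a power prior M^{-k} leads instead to \mathbb{P}(b>a\mid a)=1/(1+k), which equals ½ only for k=1, hence the asymmetry mentioned above.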

When discussing the matter again with François yesterday, I realised I had changed the wording of his puzzle. The original setting is one with two cards hiding the unknown numbers a and b, and a player picking one of the cards. If the player picks a card at random, there is indeed a probability of ½ of picking the largest number. If the decision to switch or not depends on an independent random draw being larger or smaller than the number on the observed card, the probability of getting max(a,b) in the end hits 1 when this random draw falls between a and b, and remains ½ when it falls outside this range. Randomisation pays.
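A quick Monte Carlo check [my own sketch, with arbitrarily chosen Normal distributions for the hidden numbers and for the boundary, not part of the original puzzle] illustrates the gain:

import numpy as np

# Monte Carlo check of the randomised card rule: observe one of two hidden
# numbers at random and keep it iff it exceeds an independent random boundary
rng = np.random.default_rng(1)
n = 200_000

a = rng.normal(0.0, 1.0, n)        # hidden numbers (arbitrary distribution)
b = rng.normal(0.0, 1.0, n)
t = rng.normal(0.0, 1.0, n)        # random boundary, independent of a and b

observe_a = rng.integers(0, 2, n).astype(bool)   # which card is turned over
seen = np.where(observe_a, a, b)
other = np.where(observe_a, b, a)

keep = seen >= t                   # keep the observed card iff it beats the boundary
chosen = np.where(keep, seen, other)

print((chosen == np.maximum(a, b)).mean())   # noticeably above 0.5

With three iid N(0,1) draws the boundary falls between a and b one third of the time, so the printed frequency should be close to ½ + ½×⅓ ≈ 0.67, comfortably above the ½ of a blind pick.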

trip to München

Posted in Mountains, Statistics, Travel, University life, Wines on October 19, 2015 by xi'an

While my train ride to the fabulous De Gaulle airport was so delayed that I had less than ten minutes between jumping from the carriage and sitting in my plane seat, I handled the run through security and the endless corridors of the airport in the allotted time, and reached Munich in time for my afternoon seminar and several discussions that prolonged into a pleasant dinner of Wiener Schnitzel and Eisbier. This was very exciting as I met physicists and astrophysicists involved in population Monte Carlo and parallel MCMC and manageable harmonic mean estimates and intractable ABC settings (because simulating the data takes eons!). I wish the afternoon could have been longer. And while this is the third time I have come to Munich, I still have not managed to see the centre of town! Or even the nearby mountains. Maybe an unsuspected consequence of the Heisenberg principle…
