39ièmes Foulées de Malakoff

Posted in Running on February 17, 2013 by xi'an

I ran my annual 5k in Malakoff on Saturday, recovering from my flu in a much milder climate than last year. Thanks to the flu, I was out of breath after the first K and did poorly (3:38-3:58-3:53-4:00-3:54, ending in 19:26…) I still finished second in my V2 category. This was the first time I ran a race with my daughter, who also finished second (and last!) in her category. (It was about 15 degrees (C) warmer than last year and I think it helped in fighting off the leftovers of the flu! The radio was playing the 6th Brandenburg concerto again on the way back. Groundhog Day syndrome…?!)

art brut

Posted in pictures on December 12, 2012 by xi'an

seminar at CREST on predictive estimation

Posted in pictures, Statistics, University life on March 6, 2012 by xi'an

On Thursday, March 08, Éric Marchand (from Université de Sherbrooke, Québec, where I first heard of MCMC!, and currently visiting Université de Montpellier 2) will give a seminar at CREST. It is scheduled at 2pm in ENSAE (ask the front desk for the room!) and is related to a recent EJS paper with Dominique Fourdrinier, Ali Righi, and Bill Strawderman: here is the abstract from the paper (sorry, the pictures from Roma are completely unrelated, but I could not resist!):

We consider the problem of predictive density estimation for normal models under Kullback-Leibler loss (KL loss) when the parameter space is constrained to a convex set. More particularly, we assume that

$X \sim \mathcal{N}_p(\mu,v_x\mathbf{I})$

is observed and that we wish to estimate the density of

$Y \sim \mathcal{N}_p(\mu,v_y\mathbf{I})$

under KL loss when μ is restricted to the convex set C ⊂ ℝᵖ. We show that the best unrestricted invariant predictive density estimator p̂U is dominated by the Bayes estimator p̂πC associated with the uniform prior πC on C. We also study so-called plug-in estimators, giving conditions under which domination of one estimator of the mean vector μ over another under the usual quadratic loss translates into a domination result for certain corresponding plug-in density estimators under KL loss. Risk comparisons and domination results are also made between plug-in estimators and Bayes predictive density estimators. Additionally, minimaxity and domination results are given for the cases where: (i) C is a cone, and (ii) C is a ball.
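As a quick numerical aside (not from the paper, and in the unrestricted case only), the domination of plug-in densities is easy to check by simulation: the best unrestricted invariant predictive density has the well-known closed form p̂U(y|x) = Np(x, (vx+vy)I), and its KL risk can be compared with the naive plug-in Np(x, vyI) using the closed-form KL divergence between Gaussians. A minimal sketch, with illustrative values of p, vx, vy chosen here:

```python
import numpy as np

# Monte Carlo check (unrestricted case, illustrative values): the invariant
# predictive density N_p(x, (v_x+v_y) I) has lower KL risk than the naive
# plug-in N_p(x, v_y I). Analytically,
#   risk(invariant) = (p/2) * log(1 + v_x/v_y)   (constant in mu)
#   risk(plug-in)   = p * v_x / (2 * v_y)
# and log(1 + r) < r for r > 0 gives the domination.
rng = np.random.default_rng(0)
p, v_x, v_y, n_sim = 5, 1.0, 2.0, 200_000
mu = np.zeros(p)  # arbitrary: neither risk depends on mu here

X = rng.normal(mu, np.sqrt(v_x), size=(n_sim, p))
sq = np.sum((X - mu) ** 2, axis=1)  # |x - mu|^2 for each draw

# KL( N(mu, v_y I) || N(x, v_y I) ): plug-in density
kl_plugin = sq / (2 * v_y)

# KL( N(mu, v_y I) || N(x, s I) ) with s = v_x + v_y: invariant density
s = v_x + v_y
kl_invariant = 0.5 * (p * np.log(s / v_y) + p * v_y / s + sq / s - p)

print(kl_plugin.mean())     # ≈ p * v_x / (2 * v_y)
print(kl_invariant.mean())  # ≈ (p/2) * log(1 + v_x/v_y), smaller
```

Of course the interesting part of the paper is what happens when μ is further restricted to a convex C, where the uniform-prior Bayes estimator improves on p̂U; the sketch above only illustrates the baseline comparison.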

winter trees (2)

Posted in pictures, University life on February 17, 2012 by xi'an