**F**irst, Peter Grünwald had to cancel his lectures at **YES III** due to a severe flu, which was unfortunate both for him (!) and for the participants in the workshop. Indeed, I was quite interested in hearing about his latest developments on minimum description length priors… The lectures by Laurie Davies and Niels Hjort did take place, however, and were quite informative from my perspective: Laurie Davies gave a very general lecture on the notions of approximation and regularisation in Statistics, with a lot of good questions about the nature of “truth” and “model”, which was quite appropriate for this meeting. There was also a kind of ABC flavour to his talk (which made a sort of connection with mine), in that models were generally tested by simulating virtual datasets and checking their adequacy to the observed data. Maybe a bit too ad hoc and frequentist, as well as fundamentally dependent on the measure of adequacy (in a Vapnik-Chervonenkis sense), but still very interesting. (Of course, a Bayesian answer would also incorporate the consequences of a rejection by looking at the action under the alternative…) The second half of his lectures was about non-parametric regression, a topic I always find incompletely covered as to why and where the assumptions are made. But I think these lectures must have had a lasting impact on the young statisticians attending the workshop.

**N**iels Hjort first talked about the “quiet scandal of Statistics”, a nice phrase coined by Leo Breiman, which to some extent replies to the previous lectures in that he complained about the lack of accounting for the randomness/bias involved in selecting a model before working with it as if it were the “truth”. Another very interesting part of the lectures dealt with his focussed information criterion (FIC), which adds to the menagerie of information criteria, but also has an interesting link with the pre-test and shrinkage literature of the 70s and 80s. Selecting a model according to its estimated performance in terms of a common loss function is certainly of interest, even though incorporating everything within a single Bayesian framework would certainly be more coherent. Niels also included a fairly exciting data analysis about the authorship of the Nobel Prize-winning novel **And Quiet Flows the Don**, which he attributed to the Nobel Prize winner Sholokhov (solely on the basis of the length of the sentences). Most of his lecture covered material related to his recent book *Model Selection and Model Averaging*, co-authored with Gerda Claeskens.

**M**y only criticism about the meeting is that, despite the relatively small audience, there was little interaction and discussion during the talks (which makes sense for my talk, as hardly anyone besides Niels Hjort was interested in computational Bayes!). The questions during the talks, as well as the debates, mostly came from the three senior lecturers. This certainly occurs in other young statisticians’ meetings, but I think the audience should be encouraged to participate, to debate and to criticise, because this is part of the job of being a researcher. Having registered discussants, for instance, would help.

**A**nother personal regret is having missed the opportunity to attend a concert by Jordi Savall, who was playing Marais’ *Leçons de Ténèbres* in Eindhoven on Tuesday night…