Taking advantage of being in San Francisco, I flew yesterday to Australia over the Pacific, crossing the international date line for the first time. The 15-hour Qantas flight to Sydney was remarkably smooth and quiet, with most passengers sleeping most of the way, and it gave me a great opportunity to go over several papers I wanted to read and review. Over the next week or so, I will work with my friends and co-authors David Frazier and Gael Martin at Monash University (and undoubtedly enjoy the great food and wine scene!) before flying back to Paris (alas via San Francisco rather than direct).
– PhD in applied mathematics or in statistics with a strong mathematical background
– Enthusiastic interest in research in Bayesian statistics, exemplified through publications in international journals in topics including, but not limited to, Bayesian non-parametric methods, Bayesian inference for high-dimensional and complex data, Bayesian time series analysis and state space modelling, efficient Markov chain Monte Carlo methods
– Interest in applications in economics, finance, and business
– Excellent programming skills (e.g. in R or Matlab)
– German language skills are not a prerequisite
Here are the details for those interested in this exciting opportunity!
After our aborted attempt at Monte Rosa, Abele Blanc treated us to a quick visit to Forte di Bard, a 19th-century military fortress in the Valley of Aosta [a first version of which was razed by Napoleon’s troops in 1800] on top of the medieval village of Bard. Ironically, the current fortress never saw action, as Napoleon’s siege was the last invasion of the kingdom of Savoy by French troops.
The buildings are impressive, so seamlessly connected to the rock spur that supports them that they appear to have grown out of it. They reminded me of Vauban’s fortresses, with the feeling that they were already outdated when they were built. (On the French Savoy side, there is a series of fortresses that similarly saw no battle: designed to keep the French out, they became useless overnight when this part of Savoy was ceded to France in exchange for its support of the unification of Italy. For instance, there is such a fort in Aussois, which now houses a hostel, a gastronomical restaurant [which we enjoyed at O’Bayes 03], and a via ferrata…)
The fortress has been recently and beautifully renovated with the help of the Italian State and of the European Union. It houses conferences and art exhibits, like those on Marc Chagall and Elliott Erwitt that we briefly saw, missing the massive museum of the Alps… A few dozen kilometers from Torino, it would be a perfect location for a small workshop, albeit not large enough for a future MCMski.
Before attending MCqMC in Stanford in two weeks, I will take some vacations in Northern California [really North!] with my family, starting with the San Francisco ½ marathon tomorrow. So expect delays [as we already got stuck six… twenty-seven… thirty hours in De Gaulle airport thanks to a strike!] and mostly non-statistical entries on the ‘Og. And pictures.
Just mentioning that a second version of our paper has been arXived and submitted to JMLR, the main addition being a reference to the abcrf package. And just restating our best selling arguments: (i) random forests do not require a preliminary selection of the summary statistics, since an arbitrary number of summaries can be used as input to the forest, even when a large number of useless white-noise variables is included; (ii) there is no longer a tolerance level involved in the process, since the many trees in the random forest define a natural, if rudimentary, distance corresponding to falling or not falling in the same leaf as the observed vector of summary statistics η(y); and (iii) the reference table simulated from the prior (predictive) distribution does not need to be as large as in usual ABC settings, hence this approach leads to significant gains in computing time, since producing the reference table is usually the costly part! To the point that deriving a different forest for each univariate transform of interest is truly a minor drag in the overall computing cost of the approach.
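To give a flavour of the three points above, here is a minimal, purely illustrative sketch in Python (using scikit-learn's generic regression forests, not our abcrf R package, and a made-up toy model): a forest is trained on a modest prior-predictive reference table whose summaries include deliberately useless white-noise columns, with no summary selection and no tolerance level anywhere.

```python
# Toy random-forest ABC sketch (hypothetical example, not abcrf itself):
# theta ~ U(0, 10), and each simulated dataset is summarised by two noisy
# observations of theta plus five pure white-noise summaries.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_ref = 2000  # modest reference table simulated from the prior

theta = rng.uniform(0, 10, n_ref)
informative = theta[:, None] + rng.normal(0, 1, (n_ref, 2))
noise = rng.normal(0, 1, (n_ref, 5))  # useless summaries, left in on purpose
summaries = np.hstack([informative, noise])

# Point (i): all summaries go in unscreened; point (ii): no tolerance,
# the forest's leaf structure plays the role of the ABC distance.
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(summaries, theta)

# "Observed" summaries eta(y), generated with true theta = 4
eta_y = np.hstack([4 + rng.normal(0, 1, 2), rng.normal(0, 1, 5)])
estimate = float(forest.predict(eta_y[None, :])[0])
print(estimate)  # close to the true value 4 despite the noise columns
```

Point (iii) is reflected in `n_ref`: a few thousand prior simulations suffice here, and refitting a second forest for another transform of θ on the same reference table costs next to nothing compared with regenerating the table.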