Archive for University of Warwick

fusing simulation with data science [18-19 July 2023]
Posted in pictures, Running, Statistics, Travel, University life with tags climate change, CRiSM, data assimilation, data science, fusion, Met Office, PDEs, simulation, simulator model, solver, tea, University of Warwick, weather modelling, workshop on June 5, 2023 by xi'an

In collaboration with the Met Office, my friend and Warwick colleague Rito Dutta is co-organising a two-day workshop in Warwick in July on the use of statistics and machine learning tools in weather prediction. Attendance is free, but registration is needed for tea breaks.

foxhuntshire
Posted in Books, pictures, Running, Travel, University life with tags chasse à courre, countryside, Coventry, fox, fox hunting, hunting, Kenilworth, Midlands, The New York Times, University of Warwick, Warwickshire on May 21, 2023 by xi'an

postdoctoral research position
Posted in Statistics, Travel, University life with tags #ERCSyG, ABC, Bayesian decision theory, data privacy, ERC, ERC Synergy Grant, European Research Council, federated learning, MCMC, multi-agent decision theory, Ocean, Paris, PariSanté campus, postdoctoral position, Université Paris Dauphine, University of California Berkeley, University of Warwick on April 27, 2023 by xi'an

Through the ERC Synergy grant OCEAN (On intelligenCE And Networks: Synergistic research in Bayesian Statistics, Microeconomics and Computer Sciences), I am seeking one postdoctoral researcher with an interest in Bayesian federated learning, distributed MCMC, approximate Bayesian inference, and data privacy.
The project is based at Université Paris Dauphine, on the new PariSanté Campus. The postdoc will join the OCEAN teams of researchers directed by Éric Moulines and Christian Robert to work on the above themes, with foci ranging from statistical theory to Bayesian methodology, algorithms, and medical applications.
Qualifications
The candidate should hold a doctorate in statistics or machine learning, with demonstrated skills in Bayesian analysis and Monte Carlo methodology, a record of publications in these domains, and an interest in working as part of an interdisciplinary international team. Scientific maturity and research autonomy are a must.
Funding
Besides a two-year postdoctoral contract at Université Paris Dauphine (with a possible one-year extension), at a salary of 31K€ per year, the project will fund travel to OCEAN partners' institutions (University of Warwick or University of California Berkeley) and participation in yearly summer schools. University benefits are attached to the position and, as per ERC rules, no teaching duty is involved.
The postdoctoral work will begin 1 September 2023.
Application Procedure
To apply, preferably before 31 May, please send the following as a single pdf to Christian Robert (bayesianstatistics@gmail.com):
- a letter of application,
- a CV,
- letters of recommendation, sent directly by the recommenders.
top off…
Posted in Statistics with tags college ranking, International Statistical Review, John Wiley, PSL, Rao-Blackwellisation, survey, Université Paris Dauphine, University of Warwick on April 25, 2023 by xi'an

the Bayesian learning rule [One World ABC'minar, 27 April]
Posted in Books, Statistics, University life with tags ABC, Approximate Bayesian computation, approximate Bayesian inference, Bayesian deep learning, Bayesian learning, Kalman filter, Newton-Raphson algorithm, One World ABC Seminar, RIKEN, stochastic gradient descent, Tokyo, University of Warwick on April 24, 2023 by xi'an

The next One World ABC seminar takes place on-line (pre-registration required) on 27 April, 9:30am UK time, with Mohammad Emtiyaz Khan (RIKEN-AIP, Tokyo) speaking about the Bayesian learning rule:
We show that many machine-learning algorithms are specific instances of a single algorithm called the Bayesian learning rule. The rule, derived from Bayesian principles, yields a wide range of algorithms from fields such as optimization, deep learning, and graphical models. This includes classical algorithms such as ridge regression, Newton's method, and the Kalman filter, as well as modern deep-learning algorithms such as stochastic gradient descent, RMSprop, and Dropout. The key idea in deriving such algorithms is to approximate the posterior using candidate distributions estimated with natural gradients. Different candidate distributions result in different algorithms, and further approximations to natural gradients give rise to variants of those algorithms. Our work not only unifies, generalizes, and improves existing algorithms, but also helps us design new ones.
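To make the abstract concrete, here is a minimal sketch (mine, not from the talk) of one well-known instance of the rule, a diagonal-Gaussian candidate updated by natural-gradient steps, sometimes called variational online Newton. The toy quadratic loss and the names a, A, grad, hess_diag, and rho are illustrative assumptions; expectations under the candidate are replaced by a single Monte Carlo draw, and freezing the precision s turns the mean update back into plain stochastic gradient descent, which is how the rule "contains" SGD as a special case.

import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic loss l(theta) = 0.5 * (theta - a)' diag(A) (theta - a):
# its minimiser is a and its Hessian is diag(A). (Illustrative choice.)
a = np.array([1.0, -2.0])
A = np.array([2.0, 0.5])

def grad(theta):        # gradient of the loss at theta
    return A * (theta - a)

def hess_diag(theta):   # diagonal of the Hessian (constant for this loss)
    return A

# Diagonal-Gaussian candidate q = N(m, diag(1/s)), updated by the rule
m = np.zeros(2)   # candidate mean
s = np.ones(2)    # candidate precision
rho = 0.05        # learning rate

for t in range(2000):
    theta = m + rng.standard_normal(2) / np.sqrt(s)  # one draw from q
    s = (1 - rho) * s + rho * hess_diag(theta)       # precision update
    m = m - rho * grad(theta) / s                    # Newton-like mean step

print(m)  # close to a: the candidate mean finds the minimiser
print(s)  # close to A: the precision matches the loss curvature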