Archive for CRiSM

four positions at Warwick Statistics, apply!

Posted in pictures, Statistics, Travel, University life on March 26, 2018 by xi'an

Enthusiastic and excellent academics are sought to be part of our Department of Statistics at Warwick, one of the world’s most prominent and most research-active departments of Statistics! We are advertising four posts in total, which reflects the strong commitment of the University of Warwick to invest in Statistics. We intend to fill the following positions:

  • Assistant or Associate Professor of Statistics (two positions)

  • Harrison Early Career Assistant Professor of Statistics (two positions)

You will have expertise in statistics (to be interpreted in the widest sense and to include both applied and methodological statistics, probability, probabilistic operational research and mathematical finance together with interdisciplinary topics involving one or more of these areas) and you will help shape research and teaching leadership in this fast-developing discipline. Applicants for senior positions should have an excellent publication record and proven ability to secure research funding. Applicants for more junior positions should show exceptional promise to become leading academics.

While the posts are open to applicants with expertise in any field of statistics (widely interpreted as above), the Department is particularly interested in strengthening its existing group in Data Science. The Department is heavily involved in the Warwick Data Science Institute and the Alan Turing Institute, the national institute for data science, headquartered in London. If interested, a successful candidate can apply to spend part of their time at the Alan Turing Institute as a Turing Fellow.

Closing date: 10 April 2018 for all posts.

Informal enquiries can be addressed to Professors Mark Steel, Gareth Roberts, and David Firth or to any other senior member of the Warwick Statistics Department. Applicants at Assistant/Associate levels should ask their referees to send letters of recommendation by the closing date to the Departmental Administrator, Mrs Paula Matthews.

In addition to any specific positions announced above, the Department of Statistics strongly encourages applicants for funded, open research fellowship competitions to consider holding their fellowship in statistics or probability at Warwick.

summer school on computational statistics [deadline]

Posted in Books, pictures, Statistics, Travel, University life on February 23, 2018 by xi'an

Reminding ‘Og’s readers and others that the early bird registration deadline for our LMS/CRiSM summer school on computational statistics at the University of Warwick (9-13 July 2018) is next Thursday, 1 March 2018. This also applies to bursary applications, so do not dally and apply now!

the definitive summer school poster

Posted in pictures, Statistics, University life on January 8, 2018 by xi'an

four positions at Warwick Statistics, apply!

Posted in Statistics on November 21, 2017 by xi'an

Enthusiastic and excellent academics are sought to be part of our Department of Statistics at Warwick, one of the world’s most prominent and most research-active departments of Statistics. We are advertising four posts in total, which reflects the strong commitment of the University of Warwick to invest in Statistics. We intend to fill the following positions:

  • Assistant or Associate Professor of Statistics (two positions)

  • Reader of Statistics

  • Full Professor of Statistics.

All posts are permanent, with posts at the Assistant level subject to probation.

You will have expertise in statistics (to be interpreted in the widest sense and to include both applied and methodological statistics, probability, probabilistic operational research and mathematical finance together with interdisciplinary topics involving one or more of these areas) and you will help shape research and teaching leadership in this fast-developing discipline. Applicants for senior positions should have an excellent publication record and proven ability to secure research funding. Applicants for more junior positions should show exceptional promise to become leading academics.

While the posts are open to applicants with expertise in any field of statistics (widely interpreted as above), the Department is particularly interested in strengthening its existing group in Data Science. The Department is heavily involved in the Warwick Data Science Institute and the Alan Turing Institute, the national institute for data science, headquartered in London. If interested, a successful candidate can apply to spend part of their time at the Alan Turing Institute as a Turing Fellow.

Closing date: 3 January 2018 for the Assistant/Associate level posts and 10 January 2018 for the Full Professor position.

Informal enquiries can be addressed to Professors Mark Steel, Gareth Roberts, and David Firth or to any other senior member of the Warwick Statistics Department. Applicants at Assistant/Associate levels should ask their referees to send letters of recommendation by the closing date to the Departmental Administrator, Mrs Paula Matthews.

estimating constants [survey]

Posted in Books, pictures, Statistics, University life on February 2, 2017 by xi'an

A new survey on Bayesian inference with intractable normalising constants was posted on arXiv yesterday by Jaewoo Park and Murali Haran. A rather massive work of 58 pages, almost handy for a short course on the topic! In particular, it goes through the most common MCMC methods with a detailed description, followed by comments on the components to be calibrated and the potential theoretical backup. This includes for instance the method of Liang et al. (2016) that I reviewed a few months ago. As well as the Wang-Landau technique we proposed with Yves Atchadé and Nicolas Lartillot. And the noisy MCMC of Alquier et al. (2016), also reviewed a few months ago. (The Russian Roulette solution is only mentioned very briefly as “computationally very expensive”, yet it is still used in some illustrations. The whole area of pseudo-marginal MCMC is also missing from the picture.)

“…auxiliary variable approaches tend to be more efficient than likelihood approximation approaches, though efficiencies vary quite a bit…”

The authors distinguish between MCMC methods where the normalising constant is approximated and those where it is bypassed by an auxiliary representation. The survey also distinguishes between asymptotically exact and asymptotically inexact solutions. For instance, running a finite number of MCMC steps instead of sampling exactly from the associated target results in an asymptotically inexact method. The question that remains open is what to do with the output, i.e., whether or not there is a way to correct for this error. In the illustration for the Ising model, the double Metropolis-Hastings version of Liang et al. (2010) achieves for instance massive computational gains, but also exhibits a persistent bias that would go undetected were it the sole method implemented. This aspect of approximate inference is not really explored in the paper, but constitutes a major issue for modern statistics (and machine learning as well, when inference is taken into account).
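To make the auxiliary-variable idea concrete, here is a minimal sketch (mine, not the survey's code) of the exchange algorithm of Murray et al. (2006), of which double Metropolis-Hastings is the asymptotically inexact version. The toy model f(x|θ) ∝ exp(-θx²) is chosen purely because exact auxiliary simulation is then available; for the Ising model, this exact step is precisely what double MH replaces with a few MCMC moves, hence the bias.

```python
# exchange algorithm sketch: the intractable Z(theta) cancels in the ratio
import numpy as np

rng = np.random.default_rng(0)

def log_f(x, theta):     # unnormalised log-likelihood, Z(theta) never computed
    return -theta * np.sum(x**2)

def sample_f(theta, n):  # exact simulation from f(.|theta), i.e. N(0, 1/(2 theta))
    return rng.normal(0.0, np.sqrt(0.5 / theta), size=n)

def log_prior(theta):    # Exp(1) prior on theta > 0
    return -theta

x = rng.normal(0.0, 1.0, size=50)               # observed data
theta, chain = 1.0, []
for _ in range(10_000):
    prop = theta + 0.3 * rng.standard_normal()  # symmetric random-walk proposal
    if prop > 0:
        y = sample_f(prop, len(x))              # auxiliary data, exactly from f(.|prop)
        # both Z(theta) and Z(prop) cancel in this acceptance ratio:
        log_alpha = (log_f(x, prop) + log_prior(prop) + log_f(y, theta)
                     - log_f(x, theta) - log_prior(theta) - log_f(y, prop))
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
    chain.append(theta)
```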

In conclusion, this survey provides a serious exploration of recent MCMC methods. It begs for a second part involving particle filters, which have often proven to be faster and more efficient than MCMC methods, at least in state space models. In that regard, Nicolas Chopin and James Ridgway examined further techniques in their call to leave the Pima Indians [dataset] alone.

contemporary issues in hypothesis testing

Posted in Statistics on September 26, 2016 by xi'an

This week [at Warwick], among other things, I attended the CRiSM workshop on hypothesis testing, giving the same talk as at ISBA last June. There was a most interesting and unusual talk by Nick Chater (from Warwick) about the psychological aspects of hypothesis testing, namely about the unnatural features of an hypothesis in everyday life, i.e., how far this formalism stands from human psychological functioning. Or what we know about it. And then my Warwick colleague Tom Nichols explained how his recent work on permutation tests for fMRIs, published in PNAS, testing hypotheses that should be null on real data and finding a high rate of false positives, got the medical imaging community all up in arms due to over-simplified reports in the media questioning the validity of 15 years of research on fMRI and the related 40,000 papers! For instance, some of the headlines questioned the entire research in the area. Or transformed a software bug missing the boundary effects into a major flaw. (See this podcast on Not So Standard Deviations for a thoughtful discussion of the issue.) One conclusion of this story is to be wary of assertions when submitting a hot story to journals with a substantial non-scientific readership! The afternoon talks were equally exciting, with Andrew explaining to us live from New York why he hates hypothesis testing and prefers model building. With the birthday model as an example. And David Draper gave an encompassing talk about the distinctions between inference and decision, proposing a Jaynes information criterion and illustrating it on Mendel‘s historical [and massaged!] pea dataset. The next morning, Jim Berger gave an overview of the frequentist properties of the Bayes factor, with in particular a novel [to me] upper bound on the Bayes factor associated with a p-value (Sellke, Bayarri and Berger, 2001):

B^{10}(p) \le \frac{1}{-e\, p \log p}

with the specificity that B¹⁰(p) is not testing the original hypothesis [problem] but a substitute where the null is the hypothesis that p is uniformly distributed, versus a non-parametric alternative that p is more concentrated near zero. This reminded me of our PNAS paper on the impact of summary statistics upon Bayes factors. And of some forgotten reference studying Bayesian inference based solely on the p-value… It is too bad I had to rush back to Paris, as this made me miss the last talks of this fantastic workshop centred on maybe the most important aspect of statistics!
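As a numerical footnote (my own arithmetic, not from the talk), the bound is easily evaluated for p < 1/e and shows how weak the evidence carried by commonly used p-values actually is:

```python
# the Sellke-Bayarri-Berger upper bound on the Bayes factor B10 at a p-value p
import numpy as np
for p in (0.05, 0.01, 0.001):
    print(p, 1.0 / (-np.e * p * np.log(p)))
# p = 0.05 caps B10 at about 2.46 and p = 0.01 at about 7.99: even a
# "significant" p-value cannot correspond to overwhelming evidence
```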

retrospective Monte Carlo

Posted in pictures, Running, Statistics, Travel, University life on July 12, 2016 by xi'an

[the pond in front of the Zeeman building, University of Warwick, July 01, 2014]

The past week I spent in Warwick ended up with a workshop on retrospective Monte Carlo, which covered exact sampling, debiasing, Bernoulli factory problems and multi-level Monte Carlo, a definitely exciting package! (Not to mention opportunities to go climbing with some participants.) In particular, several talks focussed on the debiasing technique of Rhee and Glynn (2012) [inspired from von Neumann and Ulam, and already discussed in several posts here]. Including results in functional spaces, as demonstrated by a multifaceted talk by Sergios Agapiou who merged debiasing, deburning, and perfect sampling.

From a general perspective on unbiasing, while there exist sufficient conditions to ensure finite variance and aim at an optimal version, I feel a broader perspective should be adopted towards comparing those estimators with biased versions that take less time to compute. In a diffusion context, Chang-han Rhee presented a detailed argument as to why his debiasing solution achieves the canonical O(1/√n) Monte Carlo convergence rate, in opposition to the regular discretised diffusion, but multi-level Monte Carlo also achieves this convergence speed. We had a nice discussion about this point at the break, with my slow understanding that continuous-time processes have much, much stronger reasons for sticking to unbiasedness. At the poster session, I had the nice surprise of reading a poster on the penalty method I had discussed that same morning! Used for subsampling when scaling MCMC.
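For readers unfamiliar with the Rhee and Glynn construction, here is a bare-bones sketch (my own rendering, under simplifying assumptions): given coupled approximations Y₀, Y₁, … of a limiting quantity and a random truncation level N with known tail probabilities, the weighted telescoping sum below is unbiased for the limiting expectation.

```python
# single-term debiasing estimator in the spirit of Rhee & Glynn (2012):
# Z = sum_{n<=N} (Y_n - Y_{n-1}) / P(N >= n) has expectation lim E[Y_n]
import numpy as np

rng = np.random.default_rng(1)

def single_term_estimator(r=0.6, max_n=14):
    # P(N >= n) = r**n; the cap at max_n is only for this sketch and
    # introduces a negligible (0.6**14 ~ 8e-4 tail) bias
    N = min(rng.geometric(1 - r) - 1, max_n)
    stream = rng.standard_normal(2 ** N) + 3.0  # shared stream couples the Y_n
    Z, prev = 0.0, 0.0
    for n in range(N + 1):
        y = stream[: 2 ** n].mean()             # Y_n: mean of first 2**n draws
        Z += (y - prev) / r ** n                # increment weighted by 1/P(N >= n)
        prev = y
    return Z

print(np.mean([single_term_estimator() for _ in range(5000)]))  # close to 3
```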

On the second day, Gareth Roberts talked about the Zig-Zag algorithm (which reminded me of the cigarette paper brand). This method has connections with slice sampling but it is a continuous time method which, in dimension one, means running a constant velocity particle that starts at a uniform value between 0 and the maximum density value and proceeds horizontally until it hits the boundary, at which time it moves to another uniform. Roughly. More specifically, this approach uses piecewise deterministic Markov processes, with a radically new approach to simulating complex targets based on continuous time simulation. With computing times that [counter-intuitively] do not increase with the sample size.
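Out of curiosity, I later coded a one-dimensional toy version of the Zig-Zag sampler (my own sketch, for a standard Gaussian target, so that every Poisson clock can be inverted exactly). It illustrates the piecewise deterministic mechanism: a unit-speed particle switches its velocity at the events of an inhomogeneous Poisson process with rate max(0, v U'(x)), where U(x) = x²/2 = -log π(x).

```python
# one-dimensional Zig-Zag for a N(0,1) target, with exact event-time inversion
import numpy as np

rng = np.random.default_rng(2)

def zigzag_gaussian(T=10_000.0):
    x, v, t = 0.0, 1.0, 0.0
    xs, ts = [x], [t]
    while t < T:
        # first event of the Poisson process with rate
        # max(0, v*(x + v*s)) = max(0, v*x + s), solved by inversion
        E = rng.exponential()
        a = v * x
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * E)
        x, t, v = x + v * tau, t + tau, -v      # move, then flip velocity
        xs.append(x); ts.append(t)
    return np.array(ts), np.array(xs)

ts, xs = zigzag_gaussian()
# time-average of x(t)^2 along the piecewise linear path: should be near 1
segs = ((xs[:-1]**2 + xs[:-1] * xs[1:] + xs[1:]**2) / 3) * np.diff(ts)
print(segs.sum() / ts[-1])
```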

Mark Huber gave another exciting talk around the Bernoulli factory problem, connecting with perfect simulation and demonstrating this is not solely a formal Monte Carlo problem! Some earlier posts here have discussed papers on that problem, but I was unaware of the results bounding [from below] the expected number of steps to simulate B(f(p)) from a (p,1-p) coin. If not of the open questions surrounding B(2p). The talk was also great in that it centred on recursion and included a fundamental theorem of perfect sampling! Not that surprising given Mark’s recent book on the topic, but exhilarating nonetheless!!!
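As a reminder of what a Bernoulli factory does, here is the simplest example I know (von Neumann's trick, hence not one of the open problems above!): producing a fair coin, f(p)=1/2, from flips of a p-coin with unknown p, at a random cost in flips, which is exactly the quantity the lower bounds mentioned above apply to.

```python
# von Neumann Bernoulli factory: B(1/2) from a p-coin, p unknown
import numpy as np

rng = np.random.default_rng(3)

def p_coin(p):
    return rng.uniform() < p

def fair_coin(p):
    # flip twice, keep the first flip if the pair is discordant, else repeat;
    # HT and TH are equally likely, whence exact fairness for any p in (0,1)
    flips = 0
    while True:
        a, b = p_coin(p), p_coin(p)
        flips += 2
        if a != b:
            return int(a), flips   # expected cost 1/(p(1-p)) flips

draws = [fair_coin(0.9) for _ in range(10_000)]
print(np.mean([d for d, _ in draws]))   # ~0.5, regardless of p
print(np.mean([f for _, f in draws]))   # ~1/0.09 ≈ 11.1 flips on average
```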

The final talk of the second day was given by Peter Glynn, with connections with Chang-han Rhee’s talk the previous day, but with a different twist. In particular, Peter showed how to achieve perfect or exact estimation rather than perfect or exact simulation by a fabulous trick: perfect sampling is better understood through the construction of random functions φ¹, φ², … such that X²=φ¹(X¹), X³=φ²(X²), … Hence,

X^t=\varphi^{t-1}\circ\varphi^{t-2}\circ\ldots\circ\varphi^{1}(X^1)

which helps in constructing coupling strategies. However, since the φ’s are usually iid, the above is generally distributed like

Y^t=\varphi^{1}\circ\varphi^{2}\circ\ldots\circ\varphi^{t-1}(X^1)

which seems pretty similar but offers a much better concentration as t grows. Truncating the function composition then becomes feasible, producing unbiased and more efficient estimators. (I realise this is not a particularly clear explanation of the idea, detailed in an arXival I somewhat missed. When seen this way, Y would seem much more expensive to compute [than X].)
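Here is a toy illustration (my own, with iid random affine maps, i.e., an AR(1) construction) of the contrast: for every fixed t, X^t and Y^t share the same distribution, yet the backward composition Y^t converges almost surely while the forward chain keeps fluctuating, and recomputing Y^t from scratch at each t makes it the visibly more expensive of the two.

```python
# forward vs backward composition of iid random maps phi_i(x) = a*x + b_i
import numpy as np

rng = np.random.default_rng(4)
a, x1, T = 0.9, 0.0, 200
b = rng.standard_normal(T)        # b_i defining the iid maps phi_i

def forward(t):
    # X^t = phi_{t-1} o ... o phi_1 (x1): the newest map is applied last
    x = x1
    for i in range(t):
        x = a * x + b[i]
    return x

def backward(t):
    # Y^t = phi_1 o ... o phi_{t-1} (x1): the newest map is applied first,
    # so each Y^t must be recomputed from scratch (hence the extra cost)
    x = x1
    for i in reversed(range(t)):
        x = a * x + b[i]
    return x

Xs = [forward(t) for t in range(1, T + 1)]
Ys = [backward(t) for t in range(1, T + 1)]
print(np.std(Xs[-50:]))   # forward chain: still fluctuating at stationarity
print(np.std(Ys[-50:]))   # backward composition: frozen to its a.s. limit
```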