Archive for Kenilworth

postgraduate open day at Warwick [4 Dec]

Posted in pictures, Statistics, University life on November 12, 2019 by xi'an

The Department of Statistics at the University of Warwick is holding an open day for prospective PhD students on 4 December 2019, starting at 2pm (with free lunch at 1pm), in the Mathematical Sciences Building common room (MB1.02). The Director of Graduate Studies, Professor Mark Steel, and the PhD admissions tutors, Professors Martyn Plummer and Bärbel Finkenstädt Rand, will give short presentations about what it means to do a PhD, what it means to do it at Warwick, the benefits of a PhD degree, and the application process.

Subsequently there will be an informal meeting, during which students will have the opportunity to ask questions and find out more about the different PhD opportunities at Warwick Statistics; the department offers a very broad range of projects, giving potential applicants plenty of choice. Current members of staff will be invited to participate and to discuss potential projects.

UK travel expenses will be covered by the Department of Statistics (standard-class travel by public transport with pre-booked tickets). Please register if you are interested in attending this event.

O’Bayes 19/2

Posted in Books, pictures, Running, Travel, University life on July 1, 2019 by xi'an

One talk on Day 2 of O'Bayes 2019 was by Ryan Martin on data-dependent priors (or "priors"). Which I have already discussed on this blog. Including the notion of a Gibbs posterior for quantities that "are not always defined through a model" [which is debatable if one sees them as part of a semi-parametric model]. A Gibbs posterior that is built through a pseudo-likelihood constructed from the empirical risk, which reminds me of Bissiri, Holmes and Walker. Although it requires a prior on a quantity that is not part of a model. And is not necessarily a true posterior, nor necessarily one with the same concentration rate as a true posterior. Constructing a data-dependent distribution on the parameter does not necessarily deliver an interesting inference and, to keep up with the theme of the conference, carries no automatic claim to [more] "objectivity".
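To make the construction concrete, here is a minimal sketch of a Gibbs posterior for a population median, with the empirical absolute-error risk playing the role of a negative log-likelihood; the learning rate omega, the flat prior, and the random-walk scale are my own illustrative choices, not anything from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=200)

def empirical_risk(theta):
    # absolute-error risk, minimised at the population median
    return np.mean(np.abs(data - theta))

omega = 5.0  # learning rate tempering the pseudo-likelihood (a tuning choice)

def log_gibbs_post(theta):
    # flat prior: Gibbs "posterior" density proportional to exp(-omega * n * R_n(theta))
    return -omega * len(data) * empirical_risk(theta)

# random-walk Metropolis targeting the Gibbs posterior
theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + rng.normal(scale=0.3)
    if np.log(rng.uniform()) < log_gibbs_post(prop) - log_gibbs_post(theta):
        theta = prop
    chain.append(theta)

est = np.mean(chain[5_000:])  # close to the sample median of data
```

The point of the sketch is that nothing in it requires a generative model for the data: only a loss and a prior on the quantity of interest.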

And after calling a prior both Beauty and The Beast!, Erlis Ruli argued for a "bias-reduction" prior, where the prior is the solution to a differential equation related to some cumulants, connected with earlier work of David Firth (Warwick). An interesting conundrum is how to create an MCMC algorithm when the prior is that intractable, with possible help from PDMP techniques like the Zig-Zag sampler.
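For what a PDMP answer could look like, here is a minimal one-dimensional Zig-Zag sketch for a standard normal target, where the event rate can be inverted in closed form; handling the intractable bias-reduction prior itself would instead require bounding its score and thinning the event times, which this toy deliberately avoids:

```python
import math
import random

random.seed(1)

def zigzag_event_time(x, v):
    # for the potential U(x) = x^2/2 the event rate is max(0, a + t) with a = v * x,
    # so its time integral can be inverted exactly against E ~ Exp(1)
    a = v * x
    e = -math.log(random.random())
    if a >= 0:
        return -a + math.sqrt(a * a + 2.0 * e)
    return -a + math.sqrt(2.0 * e)

x, v, total_t = 0.0, 1.0, 0.0
m1 = m2 = 0.0  # trajectory integrals of x and x^2
for _ in range(200_000):
    tau = zigzag_event_time(x, v)
    # integrate x and x^2 along the linear segment x(t) = x + v t, 0 <= t <= tau
    m1 += x * tau + v * tau**2 / 2.0
    m2 += x**2 * tau + x * v * tau**2 + tau**3 / 3.0
    total_t += tau
    x += v * tau
    v = -v  # flip the velocity at the event

mean_est = m1 / total_t               # should be near 0
var_est = m2 / total_t - mean_est**2  # should be near 1
```

Note that moments are estimated by integrating along the piecewise-linear trajectory rather than by averaging discrete samples, which is the natural way to use a PDMP output.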

While Peter Orbanz's talk was centred on a central limit theorem under group invariance, further penalised by being the last of the (sun) day, Peter did a magnificent job of presenting the result and motivating each term. It reminded me of the work Jim Bondar was doing in Ottawa in the 1980s on Haar measures for Bayesian inference. Including the notion of amenability [a term due to von Neumann] that I had not met since then. (Neither have I met Jim since the last summer I spent at Carleton.) The CLT and associated LLN are remarkable in that the average is not over observations but over shifts of the same observation under elements of a subgroup of transformations. I wondered as well about a potential connection with the Read Paper of Kong et al. in 2003 on the use of group averaging for Monte Carlo integration [a connection apart from the fact that both discussants, Michael Evans and myself, are present at this conference].
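As a reminder of what group averaging buys in Monte Carlo integration, here is a toy version of the idea behind Kong et al. (2003), using the sign-flip group that leaves the standard normal invariant; the integrand exp(·) is my own arbitrary choice, picked only because its expectation is known:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)

f = np.exp  # integrand, with E[exp(X)] = exp(1/2) for X ~ N(0,1)

# plain Monte Carlo estimator of E[f(X)]
plain = f(x).mean()

# average f over the orbit {x, -x} of the sign-flip group,
# which leaves the N(0,1) reference measure invariant
averaged = (0.5 * (f(x) + f(-x))).mean()

truth = np.exp(0.5)
```

Averaging the integrand over the orbit keeps the estimator unbiased, since the group preserves the reference measure, while reducing its variance in a Rao-Blackwell fashion.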

troubling trends in machine learning

Posted in Books, pictures, Running, Statistics, University life on July 25, 2018 by xi'an

This morning, in Coventry, while having an n-th cup of tea after a very early morning run (light comes early at this time of the year!), I spotted an intriguing title in the arXivals of the day, by Zachary Lipton and Jacob Steinhardt, addressing the academic shortcomings of machine learning papers. While I first thought little of the attempt to address poor scholarship in the machine learning literature, I read it with growing interest and, although I am pessimistic about the chances of reversing the trend, given the relentless pace and massive output of the community, I consider the exercise worth conducting, if only to launch a debate on the excesses found in the literature.

“…desirable characteristics:  (i) provide intuition to aid the reader’s understanding, but clearly distinguish it from stronger conclusions supported by evidence; (ii) describe empirical investigations that consider and rule out alternative hypotheses; (iii) make clear the relationship between theoretical analysis and intuitive or empirical claims; and (iv) use language to empower the reader, choosing terminology to avoid misleading or unproven connotations, collisions with other definitions, or conflation with other related but distinct concepts”

The points made by the authors are (p.1):

  1. Failure to distinguish between explanation and speculation
  2. Failure to identify the sources of empirical gains
  3. Mathiness
  4. Misuse of language

Again, I had misgivings about point 3., but this is not an anti-maths argument, rather one about the recourse to vaguely connected or oversold mathematical results as a way to support a method.

Most interestingly (and living dangerously!), the authors select specific papers to illustrate their points, picking from well-established authors and from their own papers rather than from junior authors. And they also include counter-examples of papers going the(ir) right way. Among the recommendations for emerging from the morass of poor scholarship, they suggest favouring critical writing and retrospective surveys (provided authors can be found for these!). And they mention open reviews before I can mention them myself. While one would think that published anonymous reviews are a step in the right direction, I would actually say that this should be the norm (plus or minus anonymity) for all journals or successors of journals (PCIs coming strongly to mind). But requiring more work from the referees implies rewards for said referees, as done in some biology and hydrology journals I have refereed for (and in PCIs of course).

running by Kenilworth Castle [jatp]

Posted in Kids, pictures, Running, Travel on May 26, 2018 by xi'an

Last week, while in Warwick, I had a nice warm afternoon run around Kenilworth, in the fields with Nick Tawn, who brought us to this view of the castle from the West, by a former shallow lake called The Mere [lexicographically connected with La Mare rather than with La Mer!]. It also reminded me that my first and only visit to the castle was in the summer of 1977, with my pen friend from Birmingham. This was also the summer when Star Wars was released in Britain, including in Birmingham, where we first saw it…

Avon river

Posted in pictures, Running, Travel on June 3, 2016 by xi'an

bridge upon the Avon river, Stoneleigh, Warwickshire, May 31, 2016

Stoneleigh Abbey

Posted in pictures, Running, Travel on June 2, 2016 by xi'an

another glorious sunrise in Warwickshire

Posted in pictures, Running, Travel, University life on April 30, 2016 by xi'an