Archive for Kenilworth

crash

Posted in pictures, Travel, University life on April 2, 2022 by xi'an

tree of the year 2015 felled for high speed train line

Posted in Statistics on November 5, 2020 by xi'an

my neighbourhood in the New Yorker

Posted in Books, pictures, Travel on October 14, 2020 by xi'an

While I was reading (part of) a recent issue of The New Yorker over breakfast, I was surprised to find my neighbouring city of Bourg-la-Reine (twin city: Kenilworth!) mentioned in a column! The piece was an interview with the local authors of a board game called Kapital!, presented there as a form of (French) anti-Monopoly; both authors were CNRS researchers in sociology until they retired, and they have produced (even) more militant output since then, including this board game. The (unintended?) fun in the column is the contrast between the May 68 style class warfare denounced by the authors and their apparently well-off circumstances (no BBQ in the street there!).

postgraduate open day at Warwick [4 Dec]

Posted in pictures, Statistics, University life on November 12, 2019 by xi'an

The Department of Statistics at the University of Warwick is holding an open day for prospective PhD students on 4 December 2019, starting at 2pm (with a free lunch at 1pm), in the Mathematical Sciences Building common room (MB1.02). The Director of Graduate Studies, Professor Mark Steel, and the PhD admissions tutors, Professors Martyn Plummer and Bärbel Finkenstädt Rand, will give short presentations about what it means to do a PhD, what it means to do it at Warwick, the benefits of a PhD degree, and the application process.

This will be followed by an informal meeting, during which students will have the opportunity to ask questions and find out more about the different PhD opportunities at Warwick Statistics; we offer a very broad range of possibilities, giving potential applicants plenty of choice. Current members of staff will be invited to participate and discuss potential projects.

UK travel expenses will be covered by the Department of Statistics (standard-class travel by public transport with pre-booked tickets). Please register if you are interested in this event.

O’Bayes 19/2

Posted in Books, pictures, Running, Travel, University life on July 1, 2019 by xi'an

One talk on Day 2 of O’Bayes 2019 was by Ryan Martin on data-dependent priors (or “priors”), which I have already discussed on this blog. Including the notion of a Gibbs posterior for quantities that “are not always defined through a model” [which is debatable if one sees them as part of a semi-parametric model]. A Gibbs posterior built through a pseudo-likelihood constructed from the empirical risk, which reminds me of Bissiri, Holmes and Walker, although it requires a prior on a quantity that is not part of a model. And it is not necessarily a true posterior, nor does it necessarily enjoy the same concentration rate as a true posterior. Constructing a data-dependent distribution on the parameter does not necessarily deliver interesting inference and, to keep with the theme of the conference, carries no automatic claim to [more] “objectivity”.
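As a point of reference only (my own notation, not Martin’s, and just the generic construction in the spirit of Bissiri, Holmes and Walker), a Gibbs posterior replaces the likelihood with an exponentiated empirical risk,

\[
  \pi_n(\theta \mid x_{1:n}) \;\propto\; \exp\{-\omega\, n\, R_n(\theta)\}\,\pi(\theta),
  \qquad
  R_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^n \ell(\theta, x_i),
\]

where \(\ell\) is a loss tied to the quantity of interest and \(\omega>0\) a learning rate. No complete model for the data is required, which is both the appeal and the source of the calibration and concentration questions above.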

And after calling a prior both Beauty and The Beast!, Erlis Ruli argued for a “bias-reduction” prior, where the prior is the solution to a differential equation related to some cumulants, connected with earlier work of David Firth (Warwick). An interesting conundrum is how to create an MCMC algorithm when the prior is that intractable, with possible help from PDMP techniques like the Zig-Zag sampler.
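For the classical point of comparison only (Ruli’s construction goes beyond this special case), Firth’s (1993) bias reduction in canonical exponential-family models amounts to penalising the likelihood by the Jeffreys prior,

\[
  \tilde\ell(\theta) \;=\; \ell(\theta) + \tfrac{1}{2}\log\big|i(\theta)\big|
  \quad\Longleftrightarrow\quad
  \tilde L(\theta) \;\propto\; L(\theta)\,\big|i(\theta)\big|^{1/2},
\]

with \(i(\theta)\) the Fisher information, the penalty removing the first-order bias of the maximum likelihood estimator.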

While Peter Orbanz’s talk was centred on a central limit theorem under group invariance, further penalised by being the last of the (sun) day, Peter did a magnificent job of presenting the result and motivating each term. It reminded me of the work Jim Bondar was doing in Ottawa in the 1980s on Haar measures for Bayesian inference, including the notion of amenability [a term due to von Neumann] that I had not met since then. (Neither have I met Jim since the last summer I spent at Carleton.) The CLT and associated LLN are remarkable in that the average is not over observations but over shifts of the same observation under elements of a sub-group of transformations. I wondered as well about a potential connection with the 2003 Read Paper of Kong et al. on the use of group averaging for Monte Carlo integration [a connection apart from the fact that both discussants, Michael Evans and myself, are present at this conference].
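To fix ideas (with notation of my own, not Orbanz’s), the averages in question run over transformations of a single observation rather than over independent observations,

\[
  \hat\mu_n(f) \;=\; \frac{1}{|\mathbb{G}_n|} \sum_{g \in \mathbb{G}_n} f(g\cdot x),
\]

for a growing sequence of finite sub-groups \(\mathbb{G}_n\) (or compact ones, replacing the sum with an integral against the normalised Haar measure), with the LLN and CLT describing the behaviour of this average as \(\mathbb{G}_n\) grows, under invariance of the distribution of \(x\).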
