And why focus so much now on the specific function class / prior informativeness when, *in practice*, qMC often works nicely (after taking “some” care, independent of such characteristics)? Aside from the methods that explicitly exploit smoothness, of course. The devil is (also) in the implicit constant, not just in the asymptotic rate of convergence.
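To make the “qMC often just works” point concrete, here is a toy comparison (a pure-Python Halton sequence; the integrand and all names are illustrative, not taken from any particular library): a smooth integrand on [0,1]², estimated with 1,000 low-discrepancy points versus 1,000 pseudo-random points at the same budget.

```python
import random

def radical_inverse(n, base):
    """Van der Corput radical inverse of n in the given base."""
    inv, f = 0.0, 1.0 / base
    while n > 0:
        inv += (n % base) * f
        n //= base
        f /= base
    return inv

def halton(n, bases=(2, 3)):
    """First n points of the 2-D Halton low-discrepancy sequence."""
    return [tuple(radical_inverse(i, b) for b in bases) for i in range(1, n + 1)]

f = lambda x, y: x * y  # smooth toy integrand; true integral over [0,1]^2 is 1/4

# Quasi-Monte Carlo estimate: average f over the Halton points.
qmc_est = sum(f(x, y) for x, y in halton(1000)) / 1000

# Plain Monte Carlo estimate with the same number of evaluations.
random.seed(0)
mc_est = sum(f(random.random(), random.random()) for _ in range(1000)) / 1000
```

On smooth integrands like this one, the qMC estimate is typically noticeably closer to 1/4 than the plain MC estimate at the same budget — which is the practical point above; the implicit constant, of course, depends on the integrand.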

Anyway, at the moment I find this perspective intriguing, irrespective of whether it ever turns out to be really oversold or genuinely paradigm-changing (sorry for the p-word).

PS: Frances Kuo is a star; there are far lesser-known brilliant minds in qMC as well.

PPS: there’s also some quasi-MCMC work being done, although slowly.

Oh lord no!

Firstly, pretty much the totality of my thoughts on this is contained here or in my comment on X’s post on the Girolami et al. Proc. Royal Soc. A paper. I don’t like it. I think it’s horrifically oversold. And, like anyone else venturing an opinion “early” in something’s development, I’m perfectly happy to be shown to be wrong.

Secondly, I do my best not to spend time on professional things I don’t enjoy. Writing a four-page screed against probabilistic numerics (or a four-page measured reflection on why I’m unconvinced about the role of Bayesian analysis in this field) is about as far from “things I find fun” as I can imagine venturing professionally. And dear god, going to a workshop around this topic would be, for me, a joyless drudge. (Obviously, those who find it fun should go crazy. I’m a decent statistician and a middling numericist who has his mind on a different set of problems. Not by accident.)

Now, if the workshop organisers had (as Sondheim’s version of Georges Seurat suggested*) a link to their tradition and were instead organising a workshop on high-dimensional integration and approximation (same problem, important distinction) that involved statisticians, machine learners, functional analysts, approximation theorists and numerical analysts**, then I would be there in a heartbeat. (I still wouldn’t submit anything, because I have nothing non-obvious to say on this topic.) This is not that workshop.

*It’s in Montreal. Some North American Francophilia was in order.

**If anyone is looking for one, Frances Kuo from UNSW has produced some seriously interesting, if under-read, work on high-dimensional integration for statistical problems.

]]>This has happened to me before. For some reason my phone really likes correcting words to “love”.

The most awkward was when I was sending someone an ad for a (still open) lectureship in Bath and I wanted to write “In case you’re looking for a foreign move…” and my phone decided that what I really wanted to write was “In case you’re looking for a foreign love…”. It was *very* awkward!

]]>I second François-Xavier’s suggestion, Dan!

]]>It would definitely be great if you wrote a short 4p mini-paper version of your opinion on this and submitted it to the workshop ;).

]]>Noninformative priors do not exist, full stop. For normal means or for functions. Obviously, the larger the space, the more concentrated the prior, and hence the more “informative”. Btw, you mean “for a living”, right?!

]]>That probably makes more sense if you replace “facsimile” with “simulacrum”. That’s what I get for being fancy…

]]>On the off chance that someone who doesn’t work with nonparametric models for a loving reads this, I feel I should be more expansive.

Noninformative priors on functions don’t exist.

This isn’t like Santa or a uniform prior on the real line. Neither of those strictly exists, but you can produce a decent facsimile of either.

Non-informative priors on function spaces don’t exist (except in the boring case where you only consider a finite-dimensional function space).

So any assumption of a prior on a set of functions is EXTREMELY informative, and nothing like a mere smoothness assumption.

]]>FX – A Gaussian process emulator is *NOT* the same thing as a smoothness assumption. It is A LOT more informative (especially if you are doing anything more than MAP estimation).

]]>Daniel, I can confirm that all of the existing probabilistic integration methods do make some kind of assumption about which space the integrand belongs to (usually in terms of an RKHS). In the workshop description, this is referred to as a “prior assumption” (in ML, people often look at this problem from a Bayesian perspective, by putting a Gaussian process emulator on the integrand).
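For readers unfamiliar with what such a “prior assumption” buys you, here is a minimal sketch of Bayesian quadrature (a toy illustration of the general idea, not a method from the workshop): a zero-mean Brownian-motion GP prior with kernel k(x, y) = min(x, y) on the integrand over [0, 1], chosen because its kernel integrals are available in closed form.

```python
import numpy as np

def bayesian_quadrature(nodes, values):
    """Posterior mean and variance of the integral of f over [0, 1],
    under a zero-mean GP prior on f with kernel k(x, y) = min(x, y)."""
    x = np.asarray(nodes, dtype=float)
    y = np.asarray(values, dtype=float)
    K = np.minimum.outer(x, x)   # Gram matrix k(x_i, x_j)
    z = x - 0.5 * x**2           # kernel integrals: int_0^1 min(t, x_i) dt
    w = np.linalg.solve(K, z)    # quadrature weights K^{-1} z
    mean = w @ y                 # posterior mean of the integral
    var = 1.0 / 3.0 - z @ w      # 1/3 is the double integral of min(x, y)
    return mean, var

# With f(x) = x and a node at the right endpoint, the posterior mean
# function interpolates f exactly, so the integral estimate is exact.
nodes = [0.25, 0.5, 0.75, 1.0]
mean, var = bayesian_quadrature(nodes, nodes)
```

Dan’s objection is visible even in this toy: the min kernel is not merely “f is continuous” — it is a full probability model for f, and both the quadrature weights and the (small, positive) posterior variance inherit that choice.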

I think the point here is that people in ML tend to use Monte Carlo as a default tool without necessarily considering any information they have about the integrand. One of the main reasons is probably that Monte Carlo methods have been studied for much longer, so people feel safer using them. However, one could often use more elaborate tools that incorporate the prior information. This workshop therefore aims to discuss how to develop such methods and how much one could gain from using the additional knowledge.

I am looking forward to your highly critical talk, Christian!

FX

]]>