Archive for Fourier transform

principles of uncertainty (second edition)

Posted in Books, Statistics, Travel, University life on July 21, 2020 by xi'an

A new edition of Principles of Uncertainty is about to appear. I was asked by CRC Press to review the new book and here are some (raw) extracts from my review. (Some comments may not apply to the final and published version, mind.)

In Chapter 6, the proof of the Central Limit Theorem utilises the “smudge” technique, which consists in adding an independent noise to both the sequence of rvs and its limit. This is most effective and reminds me of a quite similar proof Jacques Neveu used in his probability notes at Polytechnique, which went under the more formal denomination of convolution, with the same (commendable) purpose of avoiding Fourier transforms. If anything, I would have favoured a slightly more condensed presentation, in less than 8 pages. Is Corollary 6.5.8 useful or even correct??? I do not think so, because the non-centred average rescaled by √n diverges almost surely. For the same reason, I object to the very first sentence of Section 6.5 (p.246).
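To spell out the objection (in my notation, for iid variables with common mean μ ≠ 0): by the strong law of large numbers,

$$\sqrt{n}\,\bar X_n \;=\; \sqrt{n}\times \bar X_n \;\xrightarrow{\text{a.s.}}\; \pm\infty \qquad\text{since}\quad \bar X_n \xrightarrow{\text{a.s.}} \mu \neq 0,$$

with the sign being that of μ, hence no non-degenerate limiting distribution for the rescaled but non-centred average.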

In Chapter 7, I found a nice mention of (Hermann) Rubin’s insistence on not separating probability and utility, as only the product matters. And another fascinating quote from Keynes, not from his early statistician’s years but from 1937, as an established economist:

“The sense in which I am using the term uncertain is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to the summed.”

(Is the last sentence correct? Pardon my French, but I would have expected “to be summed”.) Further interesting trivia on the criticisms of utility theory, including de Finetti’s role and his own lack of connection with subjective probability principles.

In Chapter 8, a major remark (iii), found on p.293, about the fact that a conjugate family requires a dominating measure (although this is expressed differently, since the book shies away from introducing measure theory), reminds me of a conversation I had with Jay when I visited Carnegie Mellon in 2013 (?), which exposed the futility of seeing conjugate priors as default priors. It is somewhat surprising that a notion like admissibility appears as a side quantity when discussing Stein’s paradox in Section 8.2.1 [and then later in Section 9.1.3], while it seems to me central to Bayesian decision theory, much more so than the epiphenomenon that Stein’s paradox represents in the big picture. But the book dismisses minimaxity even faster, in Section 9.1.4:

As many who suffer from paranoia have discovered, one can always dream-up an even worse possibility to guard against. Thus, the minimax framework is unstable. (p.336)

Interesting introduction of the Wishart distribution to kindly handle random matrices and matrix Jacobians, with the original space being the p(p+1)/2-dimensional real space (implicitly endowed with the Lebesgue measure), rather than a more structured matricial space. A font error makes Corollary 8.7.2 abort abruptly. The space of positive definite matrices is mentioned in Section 8.7.5 but still (implicitly) corresponds to the common p(p+1)/2-dimensional real Euclidean space. Another typo in Theorem 8.9.2, with a Frenchised version of Dirichlet, Dirichelet, followed by a Dirchlet at the end of the proof (p.322), and again and again on p.324 and on the following pages. I would object to the singular in the title of Section 8.10, as there are exponential families rather than a single one. And no mention is made of the Pitman-Koopman lemma and its consequences, namely that the existence of conjugacy remains an epiphenomenon, which makes the number of pages dedicated to the gamma, Dirichlet and Wishart distributions somewhat excessive.

In Chapter 9, I noticed (p.334) a Scheffe that should be Scheffé (and again at least on p.444). (I love it that Jay also uses my favorite admissible (non-)estimator, namely the constant estimator with value 3.) I wonder at the worth of a ten-line section like 9.3, when there are delicate issues in handling meta-analysis, even in a Bayesian mood (or mode). In the model uncertainty section, Jay discusses the (im)pertinence of both selecting one of the models and setting independent priors on their respective parameters, with which I disagree on both levels, although this is followed by a more reasonable (!) perspective on utility. Nice to see a section on causation, although I would have welcomed an insert on the recent and somewhat outrageous stand of Pearl (and Mackenzie) on statisticians missing the point on causation and counterfactuals by miles. Nonparametric Bayes is a new section, inspired by Ghahramani (2005), but while it mentions Gaussian and Dirichlet [invariably misspelled!] processes, I fear it falls short of enticing the reader to truly grasp the meaning of a prior on functions. Besides mentioning that it exists, I am unsure of the utility of this section. This is one of the rare instances where measure theory is discussed, only to state that it is beyond the scope of the book (p.349).

ABC with kernelised regression

Posted in Mountains, pictures, Statistics, Travel, University life on February 22, 2017 by xi'an

[sunset from the Banff Centre, Banff, Canada, March 21, 2012]

The exact title of the paper by Jovana Mitrovic, Dino Sejdinovic, and Yee Whye Teh is DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression. It appeared last year in the proceedings of ICML. The idea is to build ABC summaries by way of reproducing kernel Hilbert space (RKHS) embeddings, regressing such embeddings to an “optimal” choice of summary statistics by kernel ridge regression, with the possibility to derive summary statistics for quantities of interest rather than for the entire parameter vector. The use of RKHS reminds me of Arthur Gretton’s approach to ABC, although I see no mention made of that work in the current paper.

In the RKHS pseudo-linear formulation, the prediction of a parameter value given a sample attached to this value looks like a ridge estimator in classical linear estimation. (I thus wonder why one would stop at the ridge stage instead of going for the full Bayes treatment!) Things get a bit more involved in the case of parameters (and observations) of interest, as the modelling requires two RKHS, because of the conditioning on the nuisance observations. Or rather three RKHS. Since those involve a maximum mean discrepancy between probability distributions, which defines in turn a sort of intrinsic norm, I also wonder about a Wasserstein version of this approach.
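To fix ideas, here is a bare-bones sketch of that ridge step on embedded pseudo-samples (my own toy Python illustration, with a normal location model and landmark-based approximate mean embeddings, not the DR-ABC implementation):

```python
# Toy illustration (mine, not the DR-ABC code): ridge regression of the
# parameter on an (approximate) kernel mean embedding of each pseudo-sample.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
m, n = 500, 50                                  # number of simulations, sample size
theta = rng.normal(size=m)                      # parameters drawn from a N(0,1) prior
y = theta[:, None] + rng.normal(size=(m, n))    # y_i | theta_i ~ iid N(theta_i, 1)

# approximate kernel mean embedding: average Gaussian kernel evaluations
# of each pseudo-sample at a fixed grid of landmark points
landmarks = np.linspace(-4.0, 4.0, 25)
embed = np.exp(-(y[:, :, None] - landmarks) ** 2 / 2.0).mean(axis=1)   # (m, 25)

# kernel ridge regression from embeddings to parameters (the "ridge estimator")
krr = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0).fit(embed, theta)
print(np.c_[theta[:5], krr.predict(embed[:5])])  # true versus regressed values
```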

What I find hard to understand in the paper is how a large-dimension, large-size sample can be managed by such methods with no visible loss of information and no explosion of the computing budget. The authors mention Fourier features, which ring no bell for me, and I wonder how this operates in a general setting, i.e., outside the iid case. The examples do not seem to go into enough detail for me to understand how this massive dimension reduction operates (and they remain at a moderate level in terms of numbers of parameters). I was hoping Jovana Mitrovic could present her work here at the 17w5025 workshop, but she sadly could not make it to Banff for lack of funding!
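If “Fourier features” stands for random Fourier features à la Rahimi and Recht, the trick is to replace a shift-invariant kernel with a finite random cosine expansion, keeping the computing budget linear in the number of points; a minimal sketch (my own, not connected with the paper's setting):

```python
# Minimal random Fourier features sketch (my reading of "Fourier features",
# i.e., Rahimi & Recht's approximation of the Gaussian kernel; not the paper's code).
import numpy as np

rng = np.random.default_rng(2)
d, D, sigma = 5, 200, 1.0                        # input dim, number of features, bandwidth

W = rng.normal(scale=1.0 / sigma, size=(D, d))   # frequencies from the RBF spectral measure
b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases

def phi(x):
    """Map x to D random cosine features so that phi(x) @ phi(y) ~ k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(x @ W.T + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))   # Gaussian kernel value
approx = phi(x) @ phi(y)                                 # random-feature approximation
print(exact, approx)
```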

Gauss to Laplace transmutation!

Posted in Books, Kids, Statistics, University life on October 14, 2015 by xi'an

When browsing X validated the other day [translate as procrastinating!], I came upon the strange property that the marginal distribution of a zero-mean normal variate with exponential variance is a Laplace distribution. I first thought there was a mistake, since one usually takes an inverse Gamma prior on the variance parameter, not a Gamma, in which case the marginal is a t distribution. The result is curious and can be expressed in a variety of ways:

– the product of a χ²₁ and of a χ²₂ variate is the square of a Laplace variate;
– the determinant of a 2×2 matrix with iid standard normal entries is a Laplace variate;
– the difference of two iid exponential variates is Laplace…
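As a quick sanity check (a few lines of my own, assuming an Exp(1) rate on the variance, hence a Laplace scale of 1/√2):

```python
# Monte Carlo check (mine, not from the X validated thread): X | V ~ N(0, V)
# with V ~ Exp(1) should be marginally Laplace(0, 1/sqrt(2)).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
v = rng.exponential(scale=1.0, size=100_000)     # exponential variance, rate 1
x = rng.normal(loc=0.0, scale=np.sqrt(v))        # normal draws with that variance

# Kolmogorov-Smirnov distance to the claimed Laplace marginal
print(stats.kstest(x, stats.laplace(loc=0.0, scale=1 / np.sqrt(2)).cdf))
```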

The OP was asking for a direct proof of the result and I eventually sorted it out by a series of changes of variables, although there exists a much more elegant and general proof by Mike West, then at the University of Warwick, based on characteristic functions (or Fourier transforms). It reminded me that continuous, unimodal [at zero] and symmetric densities are necessarily scale mixtures [a wee misnomer] of Gaussians. Mike proves in this paper that exponential power densities [including both the Normal and the Laplace cases] correspond to the variance having an inverse positive stable distribution with half the power. And this is a straightforward consequence of the exponential power density being proportional to the Fourier transform of a stable distribution, plus a Fubini inversion. (Incidentally, the processing times of Biometrika were not that impressive at the time, with a 2-page paper submitted in Dec. 1984 only published in Sept. 1987!)
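For the exponential-variance case at hand, the characteristic function route indeed takes a single line (in my parameterisation, X | V ~ N(0, V) with V ~ Exp(λ)):

$$\mathbb{E}\big[e^{itX}\big]=\mathbb{E}\big[\mathbb{E}\big[e^{itX}\mid V\big]\big]=\mathbb{E}\big[e^{-t^2V/2}\big]=\frac{\lambda}{\lambda+t^2/2}=\frac{1}{1+t^2/(2\lambda)}\,,$$

which is the characteristic function of a Laplace distribution with scale 1/√(2λ).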

Mike West’s is a very nice and general derivation, but I still miss the intuition as to why it happens that way. But then, I know nothing, and even less about products of random variates!

MCqMC 2014 [day #4]

Posted in pictures, Running, Statistics, Travel, University life on April 11, 2014 by xi'an


I hesitated changing the above title to “MCqMSmaug”, as the plenary talk I attended this morning was given by Wenzel Jakob, who uses Markov chain Monte Carlo methods in image rendering and light simulation. The talk was low-tech, with plenty of pictures and animations (incl. excerpts from recent blockbusters!), but it stressed how much proper rendering relies on powerful MCMC techniques. One point particularly attracted my attention, namely the notion of manifold exploration, as it seemed related to my recent zero-measure post. (A related video is available on Jakob’s webpage.) You may then wonder where the connection with Smaug could be found: Wenzel Jakob is listed in the credits of both Hobbit movies for his contributions to the visual effects! (Hey, MCMC made Smaug [visual effects the way they are], a cool argument for selling your next MCMC course! I will for sure include a picture of Smaug in my next R class presentation…) The next sessions of the morning pitted Sobol’s memorial session against more technical light-rendering talks, and I chose Sobol, especially because I had missed Art Owen’s tutorial on Sunday: he gave a short presentation on using Sobol’s criteria to identify the variables contributing the most to the variability or extreme values of a function, an extreme-value kind of ANOVA, most interesting if far from my simulation area… The afternoon sessions saw MCMC talks by Luke Bornn and Scott Schmidler, both connected with the Wang-Landau algorithm. Actually, Scott’s talk was the one generating the most animated discussion among all those I attended at MCqMC! (To the point of the chairman rather rudely making faces…)