## scalable Langevin exact algorithm [Read Paper]

Posted in Books, pictures, Statistics, University life on June 23, 2020 by xi'an

Murray Pollock, Paul Fearnhead, Adam M. Johansen and Gareth O. Roberts (CoI: all with whom I have strong professional and personal connections!) have a Read Paper discussion happening tomorrow [under relaxed lockdown conditions in the UK, except for the absurd quatorzaine imposed on all travellers, but still in a virtual format] that we discussed together [from our respective homes] at Paris Dauphine. And which I already discussed on this blog when it first came out.

Here are some quotes I spotted during this virtual Dauphine discussion, although we did not come up with enough material to build a significant discussion, beyond wondering at the potential for solving the O(n) bottleneck and for handling doubly intractable cases like the Ising model. And noticing the nice feature that the log target can be estimated by unbiased estimators. And the use of control variates, for once well-justified in a non-trivial environment.

“However, in practice this simple idea is unlikely to work. We can see this most clearly with the rejection sampler, as the probability of survival will decrease exponentially with t—and thus the rejection probability will often be prohibitively large.”

“This can be viewed as a rejection sampler to simulate from μ(x,t), the distribution of the Brownian motion at time t conditional on its surviving to time t. Any realization that has been killed is ‘rejected’ and a realization that is not killed is a draw from μ(x,t). It is easy to construct an importance sampling version of this rejection sampler.”
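To see the exponential decay of the survival probability mentioned in the first quote, here is a minimal Monte Carlo sketch (nothing like the authors' actual ScaLE implementation) of a killed Brownian motion, with a hypothetical bounded killing rate κ chosen purely for illustration: the endpoints of the surviving paths are draws from μ(x,t), and the surviving fraction shrinks quickly as t grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def kappa(x):
    # hypothetical bounded killing rate (illustration only, not the paper's)
    return 0.5 * x**2 / (1 + x**2)

def survivors(t, n_paths=10_000, dt=0.01):
    """Euler scheme for killed Brownian motion started at 0.

    Returns the endpoints of the paths still alive at time t,
    i.e. (approximate) draws from mu(x, t)."""
    n_steps = int(t / dt)
    x = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        # Brownian increment for the surviving paths
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        # kill each surviving path with probability kappa(x) * dt
        killed = rng.random(alive.sum()) < kappa(x[alive]) * dt
        idx = np.flatnonzero(alive)
        alive[idx[killed]] = False
    return x[alive]

for t in (1.0, 5.0, 10.0):
    draws = survivors(t)
    print(f"t = {t:4}: survival fraction ~ {len(draws) / 10_000:.3f}")
```

The rejection probability indeed becomes prohibitive for large t, which is precisely the motivation in the quote for moving to an importance sampling (and quasi-stationary) formulation.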

## Boltzmann optimisation as simulating device

Posted in Books, Statistics, University life on June 18, 2020 by xi'an

“The problem of drawing samples from a discrete distribution can be converted into a discrete optimization problem” Maddison et al., 2014

I recently learned about the Gumbel-Max “trick” proposed by Chris Maddison, Daniel Tarlow, and Tom Minka in a 2014 NIPS talk. Namely that, to generate from a Boltzmann distribution

$p_j=\frac{\exp\{g_j\}}{\sum_i \exp\{g_i\}}$

is equivalent to adding independent standard Gumbel noise to each energy g_j and taking the argmax. A rare (?) instance of using optimisation as a simulation device, compared with the reverse of using simulation to reach maxima. Of course, this requires as many simulations as there are terms in the sum. Or a clever way to avoid this exhaustive listing.
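A minimal numerical check of the trick, with made-up energies g_j: perturbing each energy by an independent standard Gumbel draw and returning the index of the maximum reproduces the Boltzmann probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)

g = np.array([1.0, 2.0, 0.5, -1.0])      # energies g_j (made up)
p = np.exp(g) / np.exp(g).sum()          # Boltzmann probabilities p_j

def gumbel_max(g, n):
    # add iid standard Gumbel noise to the energies and take the argmax
    noise = rng.gumbel(size=(n, len(g)))
    return np.argmax(g + noise, axis=1)

draws = gumbel_max(g, 100_000)
freq = np.bincount(draws, minlength=len(g)) / len(draws)
print("target:   ", np.round(p, 3))
print("empirical:", np.round(freq, 3))
```

Note that each draw indeed costs one Gumbel variate (and one comparison) per term in the sum, which is the exhaustive-listing cost mentioned above.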

“According to Gumbel’s statistics, 326 out of 354 political murders by right-wing factions in the early Weimar Republic went unpunished, and four out of the 22 left-wing capital crimes.” Science News

As a historical aside, I discovered Gumbel's anti-Nazi activism while he was in Germany in the 1920s and early 1930s (until he was expelled from the University of Heidelberg). Including the 1932 call against Nazis (which Albert Einstein and Heinrich Mann also signed), hence the above poster.

## severe testing : beyond Statistics wars?!

Posted in Books, pictures, Statistics, Travel, University life on January 7, 2019 by xi'an

A timely start to my reading Deborah Mayo's [properly printed] Statistical Inference as Severe Testing (How to get beyond the Statistics Wars) on Armistice Day, as it seems to call for just this, an armistice! And the opportunity of a long flight to Oaxaca in addition… However, this was only the start and it took me several further weeks to peruse the book (SIST) seriously enough before writing the (light) comments below. (Receiving a free copy from CUP and then a second one directly from Deborah after I mentioned the severe sabotage!)

Indeed, I sort of expected a different content when taking the subtitle How to get beyond the Statistics Wars at face value. But quite the opposite: the book is actually very severely attacking anything not in line with the Cox-Mayo severe testing approach. Mostly Bayesian approach(es) to the issue! For instance, Jim Berger's construct of a reconciliation between Fisher, Neyman, and Jeffreys is surgically deconstructed over five pages and exposed as a Bayesian ploy. Similarly, the warnings from Dennis Lindley and other Bayesians that the p-value attached to the Higgs boson experiment is not the probability that the particle does not exist are met with ridicule. (Another go at Jim's Objective Bayes credentials is found in the squared myth of objectivity chapter. Maybe more strongly than against staunch subjectivists like Jay Kadane. And yet another go when criticising the Berger and Sellke 1987 lower bound results. Which even extends to Valen Johnson's UMP-type Bayesian tests.)

“Inference should provide posterior probabilities, final degrees of support, belief, probability (…) not provided by Bayes factors.” (p.443)

Another subtitle of the book could have been testing in Flatland, given the limited scope of the models considered, with one or at best two parameters and almost always a Normal setting. I have no idea whatsoever how the severity principle would apply in more complex models, with e.g. numerous nuisance parameters. By sticking to the simplest possible models, the book can carry on with the optimality concepts of the early days, like sufficiency (p.147), monotonicity, and uniformly most powerful procedures, which only make sense in a tiny universe.

“The estimate is really a hypothesis about the value of the parameter. The same data warrant the hypothesis constructed!” (p.92)

There is an entire section on the lack of difference between confidence intervals and the dual acceptance regions, although the lack of uniqueness in defining either of them should come as a bother, especially outside Flatland. Actually the following section, from p.193 onward, reminds me of fiducial arguments, all the more because Schweder and Hjort are cited there. (With a curve like Fig. 3.3 operating like a cdf on the parameter μ but no dominating measure!)

“The Fisher-Neyman dispute is pathological: there’s no disinterring the truth of the matter (…) Fisher grew to renounce performance goals he himself had held when it was found that fiducial solutions disagreed with them.”(p.390)

Similarly the chapter on the “myth of the “the myth of objectivity””(p.221) is mostly and predictably targeting Bayesian arguments. The dismissal of Frank Lad's arguments for subjectivity ends up [or down] with the rather cheap shot that it “may actually reflect their inability to do the math” (p.228). [CoI: I once enjoyed a fantastic dinner cooked by Frank in Christchurch!] And the dismissal of loss function requirements in Ziliak and McCloskey is similarly terse, if reminding me of Aris Spanos' own arguments against decision theory. (And the arguments about the Jeffreys-Lindley paradox as well.)

“It’s not clear how much of the current Bayesian revolution is obviously Bayesian.” (p.405)

The section (Tour IV) on model uncertainty (or against “all models are wrong”) is somewhat limited in that it is unclear what constitutes an adequate (if wrong) model. And calling for the CLT cavalry as backup (p.299) is not particularly convincing.

It is not that everything is controversial in SIST (!) and I found agreement in many (isolated) statements, especially in the early chapters. Another interesting point made in the book is to question whether or not the likelihood principle makes sense at all within a testing setting. When two models (rather than a point null hypothesis) are X-examined, it is a rare occurrence that the likelihood factorises any further than the invariance by permutation of iid observations. Which reminded me of our earlier warning on the dangers of running ABC for model choice based on (model specific) sufficient statistics.

Plus a nice sprinkling of historical anecdotes, esp. about Neyman's life, from Poland, to Britain, to California, with some time in Paris to attend Borel's and Lebesgue's lectures. Which is used as a background for a play involving Bertrand, Borel, Neyman and (Egon) Pearson, under the title “Les Miserables Citations” [pardon my French but it should be Les Misérables if Hugo is involved! Or maybe les gilets jaunes…] I also enjoyed the sections on reuniting Neyman-Pearson with Fisher, while appreciating that Deborah Mayo wants to stay away from the “minefields” of fiducial inference. With, most interestingly, Neyman himself trying in 1956 to convince Fisher of the fallacy of the duality between frequentist and fiducial statements (p.390). Wisely quoting Nancy Reid at BFF4 stating the unclear state of affairs on confidence distributions.

And the final pages reawakened an impression I had at an earlier stage of the book, namely that the ABC interpretation of Bayesian inference in Rubin (1984) could come closer to Deborah Mayo's quest for comparative inference (p.441) than she thinks, in that producing parameters generating pseudo-observations that agree with the actual observations is an “ability to test accordance with a single model or hypothesis”.

“Although most Bayesians these days disavow classic subjective Bayesian foundations, even the most hard-nosed, “we're not squishy” Bayesians retain the view that a prior distribution is an important if not the best way to bring in background information.” (p.413)

A special mention to Einstein's cafe (p.156), which reminded me of this picture of Einstein's relative Cafe I took while staying in Melbourne in 2016… (Not to be confused with the Markov bar in the same city.) And a fairly minor concern that I find myself quoted in the sections priors: a gallimaufry (!) and… Bad faith Bayesianism (!!), with the above qualification. Although I later reappear as a pragmatic Bayesian (p.428), if a priori as a counter-example!

## the explanation why Science gets underfunded

Posted in Statistics on May 8, 2017 by xi'an

## Correlations between the physical and social sciences

Posted in Books, Statistics, University life on January 18, 2012 by xi'an

This is probably the most bizarre book I have received for review (so far). Its title is wide-ranging: Correlations Between the Physical and Social Sciences. Its cover is enticing: a picture of the young Albert Einstein. Its purpose is wide:

“The thesis of this monograph is that societies in general are governed by objective laws that have their roots in human nature. The task of the social scientist is to discover and explore those laws (…) Null hypotheses and alternative rival hypotheses developed by social scientists must eclectically [be] correlated to mathematical formulae or the laws of physics in order to advance non-speculative, unbiased knowledge.” V.J. Belfiglio (p.x)

So the thesis advanced in Correlations Between the Physical and Social Sciences by Valentine Belfiglio is that social problems can be represented in terms of physical laws. This 41-page book pushes the argument through four case studies.

“The first case study relates marital assimilation of minority groups into dominate core cultures with Graham's Law for the diffusion of gases. The second case study relates the mutual hostility of political leaders with the Mirror Equation employed in basic geometric optics. The third case study relates the duration of major American military conflicts to the formulae for empirical and subjective probabilities. The fourth case study relates the radioactive decay formula for radioactive substances to the rate of decline of several extinct empires” V.J. Belfiglio (p.xi)

As the author himself recognises, “the four case studies in this monograph do not provide definitive answers.” My opinion is that they do not provide answers at all!

Indeed, the first chapter contains two 2×2 tables about the endogamous preferences of Mexican and Italian inhabitants of Dallas, Texas. A chi-square test concludes that Mexicans prefer endogamy and that Italians do not. Although Graham's Law is re-expressed there as “marital assimilation being inversely proportional to the square root of the population densities” (p.3), there is no result based on the data supporting this law.

The second chapter tries to “explore the mutuality of hostility between the Bush and Ahmandinejad (sic) administrations. Spearman's Rho correlation coefficient” (p.11) is used and found to demonstrate “a perfect positive correlation” (p.12), although the data is quantitative (intensity of hostility between 1 and 9) and not paired. (The study simply shows that the empirical cdfs of the hostility values for both sides are approximately the same, Spearman's rho test being inappropriate there.) The connection with optics is at best tenuous.

Chapter 3 centres on a table of the durations of major American (meaning US) military conflicts. A mere observation is that the US “has been engaged in major wars 56.5 percent of the time between 1775-2010” (p.24), but Valentine Belfiglio turns this into an “empirical probability” (i.e. the frequency of wars), a “subjective probability” (i.e. the average number of years of peace between wars), and the “number of possible interaction channels” (i.e. a combination number) as a way to link American foreign policy with probability theory. Again, the connection is non-existent.

The fourth and final chapter is about the “correlation between the decay of radioactive substances and the rate of decline of empires” (p.31). The data is made of the durations of seven empires, associated with estimates of their half-lives. The chapter concludes on “a perfect negative correlation between the half-lives of empires and their rates of decline” (p.35), which is not very surprising when considering that one is a monotonic function of the other…
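This last point is easy to check numerically: the decay rate is λ = log 2 / half-life, a strictly decreasing transform, so the rank correlation equals −1 by construction, whatever the data. (The seven half-life values below are made up for illustration, not Belfiglio's data.)

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation (assuming no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# hypothetical half-lives of seven "empires" (arbitrary units)
half_life = np.array([30.0, 75.0, 120.0, 210.0, 400.0, 650.0, 900.0])
rate = np.log(2) / half_life  # strictly decreasing in the half-life

print(spearman_rho(half_life, rate))  # prints -1.0 for any such data
```

Any strictly decreasing transform of the half-lives would give the same “perfect negative correlation”, which is why the finding carries no empirical content.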

“I conclude with the words of Henry Wadsworth Longfellow: ‘Sometimes we may learn more from a man's errors, than from his virtues’.” V.J. Belfiglio (p.40)

There is therefore not much to discuss about this book: it does not go beyond stating the obvious, while the connection between the observed social phenomena and generic physical laws remains at the level of a literary ellipsis, not of a scientific demonstration. I am deeply puzzled as to why a publisher would want to publish this… Any review of the material should have shown that the author was out of his depth (his speciality at Texas Woman's University is Government) in this particular endeavour of proving that “mathematical formulae and the law of physics can take scholars further in deriving conclusions from sets of assumptions than can inferential statistics” (back-cover).