## Archive for University of Amsterdam

## aftermaths of retiring significance

Posted in Books, pictures, Statistics, University life with tags Andrew Gelman, Nature, NPR, statistical significance, The Guardian, the week after, University of Amsterdam on April 10, 2019 by xi'an

**B**eyond mentions of the "retire statistical significance" paper in the general press, as in Retraction Watch, Bloomberg, The Guardian, Vox, and NPR, not to mention the large number of comments on Andrew's blog, and Deborah Mayo's column decrying a ban on free speech (!), the following week's issue of Nature contained three letters: from Ioannidis, calling for more stringent thresholds; from Johnson, essentially if unclearly stating the same; and from my friends from Amsterdam, Alexander Ly and E.J. Wagenmakers, along with Julia Haaf, going back to the Great Old Ones to defend the usefulness of testing versus estimation.

## Dutch summer workshops on Bayesian modeling

Posted in Books, pictures, Statistics, Travel, University life with tags Amsterdam, JAGS, JASP, statistical software, summer school, University of Amsterdam, WinBUGS on March 21, 2019 by xi'an

Just received an email about two Bayesian workshops in Amsterdam this summer:

- “Theory and Practice of Bayesian Hypothesis Testing, A JASP Workshop” August 22 – August 23, 2019
- “Bayesian Modeling for Cognitive Science, A JAGS and WinBUGS Workshop” August 26 – August 30

Both workshops take place at the University of Amsterdam, and both focus on Bayesian software.

## are there a frequentist and a Bayesian likelihood?

Posted in Statistics with tags Bayes factor, Bayes formula, cross validated, dominating measure, Harold Jeffreys, likelihood function, Metron, probability theory, R.A. Fisher, University of Amsterdam, wikipedia on June 7, 2018 by xi'an

**A** question that came up on X validated led me to spot rather poor entries in Wikipedia about both the likelihood function and Bayes' Theorem, where unnecessary and confusing distinctions are made between the frequentist and Bayesian versions of these notions. I have already discussed the latter (Bayes' theorem) a fair amount here. The discussion about the likelihood is quite bemusing, in that the likelihood function is the … function of the parameter equal to the density indexed by this parameter at the observed value.

"What we can find from a sample is the likelihood of any particular value of r, if we define the likelihood as a quantity proportional to the probability that, from a population having the particular value of r, a sample having the observed value of r, should be obtained." R.A. Fisher, On the "probable error" of a coefficient of correlation deduced from a small sample, Metron 1, 1921, p.24

By mentioning an informal side to likelihood (rather than to likelihood function), and then stating that the likelihood is not a probability in the frequentist version but a probability in the Bayesian version, the W page makes a complete and unnecessary mess. Whoever is ready to rewrite this introduction is more than welcome! (Which reminded me of an earlier question also on X validated asking why a common reference measure was needed to define a likelihood function.)
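To make the point concrete, here is a minimal Python sketch (the binomial model and the numbers are my own choice for illustration) of the likelihood as the density of the fixed observation, read as a function of the parameter:

```python
from math import comb

def binom_pmf(k, n, p):
    """Density of Binomial(n, p) at k."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Fixed observation: k = 7 successes out of n = 10 trials.
k, n = 7, 10

# The likelihood is the SAME density, evaluated at the fixed observed value k
# and read as a function of the parameter p; it is not a distribution over p.
likelihood = lambda p: binom_pmf(k, n, p)

grid = [i / 10 for i in range(11)]
mle = max(grid, key=likelihood)
print(mle)  # prints 0.7, i.e. k/n, the maximum likelihood estimate on the grid
```

Note that the values of `likelihood` over the grid need not sum or integrate to one in p, which is exactly why calling it a "probability" in either paradigm is misleading.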

This also led me to read a recent paper by Alexander Etz, whom I met at E.J. Wagenmakers' lab in Amsterdam a few years ago. Following Fisher, as Jeffreys complained,

"..likelihood, a convenient term introduced by Professor R.A. Fisher, though in his usage it is sometimes multiplied by a constant factor. This is the probability of the observations given the original information and the hypothesis under discussion." H. Jeffreys, Theory of Probability, 1939, p.28

Alexander defines the likelihood up to a constant, which causes extra confusion, for free!, as there is no foundational reason to introduce this degree of freedom rather than imposing an exact equality with the density of the data (albeit with an arbitrary choice of dominating measure, never neglect the dominating measure!). The paper also repeats the message that the likelihood is not a probability (density, *missing in the paper*), and provides intuitions about maximum likelihood, likelihood ratio and Wald tests. But it does not venture into a separate definition of the likelihood, being satisfied with the fundamental notion to be plugged into the magical formula

posterior ∝ prior × likelihood
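As a toy illustration of this formula (the grid, the Beta-shaped prior, and the binomial data are all assumed for the example), and of why a multiplicative constant in the likelihood is harmless once the posterior is normalized:

```python
from math import comb

k, n = 7, 10
grid = [i / 100 for i in range(1, 100)]            # interior grid over p

prior = [p * (1 - p) for p in grid]                # ∝ Beta(2,2), constant dropped
like = [comb(n, k) * p**k * (1 - p) ** (n - k) for p in grid]

# posterior ∝ prior × likelihood: normalizing over the grid fixes the ∝.
unnorm = [pr * li for pr, li in zip(prior, like)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# Multiplying the likelihood by any constant c leaves the posterior unchanged,
# which is why a likelihood defined only "up to a constant" still works here.
c = 42.0
unnorm_c = [pr * c * li for pr, li in zip(prior, like)]
total_c = sum(unnorm_c)
posterior_c = [u / total_c for u in unnorm_c]
assert all(abs(a - b) < 1e-12 for a, b in zip(posterior, posterior_c))
```

The constant cancels in the normalization, but it does not cancel in quantities like marginal likelihoods, which is where the "up to a constant" convention starts causing real trouble.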

## JASP, a really really fresh way to do stats

Posted in Statistics with tags Bayes factors, Bayesian inference, design, Harold Jeffreys, JASP, tee-shirt, University of Amsterdam on February 1, 2018 by xi'an

## absolutely no Bayesians inside!

Posted in Statistics with tags Amsterdam, cartoon, English grammar, JASP, statistical software, sticker, Trojan horse, University of Amsterdam, Viktor Breekman on December 11, 2017 by xi'an

## bridgesampling [R package]

Posted in pictures, R, Statistics, University life with tags Amsterdam, bridge, bridge sampling, bridgesampling, JAGS, R, R package, STAN, University of Amsterdam, warped bridge sampling on November 9, 2017 by xi'an

**Q**uentin F. Gronau, Henrik Singmann and Eric-Jan Wagenmakers have arXived a detailed documentation about their *bridgesampling* R package. (No wonder that researchers from Amsterdam favour bridge sampling!)

*[The package relates to a [52 pages] tutorial on bridge sampling by Gronau et al. that I will hopefully comment soon.]*

The bridge sampling methodology for marginal likelihood approximation requires *two* Monte Carlo samples for a ratio of *two* integrals. A nice twist in this approach is to use a dummy integral that is already available, with respect to a probability density that is an approximation to the exact posterior. This means avoiding the difficulties with bridge sampling of bridging two different parameter spaces, in possibly different dimensions, with potentially very little overlap between the posterior distributions. The substitute probability density is chosen as Normal or warped Normal, rather than a t which would provide more stability in my opinion. The *bridgesampling* package also provides an error evaluation for the approximation, although based on spectral estimates derived from the *coda* package. The remainder of the document exhibits how the package can be used in conjunction with either JAGS or Stan. And concludes with the following words of caution:

“It should also be kept in mind that there may be cases in which the bridge sampling procedure may not be the ideal choice for conducting Bayesian model comparisons. For instance, when the models are nested it might be faster and easier to use the Savage-Dickey density ratio (Dickey and Lientz 1970; Wagenmakers et al. 2010). Another example is when the comparison of interest concerns a very large model space, and a separate bridge sampling based computation of marginal likelihoods may take too much time. In this scenario, Reversible Jump MCMC (Green 1995) may be more appropriate.”
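To make the "ratio of two integrals" idea concrete, here is a drastically simplified Python sketch of my own, not the package's algorithm: choosing the bridge function so that the estimator degenerates to importance sampling from the Normal approximation g to the posterior, the marginal likelihood Z = ∫ q(θ) dθ is estimated by averaging q(θᵢ)/g(θᵢ) over draws θᵢ from g. (The target q, its normalizing constant, and the Normal approximation are all made up for the example; the package instead iterates towards an optimal bridge.)

```python
import math
import random

random.seed(1)

def q(theta):
    """Unnormalized posterior: here a N(2, 1) density times a known Z = 5."""
    return 5.0 * math.exp(-0.5 * (theta - 2.0) ** 2) / math.sqrt(2 * math.pi)

def g_pdf(theta, mu=2.0, sd=1.5):
    """Normal approximation to the posterior (moments assumed known)."""
    return math.exp(-0.5 * ((theta - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Draws from the approximating density, playing the role of the second sample.
draws = [random.gauss(2.0, 1.5) for _ in range(50_000)]
z_hat = sum(q(t) / g_pdf(t) for t in draws) / len(draws)
print(round(z_hat, 2))  # close to the true Z = 5
```

The over-dispersed proposal (sd = 1.5 against a posterior sd of 1) keeps the weights bounded, a cheap stand-in for the stability that warping, or a t proposal, is meant to buy in harder problems.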
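For the nested case mentioned in these words of caution, the Savage-Dickey density ratio can be illustrated with a hypothetical conjugate Normal-Normal example (all numbers assumed): the Bayes factor in favour of the point null reduces to the ratio of posterior to prior density at the null value, with no marginal likelihood computation needed.

```python
import math

def norm_pdf(x, mu, var):
    """Normal density with mean mu and variance var."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

x, sigma2 = 1.2, 1.0   # one observation with known variance (assumed values)
theta0 = 0.0           # H0: θ = θ0, nested in H1
tau2 = 1.0             # under H1, prior θ ~ N(0, τ²)

# Conjugate posterior of θ under H1.
post_var = 1.0 / (1.0 / sigma2 + 1.0 / tau2)
post_mean = post_var * x / sigma2

# Savage-Dickey: BF01 = posterior density / prior density, both at θ0.
bf01_sd = norm_pdf(theta0, post_mean, post_var) / norm_pdf(theta0, 0.0, tau2)

# Check against the marginal-likelihood definition of the Bayes factor,
# available in closed form for this conjugate model.
bf01_direct = norm_pdf(x, theta0, sigma2) / norm_pdf(x, 0.0, sigma2 + tau2)
assert abs(bf01_sd - bf01_direct) < 1e-9
```

The assertion confirms the two routes to the Bayes factor agree here, which is precisely why the ratio is the faster option for nested comparisons, as the quote suggests.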