Archive for the Books Category

a pile of new books

Posted in Books, Travel, University life on November 22, 2014 by xi'an

I took the opportunity of my weekend trip to Gainesville to order a pile of books on amazon, thanks to my amazon associate account (and hence thanks to all Og’s readers doubling as amazon customers!). The picture above is missing two Rivers of London volumes by Ben Aaronovitch that I have already read and left at the office, and that will be reviewed in incoming posts. Among those,

(Obviously, all “locals” sharing my taste in books are welcome to borrow those in a very near future!)

some LaTeX tricks

Posted in Books, Kids, Statistics, University life on November 21, 2014 by xi'an

Here are a few LaTeX tricks I learned or rediscovered while working on several papers over the past week:

  1. I am always forgetting how to produce aligned equations with a single equation number, so I found this solution on the TeX forum of stackexchange: namely, use an aligned environment inside the equation environment, or else the split environment. But it does not always work…
  2. Another frustrating black hole is how to deal with integral signs that do not adapt to the size of the integrand. Too bad we cannot use \left\int, really! Another stackexchange question led me to the bigints package. Not perfect, though.
  3. Pierre Pudlo also showed me the commands \graphicspath{{dir1}{dir2}} and \DeclareGraphicsExtensions{.pdf,.png,.jpg}, which respectively avoid coding the entire path to each image and set an order on the extension types. The second one is fairly handy when working on drafts. The first one does not seem to work with symbolic links, though… (All three tricks appear in the sketch below.)
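
Putting the three tricks together, here is a minimal sketch (my own toy example: the directory names figs/ and figs/draft/ and the image name myplot are placeholders, and note the trailing slashes required inside \graphicspath):

\documentclass{article}
\usepackage{amsmath}  % for equation + aligned (or split)
\usepackage{bigints}  % for integral signs scaled to the integrand
\usepackage{graphicx}
% search these directories for images, in this order
\graphicspath{{figs/}{figs/draft/}}
% when no extension is given, try .pdf first, then .png, then .jpg
\DeclareGraphicsExtensions{.pdf,.png,.jpg}
\begin{document}
% aligned equations sharing a single equation number
\begin{equation}
\begin{aligned}
a &= b + c\\
  &= d + e
\end{aligned}
\end{equation}
% a big integral sign matching a tall integrand
\[ \bigint_0^1 \frac{\text{d}x}{1+x^2} \]
% neither path nor extension needed any longer
\includegraphics{myplot}
\end{document}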

not converging to London for an [extra]ordinary Read Paper

Posted in Books, Kids, pictures, Statistics, Travel, University life on November 21, 2014 by xi'an

On December 10, I will alas not travel to London to attend the Read Paper on sequential quasi-Monte Carlo presented by Mathieu Gerber and Nicolas Chopin to the Royal Statistical Society, as I fly instead to Montréal for the NIPS workshops… I am quite sorry to miss this event, as this is a major paper that brings quasi-Monte Carlo methods into mainstream statistics. I will most certainly write a discussion, and remind Og’s readers that contributed (800-word) discussions are welcome from everyone, the deadline for submission being January 02.

differences between Bayes factors and normalised maximum likelihood

Posted in Books, Kids, Statistics, University life on November 19, 2014 by xi'an

A recent arXival by Heck, Wagenmakers and Morey attracted my attention: Three Qualitative Differences Between Bayes Factors and Normalized Maximum Likelihood, as it provides an analysis of the differences between Bayesian analysis and Rissanen’s Optimal Estimation of Parameters, which I reviewed a while ago. As detailed in that review, I had difficulties with considering the normalised likelihood

p(x|\hat\theta_x) \big/ \int_\mathcal{X} p(y|\hat\theta_y)\,\text{d}y

as the relevant quantity, one reason being that the distribution does not make experimental sense: for instance, how could one simulate from this distribution? [I mean, when considering only the original distribution.] Working with the simple binomial B(n,θ) model, the authors show that the quantity corresponding to the posterior probability may be constant for most of the data values, that it produces a different upper bound and hence a different penalty for model complexity, and that it may lead to different conclusions for some observations. This means that the apparent proximity between using a Jeffreys prior and Rissanen’s alternative does not go all the way. While it is a short note, focussed on producing an illustration in the binomial case, I find it interesting that researchers investigate the Bayesian nature (vs. artifice!) of this approach…
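
As an illustration (mine, not taken from the paper), this normalised maximum likelihood is straightforward to compute by brute force in the binomial case, since the sample space is finite:

# normalised maximum likelihood for the binomial B(n,theta) model
# (a quick numerical sketch of the formula above, not code from the paper)
n=10
y=0:n
maxlik=dbinom(y,n,y/n)  # p(y|theta.hat_y) with theta.hat_y=y/n
nml=maxlik/sum(maxlik)  # normalise over the entire sample space {0,...,n}
round(nml,4)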

a probabilistic proof to a quasi-Monte Carlo lemma

Posted in Books, Statistics, Travel, University life on November 17, 2014 by xi'an

As I was reading in the Paris métro a new textbook on quasi-Monte Carlo methods, Introduction to Quasi-Monte Carlo Integration and Applications, written by Gunther Leobacher and Friedrich Pillichshammer, I came upon the lemma that, given two sequences (u_i) and (v_i) on (0,1) such that, for all i’s,

|u_i-v_i|\le\delta\quad\text{then}\quad\left|\prod_{i=1}^s u_i-\prod_{i=1}^s v_i\right|\le 1-(1-\delta)^s

and the geometric bound made me wonder if there was an easy probabilistic proof of this inequality, rather than the algebraic proof contained in the book. Unsurprisingly, there is one, based on associating with each pair (u_i,v_i), assuming without loss of generality that u_i≤v_i, a pair of events (A_i,B_i), independent across i’s and such that, for all i’s,

A_i\subset B_i\,,\ u_i=\mathbb{P}(A_i)\,,\ v_i=\mathbb{P}(B_i)

and representing

\left|\prod_{i=1}^s u_i-\prod_{i=1}^s v_i\right| = \mathbb{P}(\cap_{i=1}^s B_i) - \mathbb{P}(\cap_{i=1}^s A_i)\,.
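
The bound then follows (this is my own filling-in of the final step): the difference between the two intersections is contained in the union of the set differences B_i∖A_i, which are independent events with respective probabilities v_i−u_i≤δ, hence

\mathbb{P}(\cap_{i=1}^s B_i) - \mathbb{P}(\cap_{i=1}^s A_i) \le \mathbb{P}\left(\cup_{i=1}^s B_i\setminus A_i\right) = 1-\prod_{i=1}^s \{1-(v_i-u_i)\} \le 1-(1-\delta)^s\,.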

Obviously, there is no visible consequence to this remark, but it was a good way to switch off the métro hassle for a while! (The book is under review and the review will hopefully be posted on the ‘Og as soon as it is completed.)

Le Monde puzzle [#887]

Posted in Books, Kids, R, Statistics on November 15, 2014 by xi'an

A simple combinatorics Le Monde mathematical puzzle:

N is a golden number if the sequence {1,2,…,N} can be reordered so that the sum of any consecutive pair is a perfect square. What are the golden numbers between 1 and 25?

Indeed, from an R programming point of view, all I have to do is to go over all possible permutations of {1,2,…,N} until one works or until I have exhausted all permutations for a given N. However, 25!≈1.55×10²⁵ is a wee bit too large… Instead, I resorted once again to brute-force simulation, by first building the list of potential neighbours of each integer

  perfect=(1:trunc(sqrt(2*N)))^2   # perfect squares up to the largest pair sum
  friends=NULL                     # friends[[i]] = potential neighbours of i
  le=1:N                           # numbers of remaining neighbours
  for (perm in 1:N){
    amis=perfect[perfect>perm]-perm     # integers completing perm into a square
    amis=amis[(amis<=N)&(amis!=perm)]   # keep admissible neighbours only
    le[perm]=length(amis)
    friends=c(friends,list(amis))
    }

and then proceeding to construct the permutation one integer at a time, picking the next integer at random among the remaining potential neighbours of the current one, until none is left or the sequence is complete

orderin=rep(0,N)            # sequence under construction
t=1
orderin[1]=sample((1:N),1)  # random starting integer
for (perm in 1:N){          # the starting integer is no longer available
  friends[[perm]]=friends[[perm]][friends[[perm]]!=orderin[1]]
  le[perm]=length(friends[[perm]])
  }
while (t<N){
  if (length(friends[[orderin[t]]])==0)
    break                   # dead end: no remaining neighbour
  if (length(friends[[orderin[t]]])>1){
    orderin[t+1]=sample(friends[[orderin[t]]],1)
    }else{
    orderin[t+1]=friends[[orderin[t]]]
    }
  for (perm in 1:N){        # the new integer is no longer available
    friends[[perm]]=friends[[perm]][friends[[perm]]!=orderin[t+1]]
    le[perm]=length(friends[[perm]])
    }
  t=t+1}

and then repeating this attempt until a full sequence is produced or a certain number of failed attempts has been reached. I gained in efficiency by proposing a second completion on the left of the first integer once a break occurs:

while (t<N){
  if (length(friends[[orderin[1]]])==0)
    break                   # no possible extension on the left either
  orderin[2:(t+1)]=orderin[1:t]   # shift the current sequence to the right
  if (length(friends[[orderin[2]]])>1){
    orderin[1]=sample(friends[[orderin[2]]],1)
    }else{
    orderin[1]=friends[[orderin[2]]]
    }
  for (perm in 1:N){        # the new leftmost integer is no longer available
    friends[[perm]]=friends[[perm]][friends[[perm]]!=orderin[1]]
    le[perm]=length(friends[[perm]])
    }
  t=t+1}
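
Since the outer restart loop is not reproduced above, here is a compact self-contained version of the whole random construction (my own sketch, without the leftward extension; the function name golden is made up):

# repeat random greedy constructions until one covers 1:N or maxtry is reached
golden=function(N,maxtry=10^4){
  squares=(2:trunc(sqrt(2*N-1)))^2   # possible sums of two distinct integers
  for (i in 1:maxtry){
    path=sample(1:N,1)               # random starting integer
    left=setdiff(1:N,path)           # integers not yet placed
    while (length(left)>0){
      cand=left[(path[length(path)]+left)%in%squares]
      if (length(cand)==0) break     # dead end, restart from scratch
      path=c(path,if (length(cand)>1) sample(cand,1) else cand)
      left=setdiff(left,path)
    }
    if (length(left)==0) return(path)
  }
  NULL                               # no golden sequence found in maxtry tries
}
golden(15)  # returns a golden sequence of length 15 (or NULL if unlucky)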

(An alternative would have been to complete left and right by squared numbers taken at random…) Running this program showed that such permutations exist for N=15,16,17,23,25,26,…,77. Here is a solution for N=49:

25 39 10 26 38 43 21 4 32 49 15 34 30 6 3 22 42 7 9 27 37 12 13 23 41 40 24 1 8 28 36 45 19 17 47 2 14 11 5 44 20 29 35 46 18 31 33 16 48

As an aside, the authors of the Le Monde puzzle claimed (in the Tuesday, Nov. 12, edition) that there was no solution for N=23, while this sequence

22 3 1 8 17 19 6 10 15 21 4 12 13 23 2 14 11 5 20 16 9 7 18

sounds fine enough to me… More generally, I wonder about the principle behind the existence of such sequences: it sounds quite probable that they exist for all N>24. (The published solution does not bring any light on this issue, so I assume the authors had no mathematical analysis to provide.)
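
For the record, here is a short check in R (with a made-up function name, is.golden) confirming that the above sequence works:

# check that x is a permutation of 1:length(x) with perfect-square
# sums over all consecutive pairs
is.golden=function(x){
  sums=x[-1]+x[-length(x)]
  all(sort(x)==1:length(x))&&all(round(sqrt(sums))^2==sums)
}
is.golden(c(22,3,1,8,17,19,6,10,15,21,4,12,13,23,2,14,11,5,20,16,9,7,18))
# [1] TRUE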

about the strong likelihood principle

Posted in Books, Statistics, University life on November 13, 2014 by xi'an

Deborah Mayo arXived a Statistical Science paper a few days ago, along with discussions by Jan Bjørnstad, Phil Dawid, Don Fraser, Michael Evans, Jan Hannig, R. Martin and C. Liu. I am very glad that this discussion paper came out, and that it came out in Statistical Science, although I am rather surprised to find no discussion by Jim Berger or Robert Wolpert, and even though I still cannot entirely follow the deductive argument in the rejection of Birnbaum’s proof, just as in the earlier version in Error & Inference. But I somehow do not feel like going again into a new debate about this critique of Birnbaum’s derivation. (Even though statements like the claim that the SLP “would preclude the use of sampling distributions” (p.227) would call for contradiction.)

“It is the imprecision in Birnbaum’s formulation that leads to a faulty impression of exactly what is proved.” M. Evans

Indeed, at this stage, I fear that [for me] a more relevant issue is whether or not the debate matters at all… At a logical cum foundational [and maybe cum historical] level, it makes perfect sense to uncover which, if any, of the myriad versions of Birnbaum’s likelihood Principle holds. [Although trying to uncover Birnbaum’s motives and positions over time may not be so relevant.] I think the paper and the discussions acknowledge that some versions of the weak conditionality Principle do not imply some versions of the strong likelihood Principle, while other logical implications remain true. At a methodological level, I am much less sure it matters. Each time I taught this notion, I got blank stares and incomprehension from my students, to the point that I have now stopped teaching the likelihood Principle in class altogether. And most of my co-authors do not seem to care very much about it. At a purely mathematical level, I wonder if there even is ground for a debate, since the notions involved can be defined in various imprecise ways, as pointed out by Michael Evans above and in his discussion. At a statistical level, sufficiency is eventually a strange notion in that it seems to make plenty of sense until one realises there is no interesting sufficiency outside exponential families. Just as there are very few parameter transforms for which unbiased estimators can be found. So I also spend very little time teaching and even less time worrying about sufficiency. (As it happens, I taught the notion this morning!) At another and presumably more significant statistical level, what matters is information, e.g., conditioning means adding information (i.e., about which experiment has been used). While complex settings may prohibit the use of the entire information provided by the data, at a formal level there is no argument for not using the entire information, i.e. conditioning upon the entire data. (At a computational level, this is no longer true, witness ABC and similar limited-information techniques. By the way, ABC demonstrates, if needed, why sampling distributions matter so much to Bayesian analysis.)

“Non-subjective Bayesians who (…) have to live with some violations of the likelihood principle (…) since their prior probability distributions are influenced by the sampling distribution.” D. Mayo (p.229)

In the end, the fact that the prior may depend on the form of the sampling distribution, and hence does violate the likelihood Principle, does not worry me so much. In most models I consider, the parameters are endogenous to those sampling distributions and do not live an ethereal existence independently from the model: they are substantiated and calibrated by the model itself, which makes the discussion about the LP rather vacuous. See, e.g., the coefficients of a linear model. In complex models, or with large datasets, it is even impossible to handle the whole data or the whole model, and proxies have to be used instead, making worries about the structure of the (original) likelihood moot. I think we have now reached a stage of statistical inference where models are no longer accepted as ideal truths and where approximation is the hard reality, imposed by the massive amounts of data relentlessly calling for immediate processing. Hence a stage where the self-validation or invalidation of such approximations in terms of predictive performance is the relevant issue. Provided we can face the challenge at all…
