likelihood inference with no MLE

Posted in Books, R, Statistics on July 29, 2021 by xi'an

“In a regular full discrete exponential family, the MLE for the canonical parameter does not exist when the observed value of the canonical statistic lies on the boundary of its convex support.”

Daniel Eck and Charlie Geyer just published an interesting and intriguing paper on running efficient inference for discrete exponential families when the MLE does not exist, as for instance in the case of complete separation between the 0's and 1's in a logistic regression model, or, more generally, when the estimated Fisher information matrix is singular. (Not to mention the Bayesian version, which remains a form of likelihood inference.) The construction is based on an MLE that exists on an extended model, a notion I had not encountered previously. This model is defined as a limit of likelihood values

$\lim_{n\to\infty} \ell(\theta_n|x) = \sup_\theta \ell(\theta|x) := h(x)$

called the MLE distribution. Which remains a mystery to me, to some extent. Especially when this distribution is completely degenerate. Examples provided within the paper alas do not help, as they mostly serve as illustration for the associated rcdd R package. Intriguing, indeed!
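As a side note, the non-existence of the MLE under complete separation is easy to witness in R with a toy dataset (my own illustration, not an example from the paper): glm() keeps climbing the likelihood along the direction of the separating point and returns a huge slope with an even larger standard error, only stopped by its iteration cap.

```r
# complete separation: all 0's sit left of x=0, all 1's to its right,
# so the likelihood increases forever as the slope grows
x <- c(-2, -1, 1, 2)
y <- c(0, 0, 1, 1)
fit <- suppressWarnings(glm(y ~ x, family = binomial))
coef(fit)                                   # enormous slope
summary(fit)$coefficients[, "Std. Error"]   # gigantic standard errors
```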

Fermat’s Riddle

Posted in Books, Kids, R on October 16, 2020 by xi'an

A Fermat-like riddle from the Riddler (with enough room to code on the margin)

An arbitrary positive integer N is to be written as a difference of the squares of two distinct positive integers. Which cases are impossible, and otherwise can you provide a list of all distinct representations?

Since the problem amounts to finding a>b>0 such that

$N=a^2-b^2=(a-b)(a+b)$

both (a+b) and (a-b) must be products of prime factors from the decomposition of N, and both terms must have the same parity for the average a to be an integer. This eliminates integers with a single factor 2 in their prime decomposition (i.e., N≡2 mod 4), as well as N=1 and N=4, which would force b=0. For the other cases, the following R code (which I could not deposit on tio.run because of the package R.utils!) returns a list

library(R.utils)
library(numbers)

# integer to bit vector (int2bits)
bitz <- function(i, m)
  c(rev(as.binary(i)), rep(0, m))[1:m]

ridl = function(n){
  a = primeFactors(n)
  if ((n == 1) | (sum(a == 2) == 1)){
    print("impossible")
  } else {
    m = length(a); g = NULL
    for (i in 1:2^m){
      b = bitz(i, m)
      # split the prime factors into d=(a-b) and e=(a+b), same parity, d<e
      if (((d <- prod(a[!!b])) %% 2 == (e <- prod(a[!b])) %% 2) & (d < e))
        g = rbind(g, c(k <- (e + d)/2, l <- (e - d)/2))
    }
    return(g[!duplicated(g[,1] - g[,2]), ])
  }
}


For instance,

> ridl(1456)
     [,1] [,2]
[1,]  365  363
[2,]  184  180
[3,]   95   87
[4,]   59   45
[5,]   40   12
[6,]   41   15


Checking for the most prolific N, up to 10⁶, I found that N=6720=2⁶·3·5·7 produces 20 different decompositions. And that N=887,040=2⁸·3²·5·7·11 leads to 84 distinct differences of squares.
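As a sanity check (mine, not part of the Riddler), the number of such representations can also be counted directly from the divisors of N, since each solution corresponds to a factorisation N=de with d<e and d,e of the same parity:

```r
# count representations N = a² - b², a > b > 0, by pairing divisors d·e = N
nreps <- function(n){
  d <- (1:floor(sqrt(n)))[n %% (1:floor(sqrt(n))) == 0]  # divisors up to √n
  sum((d < n/d) & (d %% 2 == (n/d) %% 2))                # same parity, d < e
}
nreps(1456)    # 6, matching the six rows above
nreps(6720)    # 20
nreps(887040)  # 84
```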

the limits of R

Posted in Books, pictures, R, Statistics on August 10, 2020 by xi'an

It has been repeated many times on many platforms: the R (or R₀) number is not a great summary of the COVID-19 pandemic, see e.g. Rossman's warning in The Conversation, but Nature chose to stress it one more time (in its 16 July edition). Or twice, when considering a similar piece in Nature Physics. Just as Boris Johnson made it a central tool of his governmental communication policy. And some mayors started asking for their own local R numbers! It is obviously tempting to turn the messy and complex reality of this planetary crisis into a single number, and even a single indicator R<1, but it is unhelpful and worse than that, from the epidemiological models being wrong (or at least oversimplified), to the data being wrong (i.e., incomplete, biased and late), to the predictions being wrong (except for predicting the past). Nothing outrageous in the said Nature article, which points out diverse degrees of uncertainty and variability and stresses the need to immediately address clusters rather than rely on the dummy R. As an aside, the repeated use of nowcasting instead of forecasting sounds like a perfect journalistic fad, given that it does not seem to be based on a different model of infection or on a different statistical technique. (There is a nowcasting package in R, though!) And a wee bit later I was pointed to an extended discussion of an R estimation paper on Radford Neal's blog.

poor statistics

Posted in Books, pictures, R, Statistics, Travel, Wines on September 24, 2019 by xi'an

I came across, over the weekend, this graph and the associated news that the county of Saint-Nazaire, on the southern border of Brittany, had a significantly higher rate of cancers than the rest of the Loire countries. The complete study, written by Solenne Delacour, Anne Cowppli-Bony, and Florence Molinié, is quite cautious about the reasons for this higher rate, even using a Bayesian Poisson-Gamma smoothing (and the R package empbaysmooth), and citing the 1991 paper by Besag, York and Mollié, but the local and national media are quick to blame the local industries for the difference. The graph above is particularly bad in that it accumulates mortality causes that are neither mutually exclusive nor independent. For instance, the much higher mortality rate due to alcohol is obviously responsible for higher rates in most other entries, and indicates a sociological pattern that may or may not be due to the type of jobs in the area, but differs from the more rural other parts of the Loire countries (which, like Brittany, are already significantly above (by 50%) the national reference for alcohol-related health issues), and may not be strongly connected to exposure to chemicals. For instance, the rates of pulmonary cancers are mostly comparable to the national average, if higher than in the rest of the Loire countries, and connect with a high smoking propensity. Lymphomas are not significantly different from the regional reference. The only type of cancer that can be directly attributed to working conditions is mesothelioma, mostly caused by asbestos exposure, asbestos having been used in ship building, a specialty of the area. Among the many possible reasons for the higher mortality of the county, the study mentions a lower exposure to medical testing (connected with the sociological composition of the area), which would indicate the most effective policies for lowering these higher cancer and mortality rates.
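For the record, a Poisson-Gamma smoothing of the kind used in the study is easy to sketch on simulated data (a minimal version of my own, with method-of-moments estimates of the Gamma prior, not the exact routine of the study): observed counts Oᵢ ~ Poisson(Eᵢθᵢ) with θᵢ ~ Gamma(ν,α), so the posterior mean shrinks each raw ratio Oᵢ/Eᵢ towards the global level.

```r
set.seed(1)
n <- 50
E <- runif(n, 20, 200)                # expected counts per area
O <- rpois(n, E * rgamma(n, 8, 8))    # observed counts, true risks near 1
smr <- O / E                          # raw standardised ratios
m <- sum(O) / sum(E)                  # global risk level
v <- max(sum(E * (smr - m)^2) / sum(E) - m / mean(E), 1e-8)
a <- m / v; nu <- m * a               # Gamma(nu, a) prior by moments
smooth <- (O + nu) / (E + a)          # posterior means, shrunk towards m
```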

CRAN does not validate R packages!

Posted in pictures, R, University life on July 10, 2019 by xi'an

A friend called me the other day for advice on how to submit an R package to CRAN, along with a proof that his method was mathematically sound. I replied with some items of advice taken from my (limited) experience with submitting packages. And with the remark that CRAN would not validate the mathematical contents of the associated package manual, nor even the validity of the R code towards delivering the right outcome as stated in the manual. This shocked him quite seriously, as he thought having a package accepted by CRAN was a stamp of validation of both the method and the R code. It would be nice, of course, but it would require so much manpower that it seems unrealistic. Some middle ground is to aim at a journal or a peer-community validation where both code and methods are vetted, which happens for instance with the Journal of Computational and Graphical Statistics. Or the Journal of Statistical Software (which should revise its instructions to authors, stating that “The majority of software published in JSS is written in S, MATLAB, SAS/IML, C++, or Java”. S, really?!)

As for the validity of the latest release of R (currently R-3.6.1 which came out on 2019-07-05, named Action of the Toes!), I figure the bazillion R programs currently running should be able to detect any defect pretty fast, although awareness of the incredible failure of sample() reported in an earlier post took a while to appear.
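For the record, the sample() failure can still be reproduced in R ≥ 3.6.0 by switching back to the old rounding sampler (a quick check of mine, with the sample size chosen so that the truncation bias shows up in the parity of the draws):

```r
set.seed(101)
suppressWarnings(RNGkind(sample.kind = "Rounding"))  # pre-3.6.0 sampler
m <- floor(2/5 * 2^32)              # a size at which the rounding bias bites
x <- sample(m, 1e6, replace = TRUE)
mean(x %% 2)                        # visibly far from the expected 1/2
RNGkind(sample.kind = "Rejection")  # back to the corrected sampler
```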