Archive for exponential families

mean simulations

Posted in Books, Statistics on May 10, 2023 by xi'an


A rather intriguing question on X validated, namely a simulation approach to sampling a bivariate distribution fully specified by one conditional p(x|y) and the symmetric conditional expectation IE[Y|X=x]. The book Conditional Specification of Statistical Models, by Arnold, Castillo and Sarabia, as referenced by and in the question, contains (§7.7) illustrations of such cases. As for instance with some power series distribution on ℕ but also for some exponential families (think Laplace transform). An example is when

P(X=x\mid Y=y) = c(x)y^x/c^*(y),\qquad c(x)=\lambda^x/x!

which means X conditional on Y=y is Poisson P(λy). The expectation IE[Y|X=x] is then sufficient to identify the joint. As I figured out before checking the book, this result is rather immediate to establish by solving a linear system, but it does not help in finding a way to simulate the joint. (I am afraid it cannot be connected to the method of simulated moments!)
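As a numerical sanity check (with a Gamma marginal for Y that is my own choice, not one from the book), pairing the Poisson conditional above with Y ∼ Ga(a,b) makes IE[Y|X=x] = (a+x)/(b+λ) available in closed form, linear in x, which a simulation of the joint recovers:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, lam = 2.0, 1.0, 1.0   # hypothetical Ga(a, b) marginal for Y and Poisson scale λ
n = 1_000_000

# joint draws: Y ~ Ga(a, b), then X | Y=y ~ P(λy)
y = rng.gamma(shape=a, scale=1.0 / b, size=n)
x = rng.poisson(lam * y)

# by conjugacy, Y | X=x is Ga(a + x, b + λ), hence IE[Y|X=x] = (a + x)/(b + λ)
for x0 in (0, 1, 3):
    print(x0, y[x == x0].mean(), (a + x0) / (b + lam))
```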


latest math stats exam

Posted in Books, Kids, R, Statistics, University life on January 28, 2023 by xi'an


As I finished grading our undergrad math stats exam (at Paris Dauphine) over the weekend, an exam that was very straightforward this year, all the more because most questions had already been asked in weekly quizzes or during practicals, some answers struck me as atypical (but ChatGPT is not to blame!). For instance, in Question 1, (c) received a fair share of wrong eliminations on the grounds that g is not necessarily bounded, rather than for being contradicted by (b) being false. (ChatGPT managed to solve that question, except for the L² convergence!)

Question 2 was much less successful than we expected, with most failures due to a catastrophic change of parameterisation when computing the mgf, a step that could have been skipped altogether given that this is a Bernoulli model, right?! Many students also wasted quite a while computing the Fisher information for the Binomial distribution in Question 3… (ChatGPT managed to solve that question!)
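For the record, and reconstructing Question 3 as the standard computation (its exact wording is not reproduced here), no reparameterisation is needed: with X ∼ B(p),

\frac{\partial}{\partial p}\log f(x\mid p) = \frac{\partial}{\partial p}\left[x\log p + (1-x)\log(1-p)\right] = \frac{x}{p} - \frac{1-x}{1-p}

so that

I(p) = \mathbb{E}\left[\left(\frac{X}{p} - \frac{1-X}{1-p}\right)^2\right] = \frac{1}{p(1-p)}

and the Fisher information for a Binomial B(n,p) observation, being a sum of n iid Bernoulli B(p)'s, is simply n/p(1-p).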

Question 4 was intentionally confusing and, while most (of those who dealt with the R questions) spotted the opposition between sample and distribution, hence picked (f), a few fell for the trap of (d).

Question 7 was also surprisingly incompletely covered by a significant fraction of the students, as they missed the sufficiency in (c). (ChatGPT did not manage to solve that question, starting with the inverted statement that “a minimal sufficient statistic is a sufficient statistic that is not a function of any other sufficient statistic”…)

And Question 8 was rarely complete, even though many recalled Basu’s theorem for (a) [more rarely for (d)] and flunked (c). A large chunk of them argued that the ancillarity of the statistics in (a) and (d) made them [distributionally] independent of μ, therefore [probabilistically] independent of the empirical mean! (Again flunked by ChatGPT, this time confusing completeness and sufficiency.)

deGPTed

Posted in Books, Kids, Statistics, University life on December 20, 2022 by xi'an

As shown above, automated chatbots are becoming a nuisance on fora such as Stack Exchange. To illustrate this nuisance capacity, here is a question / answer I produced there:


It sounds completely correct, except for the core issue of not explaining why the Uniform density cannot be written in exponential family form… And the answer is exactly the same when substituting Gamma for Uniform!
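For the record, the missing argument, assuming the question was about a U(0,θ) model (the screenshot is not reproduced here): an exponential family density factors as

f(x\mid\theta) = h(x)\exp\{\eta(\theta)\cdot T(x) - A(\theta)\}

with support {x : h(x) > 0} free of θ, while the Uniform density θ⁻¹𝕀₍₀,θ₎(x) has support (0,θ), which does depend on θ, hence no such factorisation can hold.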

likelihood inference with no MLE

Posted in Books, R, Statistics on July 29, 2021 by xi'an

“In a regular full discrete exponential family, the MLE for the canonical parameter does not exist when the observed value of the canonical statistic lies on the boundary of its convex support.”

Daniel Eck and Charlie Geyer just published an interesting and intriguing paper on running efficient inference for discrete exponential families when the MLE does not exist. As for instance in the case of a complete separation between 0’s and 1’s in a logistic regression model. Or more generally, when the estimated Fisher information matrix is singular. Not to mention the Bayesian version, which remains a form of likelihood inference. The construction is based on an MLE that exists for an extended model, a notion of which I had not previously heard. This model is defined as a limit of likelihood values

\lim_{n\to\infty} \ell(\theta_n|x) = \sup_\theta \ell(\theta|x) := h(x)

called the MLE distribution. Which remains a mystery to me, to some extent. Especially when this distribution is completely degenerate. Examples provided within the paper alas do not help, as they mostly serve as illustration for the associated rcdd R package. Intriguing, indeed!
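To see the non-existence on a toy example (mine, not one from the paper): with perfectly separated data, the logistic log-likelihood increases towards its supremum (here, zero) as the slope grows, so no finite maximiser exists:

```python
import numpy as np

# toy perfectly separated sample: all 0's to the left of all 1's
xs = np.array([-2.0, -1.0, 1.0, 2.0])
ys = np.array([0.0, 0.0, 1.0, 1.0])

def loglik(beta):
    """log-likelihood of the logit model P(Y=1|x) = 1/(1+exp(-βx)),
    via the numerically stable identity log σ(z) = -log(1+exp(-z))"""
    z = beta * xs
    return float(np.sum(-ys * np.logaddexp(0.0, -z) - (1 - ys) * np.logaddexp(0.0, z)))

# the log-likelihood keeps increasing with β: its supremum 0 is never attained
for b in (1.0, 10.0, 100.0):
    print(b, loglik(b))
```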


conjugate priors and sufficient statistics

Posted in Statistics on March 29, 2021 by xi'an

An X validated question rekindled my interest in the connection between sufficiency and conjugacy, by asking whether or not there is an equivalence between the existence of a (finite dimension) conjugate family of priors and the existence of a sufficient statistic of fixed (in n, the sample size) dimension. Outside exponential families, that is, which by the Pitman–Koopman–Darmois lemma means that the support of the sampling distribution must vary with the parameter.

While the existence of a sufficient statistic T of fixed dimension d whatever the (large enough) sample size n seems to clearly imply the existence of a (finite dimension) conjugate family of priors, or rather of a family associated with each possible dominating (prior) measure,

\mathfrak F=\{ \tilde \pi(\theta)\propto \tilde {f_n}(t_n(x_{1:n})|\theta) \pi_0(\theta)\,;\ n\in \mathbb N, x_{1:n}\in\mathfrak X^n\}

the reverse statement is a wee bit more delicate to prove, due to the varying supports of the sampling or prior distributions. Unless some conjugate prior in the assumed family has an unrestricted support, the argument seems to limit sufficiency to a particular subset of the parameter set. I think the result remains correct in general, but I could not rigorously wrap up the proof.
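As an illustration of the direct implication outside exponential families (my example, not one from the question), take the Uniform U(0,θ) model, for which x₍ₙ₎ = max x_{1:n} is a one-dimensional sufficient statistic for every n, and the Pareto family is conjugate:

\pi(\theta) \propto \theta^{-\alpha-1}\,\mathbb{I}_{\theta>\beta} \quad\Longrightarrow\quad \pi(\theta\mid x_{1:n}) \propto \theta^{-n}\,\theta^{-\alpha-1}\,\mathbb{I}_{\theta>\max(\beta,\,x_{(n)})}

that is, a Pareto 𝒫(α+n, max(β, x₍ₙ₎)) posterior.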
