The prior modelling is also rather surprising in that the priors on the means should be joint rather than a product of independent Normals, since these means are compared and hence comparable. For instance, a hierarchical prior seems more appropriate, with location and scale to be estimated from the whole data, creating a connection between the means. A relevant objection to the use of independent improper priors is that the maximum mean μ⁰ then does not have a well-defined measure. However, I do not think a criticism of some priors versus others is a relevant attack on this “paradox”.
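A minimal simulation sketch of this hierarchical alternative (my own notation and hyperprior choices, not anything from the paper under discussion):

```r
# hypothetical sketch of an exchangeable hierarchical prior on the compared
# means, with common location and scale learned from the whole data rather
# than fixed independently for each mean
set.seed(1)
lambda = rnorm(1, 0, 10)          # common location, vague but proper
tau    = abs(rt(1, df = 3))       # common scale, half-t hyperprior
mu     = rnorm(5, lambda, tau)    # five exchangeable means, a priori connected
```

Under such a prior the means shrink towards a common centre, so comparing them (and in particular defining the maximum mean) raises no measure-theoretic difficulty.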

*If x, y, z are distinct positive integers such that x+y+z=19 and xyz=p, what is the value of p that has several ordered antecedents?*

*If the sum is now x+y+z=22, is there a value of p=xyz for which there are several ordered antecedents?*

*If the sum is now larger than 100, is there a value of p with this property?*

**T**he first question is dead easy to code

```r
entz = NULL
for (y in 1:5)                        # assuming y < z < x
  for (z in (y+1):trunc((18-y)/2))    # z at most half of 19-y
    if (19-y-z > z)                   # enforce z < x = 19-y-z
      entz = c(entz, y*z*(19-y-z))
entz[duplicated(entz)]                # products with several antecedents
```

and returns p=144 as the only solution (with ordered antecedents 2·8·9 and 3·4·12). The second question shows no such case. And the last one requires more than brute force exploration! Or rather the direct argument that multiplying a non-unique triplet by κ multiplies the sum by κ and the product by κ³, hence leads to another non-unique triplet with an arbitrarily large sum.
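The brute-force search is easy to generalise to an arbitrary sum (a sketch of mine, with `dupprod` a hypothetical helper not in the original post):

```r
# brute-force search for products with several ordered antecedents,
# over distinct positive integers y < z < x summing to n
dupprod = function(n) {
  prods = NULL
  for (y in 1:floor((n-3)/3))            # y is the smallest of the three
    for (z in (y+1):floor((n-1-y)/2)) {  # z strictly between y and x
      x = n - y - z
      if (x > z) prods = c(prods, y * z * x)
    }
  unique(prods[duplicated(prods)])       # products reached by several triplets
}
dupprod(19)          # 144, the only non-unique product for a sum of 19
length(dupprod(22))  # 0: no such case for a sum of 22
```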

During the past week of vacations in Chamonix, I spent some days down-hill skiing (which I find increasingly boring!), X-country skiing (way better), swimming (indoors!) and running, but the highlight (and the number one reason for going there!) was an ice cascade climb with a local guide, Sylvain (from the mythical Compagnie des Guides de Chamonix). There were very few options due to the high avalanche risk and Sylvain picked a route called Déferlante at the top of Les Grands Montets cabin stop and next to the end of a small icefield, Glacier d’Argentière. We went there quite early to catch the first cabin up, along with a whole horde of badass skiers and snowboarders, and reached the top of the route by foot first, a wee bit after 9 am. A second guide and a client appeared before we were ready to abseil down, and two more groups would appear later. On touring skis.

As you can see from the pictures, the view was terrific, with another row of cascades on the other side (too prone to avalanches to consider) and the end-bits of the Argentière glacier squeezed in-between. And a brilliant and brisk day with hardly any wind. We rappelled down first to the bottom of the route, where Sylvain checked my gear and moved up to the middle of the cascade and then it was my turn!

The route was not particularly difficult and the ice of rather good quality, but I found it hard to set my feet steadily enough in the ice after several years off. Since a great weekend on Ben Nevis in 2015. And missing a day out in Banff last year! Anyway, at some point I banged my right knee on the ice, which always hurts more than it should (as the knee was not broken!), and after a few more meters up, I ended up having a vasovagal collapse (as in a flight to Boston two years ago!), meaning I fainted at the end of the rope and came back to my senses with Sylvain holding my head down and the second guide, Élodie, standing next to me..! Nothing particularly scary in retrospect, as ropes are just doing their job!, but definitely an embarrassment. After a few more minutes resting, I went back to climbing and completed the route, if not in the best possible style. Rather than attempting a second route, as we had planned, we then agreed to call it a day and headed back to Chamonix for a coffee. Hopefully not my last ice-climb..!

Here is a video of the same route by another pair:

*[The title of the post was inspired by three fantasy books, A Memory of Light by Brandon Sanderson, Memories of Ice by Steven Erikson, and Memory, Sorrow, and Thorn, by Tad Williams.]*

*“I have lived with the prospect of an early death for the last 49 years. I’m not afraid of death, but I’m in no hurry to die. I have so much I want to do first. I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark.”*


Thanks for discussing our work. Let me clarify the technical points that you raised:

– The difference between *Leg*_{j}(u) and *T*_{j}=*Leg*_{j}(G(θ)): the first is an orthonormal polynomial of *L*_{2}[0,1] and the second of *L*_{2}(G), a polynomial of the rank-transform *G(θ)*.

– As you correctly pointed out, there is a danger in directly approximating the ratio. We work on it after taking the quantile transform: evaluating the ratio at *G*⁻¹(*u*) gives d(u;G,F) over the unit interval. This new transformed function is a proper density.

– Thus the ratio now becomes *d(G(θ))*, which can be expanded, as in eq. (2.2), in the *T*-basis (not in the Leg-basis), as it lives in the Hilbert space *L*_{2}(G).

– For your last point, on Step 2 of our algorithm, we can also use the simple *integrate* command.

– Unlike traditional prior-data conflict checks, here we attempted to answer three questions in one shot: (i) How compatible is the pre-selected g with the given data? (ii) In the event of a conflict, can we also inform the user of the nature of the misfit, i.e., finer structure that was a priori unanticipated? (iii) Finally, we would like to provide a simple, yet formal, guideline for upgrading (repairing) the starting *g*.

Hopefully, this will clear the air. And thanks for reading the paper so carefully, I appreciate it.


“To develop a “defendable and defensible” Bayesian learning model, we have to go beyond blindly ‘turning the crank’ based on a “go-as-you-like” [approximate guess] prior. A lackluster attitude towards prior modeling could lead to disastrous inference, impacting various fields from clinical drug development to presidential election forecasts. The real questions are: How can we uncover the blind spots of the conventional wisdom-based prior? How can we develop the science of prior model-building that combines both data and science [DS-prior] in a testable manner – a double-yolk Bayesian egg?”

**I** came across on R-bloggers this presentation of a paper by Subhadeep Mukhopadhyay and Douglas Fletcher, Bayesian modelling via goodness of fit, which aims at solving all existing problems with classical Bayesian solutions, apparently! (With also apparently no awareness of David Spiegelhalter’s take on the matter.) As illustrated by both quotes, above and below:

“The two key issues of modern Bayesian statistics are: (i) establishing principled approach for distilling statistical prior that is consistent with the given data from an initial believable scientific prior; and (ii) development of a Bayes-frequentist consolidated data analysis workflow that is more effective than either of the two separately.”

(I wonder who else in this Universe would characterise “modern Bayesian statistics” in such a non-Bayesian way! And love the notion of distillation applied to priors!) The setup is actually one of empirical Bayes inference where repeated values of the parameter θ drawn from the prior are behind independent observations. Which is not the usual framework for a statistical analysis, where a single value of the parameter is supposed to hide behind the data, but most convenient for frequency based arguments behind empirical Bayes methods (which is the case here). The paper adopts a far-from-modern discourse on the “truth” of “the” prior… (Which is always conjugate in that Universe!) Instead of recognising the relativity of a statistical analysis based on a given prior.

When I tried to read the paper any further, I hit a wall as I could not understand the principle described therein. And how it “consolidates Bayes and frequentist, parametric and nonparametric, subjective and objective, quantile and information-theoretic philosophies”. Presumably the lack of oxygen at the altitude of Chamonix…. Given an “initial guess” at the prior, g, a conjugate prior (in dimension one with an invertible cdf), a family of priors is created in what first looks like a form of non-parametric exponential tilting of g. But a closer look [at (2.1)] exposes the “family” as the tautological π(θ) = g(θ) × π(θ)/g(θ). The ratio is expanded into a Legendre polynomial series. Which use in Bayesian statistics dates a wee bit further back than indicated in the paper (see, e.g., Friedman, 1985; Diaconis, 1986). With the side issue that the resulting approximation does not integrate to one. Another side issue is that the coefficients of the truncated Legendre series are approximated by simulations from the prior [Step 3 of the Type II algorithm], rarely an efficient approach to the posterior.
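For a concrete illustration of this Step 3 (entirely mine, not the paper's code), take g = U(0,1), so that G is the identity cdf, and a “true” prior Be(2,2): the first shifted Legendre coefficients of the ratio can then be estimated by simulation from g:

```r
# toy Monte Carlo estimation of the shifted Legendre coefficients of the
# ratio d(u) = f(u)/g(u), with g = U(0,1) and f = Be(2,2)
set.seed(1)
leg1 = function(u) sqrt(3) * (2*u - 1)            # orthonormal Legendre on [0,1]
leg2 = function(u) sqrt(5) * (6*u^2 - 6*u + 1)
d = function(u) dbeta(u, 2, 2)                    # ratio of Be(2,2) to U(0,1)
u = runif(1e5)                                    # simulations from the prior g
c1 = mean(d(u) * leg1(u))                         # estimates of <d, leg_j>
c2 = mean(d(u) * leg2(u))                         # exact values: 0 and -1/sqrt(5)
dhat = function(u) 1 + c1*leg1(u) + c2*leg2(u)    # truncated expansion of d
```

In this toy case the truncation is exact (the Be(2,2) density is a quadratic polynomial), so d̂ recovers d up to Monte Carlo error; with estimated coefficients and longer expansions, nothing guarantees the truncated d̂ remains non-negative.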
