
No, it must be somewhere in my drafts…!

Dear xi’an – did you ever write that follow-up post? I find myself unable to locate it.

My dear colleagues,

Let me enter the discussion with a little pragmatic opinion.

Professor Kemp once told me that we would be successful with Bayesianism only if we could get an alternative to p-values.

My first try was the P-value that Sergio and I published in our Brazilian journal some years ago. Then, after we understood the problems related to Lindley’s paradox, Julio and I wrote about the e-value in 1999. To compute this index we only need a density function; we do not have to talk about data at that step. After that, Sergio and his colleagues wrote about the decision-theoretic method for tests of significance, where one maybe has to use the data a second time. Julio, on the other side, wrote about the philosophical aspects of the use of e-values, including the composition of different hypotheses on the same space. Both did a superb job of popularizing our e-value. However, I myself have not paid much attention to those other aspects when using e-values in real problems: I leave the decision to the scientists, as Cox and Kemp did in their work. My problem these days is how to define the significance level, and I am trying to define it just in the way Pericchi and I have done recently. I still cannot see an alternative to e-values beyond what we have done in the last paper with Luis. In fact, Julio and Marcelo found a nice solution for a specific problem, Hardy-Weinberg equilibrium. So, I do not use the data for anything after getting my main object: a density that represents the opinion of a scientist at the moment they request my work – clearly, I help them to get posteriors.

A good new year to all of you.

Thanks for your additional comments. I am writing a second post to try to express my “concerns” more clearly. In the meantime, Feliz Ano Novo!

Dear Robert: I would like to make an additional comment. In the article M.R. Madruga, L.G. Esteves and S. Wechsler (2001), “On the Bayesianity of Pereira-Stern Tests”, Test, 10(2), 291-299, the authors prove that the FBST can be derived from a well-specified loss function. I am always looking for alternative interpretations and different epistemological frameworks, but Sergio’s (and his co-authors’) theorem demonstrates that this is not strictly necessary: the FBST can be fully understood within the scope of standard Bayesian statistics. As for using the data twice, I believe that, once again, Sergio is right. Fubini’s theorem, allowing the change of integration order, can be used to show the equivalence of the normal and extensive forms of the implied decision rules. I believe that this is enough to answer your questions and resolve your concerns.

Crowd,

I believe we are dancing around frequentist superstitions here. Let’s be Bayesian at least during these last two days of the year: if H has prior probability zero, it also has zero posterior probability given any data used once, twice, or any finite number of times. On the other hand, if H has positive probability, one a fortiori sticks to the probability calculus and avoids the FBST. There will not be any P(theta given x and x) different from P(theta given x). So even if one uses the data “twice”, the definition of conditional probability keeps one coherent.
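The identity P(theta | x and x) = P(theta | x) can be checked mechanically on a toy finite model. The sketch below uses hypothetical numbers (any small discrete model would do): it conditions a prior on the event E = {x = 1} once and then on E ∩ E, and the two posteriors agree exactly.

```python
from fractions import Fraction as F

# Toy finite probability space over (theta, x) pairs (hypothetical model):
# prior P(theta = 1) = 1/2; likelihood P(x = 1 | theta = 1) = 4/5,
# P(x = 1 | theta = 0) = 1/5.
P = {
    (0, 0): F(1, 2) * F(4, 5),
    (0, 1): F(1, 2) * F(1, 5),
    (1, 0): F(1, 2) * F(1, 5),
    (1, 1): F(1, 2) * F(4, 5),
}

def posterior_theta1(event):
    """P(theta = 1 | event), where event is a set of (theta, x) outcomes."""
    p_event = sum(p for w, p in P.items() if w in event)
    p_joint = sum(p for w, p in P.items() if w in event and w[0] == 1)
    return p_joint / p_event

E = {w for w in P if w[1] == 1}     # the observed event "x = 1"

once  = posterior_theta1(E)         # condition on the data once
twice = posterior_theta1(E & E)     # "use the data twice": E ∩ E = E
assert once == twice == F(4, 5)     # identical, exactly
```

Of course this only restates the definition of conditional probability; the non-trivial case flagged above is precisely when the update rule is not Bayesian conditioning.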

A complication may arise if opinion updating is modelled as probability kinematics or some other non-static rule. This could deserve further research, one of course being advised that no coherence theorem will ever arise (see Phil Dawid on robots).

Another issue is whether the Likelihood Principle is violated or not by the use of model-dependent official priors. It may be argued that, technically, the LP is not violated, as it says that inferences under proportional likelihoods must be the same *for any fixed prior*. So what? One can then follow the LP while being incoherent. De Finetti sort of made this point when he snubbed the Likelihood Principle as a trivial and “obvious” property of Bayesian inference.

Bonne Année à tous

sw

About the difficulties of performing the optimization step of the e-value, namely, maximizing a proper density (or surprise) function over a sharp hypothesis H (presented as a regular proper sub-manifold of the parameter space):

Technical difficulty: easy to moderate. Optimizing a proper density or surprise function is easy, as long as one has good constrained optimization software available. Several of our articles with applications of the FBST discuss good optimization techniques. Obs: obtaining a non-zero measure over the sharp H would be very tricky indeed (leading to all kinds of problems, like Lindley’s paradox), but that is precisely the difficulty that the e-value avoids!

Theoretical difficulty: moderate. It requires the user to understand that the e-value is NOT related to the posterior probability of H, nor to a ratio of probabilities of H and its complement, etc. Instead, it requires the user to understand that the e-value is a possibility measure of H. Furthermore, the user must understand that the e-value, ev(H|X), has the desired properties of consistency with its underlying posterior probability measure, p_n(t), and conformity with its underlying surprise function, s(t) = p_n(t) / r(t). Our two papers in the Logic Journal of the IGPL cover these topics with great care.
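As an illustration of the two-step construction (maximization then integration), here is a minimal numerical sketch for a point hypothesis under a Beta posterior. All numbers are hypothetical, and a flat reference density r(t) = 1 is assumed, so the surprise function s(t) reduces to the posterior density p_n(t).

```python
import math

# Hypothetical posterior: Beta(8, 4), e.g. 7 successes in 10 trials, flat prior.
a, b = 8, 4
LOG_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def post_pdf(t):
    """Posterior density p_n(t); with r(t) = 1 this is also the surprise s(t)."""
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return math.exp((a - 1) * math.log(t) + (b - 1) * math.log(1.0 - t) - LOG_B)

def e_value(theta0, n_grid=100_000):
    """ev(H|X) for the sharp hypothesis H: theta = theta0."""
    # Maximization step: sup of s(t) over H. Here H = {theta0} is a single
    # point, so the constrained optimization is trivial.
    s_star = post_pdf(theta0)
    # Integration step: posterior mass of the tangential set {t : s(t) > s*},
    # by midpoint rule on a grid.
    h = 1.0 / n_grid
    mass = 0.0
    for i in range(n_grid):
        d = post_pdf((i + 0.5) * h)
        if d > s_star:
            mass += d * h
    return 1.0 - mass

print(e_value(0.7))   # at the posterior mode: e-value near 1
print(e_value(0.5))   # intermediate evidence for H: theta = 0.5
print(e_value(0.01))  # deep in the tail: e-value near 0
```

With a flat reference density the tangential set is a highest-posterior-density region, which is one way to see why the e-value behaves as a possibility measure rather than as a posterior probability of H.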

Epistemological difficulty: moderate to great. Each well-constructed statistical significance measure must be escorted or accompanied by a suitable epistemological framework. For example, p-values are escorted by Popperian falsificationism and the corresponding metaphor of the Scientific Tribunal; Bayes factors are escorted by de Finettian decision theory and the corresponding betting-odds metaphor. Several articles cited in our papers in the Logic Journal of the IGPL discuss the Cognitive Constructivism epistemological framework and the corresponding metaphor of Objects as Eigen-Solutions. Obs: usually, the practice of statistics only requires very superficial epistemological discussions, if any, although sparse epistemological arguments are often used to justify some key theoretical properties required of statistical procedures.

Dear Robert:

When you say that the procedure “…depends on the Data AND the Likelihood function”, it gives the impression that it depends on the data in a second way, beyond the likelihood function. That would be a violation of the Likelihood Principle. This is not the case: the e-value (and the FBST) are strictly conformant with the Likelihood Principle (something that many pseudo-Bayesian procedures floating around fail to accomplish). Hence, it seems that your objection boils down to my point (2c), namely, that the e-value is built in a two-step procedure: a maximization step followed by an integration step.

In this case, I am afraid that the commandment “thou shalt use the data once” (in a single integration step) becomes so restrictive that it precludes anything outside the established orthodoxy; that is, it cannot be considered a foundational principle, becoming instead a normative law or a requirement for canonical procedures. The e-value is indeed very different from a Bayes factor. If that is the bulk of your objection, we are in full agreement!