## Archive for MaxEnt

## another instance of a summer of Bayesian conferences

Posted in pictures, Statistics, Travel, University life with tags Alan Turing Institute, Bayesian conference, Britain, Edinburgh, ISBA 2018, London, MaxEnt, MCqMC 2018, Oxford (Mississippi), summer school on March 15, 2018 by xi'an

**A**s it happens, the next MaxEnt conference will take place in London, on 2-6 July, at the Alan Turing Institute, which makes it another perfect continuation of the ISBA meeting in Edinburgh, or of the Computational Statistics summer school in Warwick the week after. But it is in competition with BAYSM in Warwick and MCqMC in Rennes. I once attended a MaxEnt meeting in Oxford. (Oxford, Mississippi!) It was quite interesting in the audience it attracted and in the focus of the discussions, some of which were exhilaratingly philosophical!

## Bureau international des poids et mesures [bayésiennes?]

Posted in pictures, Statistics, Travel with tags admissibility, Bayesian inference, Bureau international des poids et mesures, confidence intervals, conventions, France, frequentist inference, MaxEnt, norms, Paris, Pavillon de Breteuil, Sèvres, subjective versus objective Bayes, workshop on June 19, 2015 by xi'an

**T**he workshop at the BIPM on measurement uncertainty was certainly most exciting, first by its location in the Parc de Saint Cloud, in classical buildings overlooking the Seine river in a most bucolic manner… and second by its mostly Bayesian flavour. The recommendations that the workshop addressed are about revisions in the current GUM, which stands for the Guide to the Expression of Uncertainty in Measurement. The discussion centred on using a more Bayesian approach than in the earlier version, with the organisers of the workshop and leaders of the revision apparently most in favour of that move. “Knowledge-based pdfs” came into the discussion as an attractive notion since it rings a Bayesian bell, especially when associated with probability as a degree of belief and incorporating the notion of an a priori probability distribution. And propagation of errors. Or even more when mentioning the removal of frequentist validations. What I gathered from the talks is the perspective drifting away from central limit approximations to more realistic representations, calling for Monte Carlo computations. There is also a lot I did not get about conventions, codes and standards. Including a short debate about the different meanings of Monte Carlo, from simulation technique to calculation method (as for confidence intervals). And another discussion about replacing the old formula for estimating an sd, moving from the Normal to the Student's *t* case. A change that remains highly debatable since the Student's *t* assumption is as shaky as the Normal one. What became clear [to me] during the meeting is that a rather heated debate is currently taking place about the need for a revision, with some members of the six (?) organisations involved arguing against Bayesian or linearisation tools.

This became even clearer during our frequentist versus Bayesian session, with a first talk so outrageously anti-Bayesian it was hilarious! Among other things, the notion that “fixing” the data was against the principles of physics (the speaker was a physicist), that the only randomness in a Bayesian coin tossing was coming from the prior, that the likelihood function was a subjective construct, that the definition of the posterior density was a generalisation of Bayes' theorem [a generalisation found in… Bayes' 1763 paper, then!], that objective Bayes methods were inconsistent [because Jeffreys' prior produces an inadmissible estimator of μ²!], that the move to Bayesian principles in GUM would cost the New Zealand economy 5 billion dollars [hopefully a frequentist estimate!], &tc., &tc. The second pro-frequentist speaker was by comparison much more reasonable, although he insisted on showing that Bayesian credible intervals do not achieve a nominal frequentist coverage, using a sort of fiducial argument distinguishing x=X+ε from X=x+ε that I missed… A lack of achievement that is fine by my standards. Indeed, a frequentist confidence interval provides a coverage guarantee either for a fixed parameter (in which case the Bayesian approach achieves better coverage by constant updating) or for a varying parameter (in which case the frequency of proper inclusion is of no real interest!). The first Bayesian speaker was Tony O'Hagan, who tore the first talk to shreds. And who also criticised GUM2 for using reference priors and maxent priors. I am afraid my talk was a bit too exploratory for the audience (since I got absolutely no question!). In retrospect, I should have given an intro to reference priors.
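The coverage point can be checked numerically. Here is a minimal sketch (my own toy setup, not taken from any of the talks): for a Normal mean θ with known unit variance, the 95% credible interval under a near-flat Normal prior attains essentially the nominal frequentist coverage at any fixed θ, while an informative prior trades coverage away for values of θ far from its centre.

```python
# Toy illustration (not from the workshop): frequentist coverage of a 95%
# Bayesian credible interval for a Normal mean with known unit variance,
# under a N(0, tau2) prior. Posterior is N(w*x, w) with w = tau2/(tau2+1).
import numpy as np

rng = np.random.default_rng(0)

def coverage(theta, tau2=1.0, n_rep=100_000):
    """Monte Carlo estimate of P(theta in 95% credible interval | theta)."""
    x = rng.normal(theta, 1.0, size=n_rep)   # repeated frequentist draws
    w = tau2 / (tau2 + 1.0)                  # posterior shrinkage weight
    half = 1.96 * np.sqrt(w)                 # 95% credible half-width
    lo, hi = w * x - half, w * x + half
    return np.mean((lo <= theta) & (theta <= hi))

# near-flat prior: coverage close to the nominal 95% at any fixed theta
print(coverage(2.0, tau2=1e6))
# informative prior, theta in the prior tail: coverage falls well below 95%
print(coverage(3.0, tau2=1.0))
```

With a near-flat prior the credible interval essentially coincides with the classical confidence interval, so its frequentist coverage is nominal; with a strongly informative prior the shrinkage towards zero costs coverage exactly where the prior is wrong, which is the trade-off at stake in that session.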

An interesting specificity of a workshop on metrology and measurement is that the participants are hard sticklers for schedule, starting and finishing right on time. When a talk finished early, we waited until the intended start of the next talk, without even allowing for extra discussion. When the only speaker to run overtime (a Belgian) went close to 10 minutes over, I was afraid he would (deservedly) get lynched! He escaped unscathed, but may (and should) not get invited again..!

## bioinformatics workshop at Pasteur

Posted in Books, Statistics, University life with tags bioinformatics, John Burdon Sanderson Haldane, MaxEnt, maximum entropy, protein folding on September 23, 2013 by xi'an

**O**nce again, I found myself attending lectures on a Monday! This time, it was at the Institut Pasteur (where I did not spot any mention of Alexandre Yersin), in the bioinformatics unit, around Bayesian methods in computational biology. The workshop was organised by Michael Nilges and the program started as follows:

9:10 AM Michael Habeck (MPI Göttingen) Bayesian methods for cryo-EM

9:50 AM John Chodera (Sloan-Kettering research institute) Toward Bayesian inference of conformational distributions, analysis of isothermal titration calorimetry experiments, and forcefield parameters

11:00 AM Jeff Hoch (University of Connecticut Health Center) Haldane, Bayes, and Reproducible Research: Bedrock Principles for the Era of Big Data

11:40 AM Martin Weigt (UPMC Paris) Direct-Coupling Analysis: From residue co-evolution to structure prediction

12:20 PM Riccardo Pellarin (UCSF) Modeling the structure of macromolecules using cross-linking data

2:20 PM Frederic Cazals (INRIA Sophia-Antipolis) Coarse-grain Modeling of Large Macro-Molecular Assemblies: Selected Challenges

3:00 PM Yannick Spill (Institut Pasteur) Bayesian Treatment of SAXS Data

3:30 PM Guillaume Bouvier (Institut Pasteur) Clustering protein conformations using Self-Organizing Maps

This is a highly interesting community, from which stemmed many of the MC and MCMC ideas, but I must admit I got lost (in translation) most of the time (and did not attend the workshop till its end), just like when I attended this workshop at the German synchrotron in Hamburg last Spring: some terms and concepts were familiar like Gibbs sampling, Hamiltonian MCMC, HMM modelling, EM steps, maximum entropy priors, reversible jump MCMC, &tc., but the talks were going too fast (for me) and focussed instead on the bio-chemical aspects, like protein folding, entropy-enthalpy, free energy, &tc. So the following comments mostly reflect my being alien to this community…

**F**or instance, I found the talk by John Chodera quite interesting (in a fast-forward high-energy/content manner), but the probabilistic modelling was mostly absent from his slides (and seemed to reduce to a Gaussian likelihood) and the defence of Bayesian statistics sounded a bit like a mantra at times (something like *“put a prior on everything you do not know and everything will end up fine with enough simulations”*), a feature I once observed in the past with Bayesian ideas coming to a new field (although this hardly seems to be the case here).

**A**ll talks I attended mentioned maximum entropy as a modelling tool, apparently a common one in this domain (though there were too few details for me to judge). For instance, Jeff Hoch's talk remained at a very general level, referring to a large literature (incl. David Donoho's) for the advantages of using MaxEnt deconvolution to preserve sensitivity. (The “Haldane” part of his talk was about Haldane —who moved from UCL to the ISI in Calcutta— writing a parody on how to fake genetic data in a convincing manner. And showing the above picture.) Although he linked them with MaxEnt principles, Martin Weigt's talk was about Markov random fields modelling contacts between amino acids in the protein, but I could not get how the selection among the huge number of possible models was handled: to me, it seemed to amount to estimating a graphical model on the protein, as it also did for my neighbour. (No sign of any ABC processing in the picture.)

## MaxEnt 2013, Canberra, Dec. 15-20

Posted in Mountains, pictures, Statistics, Travel, University life with tags Australia, Canberra, conference, E.T. Jaynes, MaxEnt, maximum entropy, O'Bayes, Oxford (Mississippi) on July 3, 2013 by xi'an

**J**ust got the announcement that MaxEnt 2013, the 33rd of its kind, is taking place in Canberra, Australia, next December. (Which is winter here but summer there!) See the website for details, although they are not yet aplenty! I took part in MaxEnt 2009, in Oxford, Mississippi, but will not attend MaxEnt 2013 as it is (far away and) held during O'Bayes 2013 at Duke…

## Seminar of philosophy [ex-post]

Posted in Books, pictures, Statistics with tags Evidence and Evolution, MaxEnt, objective Bayes, philosophy of sciences, Rue Watt, Université Paris-Diderot on December 1, 2010 by xi'an

**Y**esterday, I gave my talk at the Seminar of philosophy of mathematics at Université Paris Diderot, in this new district of Paris where I always get lost because construction work continuously modifies the topology of the place. (This year, I ended up biking the mythical Rue Watt, which has been beautifully renovated.) I managed nonetheless to get there in time and talked about Bayesian model choice and the difficulties with Murray Aitkin's proposal. The talk was presumably much too mathematical and not philosophical enough, but it was followed by a discussion launched by the two following speakers, Jan Sprenger and Bengt Autzen. Due to teaching duties, I could only attend the talk by Jan Sprenger, who covered the philosophical aspects of the difficulty in defining objective Bayesian inference, alas missing both Bengt's and Steve Fienberg's talks… He mostly focussed on MaxEnt priors, with an interesting counterexample by Teddy Seidenfeld, but also mentioned reference priors as suffering from the same difficulties. From my (non-philosophical) perspective, I consider that MaxEnt priors fall short in terms of objectivity, because they first require the definition of a reference measure for the (entropy) divergence to be defined. During the talk, Jan also mentioned the book *In Defence of Objective Bayesianism* by Jon Williamson, which I will try to read (and comment on) in the coming months. I just had a few words with Steve, who told me he had worked on Sober's *Evidence and Evolution* as part of his PhD thesis, so I wished we had had more time to chat about that! (Steve has proposed to give his talk at the students' seminar here in CREST so that we can discuss effects of causes versus causes of effects.)
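The reference-measure point about MaxEnt priors can be spelled out (notation mine, a standard textbook formulation rather than anything from the talk): the MaxEnt prior under moment constraints maximises an entropy defined *relative to* a dominating measure with density μ, and the solution inherits μ explicitly.

```latex
% MaxEnt prior under moment constraints E_p[t_k(theta)] = tau_k,
% relative to a reference (dominating) measure with density mu:
\max_{p}\ -\int \log\frac{p(\theta)}{\mu(\theta)}\,p(\theta)\,\mathrm{d}\theta
\quad\text{subject to}\quad
\int t_k(\theta)\,p(\theta)\,\mathrm{d}\theta = \tau_k,
% whose Lagrangian solution is an exponential tilting of mu:
p^{\star}(\theta)\ \propto\ \mu(\theta)\,
\exp\Bigl\{\sum\nolimits_{k} \lambda_k\, t_k(\theta)\Bigr\}.
```

A different choice of μ (Lebesgue, Jeffreys, counting measure) yields a different p⋆ under the very same constraints, which is precisely the objectivity difficulty raised above.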