I am a professor of Statistics at both Université Paris Dauphine, Paris, France, and University of Warwick, Coventry, United Kingdom, with a definitely unhealthy (but so far not fatal) fascination for mountains and (easy) climbing, in particular for Scotland in Winter, an almost daily run, and a reading list mainly centred on fantasy books… Plus an addiction to bloggin’ since 2008! Hence the categories on this blog (or ‘og, because ‘log and b’og did not sound good). The Statistics posts do mainly focus on computational and Bayesian topics, on papers or preprints I find of interest (or worth criticising), and on the (not so) occasional trip abroad to a research centre or to a conference.

Needless to say (?), this blog is not approved by, supported by, or in any other way affiliated with the Université Paris Dauphine, CREST-INSEE, University of Warwick, or any other organization, and it only reflects my opinions. This is also one of the reasons why it is posted on wordpress rather than on my University webpage, another one being that wordpress provides a handy (if sometimes slow) tool for editing blogs…

45 Responses to “About”

  1. Prof xi’an

    I am deriving the full conditional of β.

    Y follows a Bernoulli(p) distribution.

    The convolution model is given as: logit(p) = Xβ + u + v,

    where u denotes the spatial random effects and v the non-spatial random effects.

    The posterior distribution is given as:

    P(u, v, K, λ, β | y) ∝ likelihood × structured CAR prior × unstructured exchangeable prior × normal priors × hyperpriors

    Likelihood = ∏_i C(n_i, y_i) p_i^{y_i} (1 − p_i)^{n_i − y_i}
    β follows a normal distribution.
    My question is: what do I do with this likelihood if I want to derive the conditional distribution of β, i.e. P(β|·)?
    Please, how?
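    [A sketch of one possible answer: the full conditional P(β|·) is proportional to the binomial likelihood times the normal prior on β, with u and v held fixed in the linear predictor. Because the logit link is not conjugate with a normal prior, there is no closed form, and β is usually updated with a Metropolis step inside the Gibbs sampler. The snippet below illustrates this on entirely made-up data — X, y, n, the offset standing in for u + v, and the prior variance are all hypothetical, not taken from the question.]

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical data: n_i binomial trials, y_i successes, covariates X,
    # and a fixed offset playing the role of u + v in the Gibbs sweep
    m, d = 50, 2
    X = rng.normal(size=(m, d))
    beta_true = np.array([0.5, -1.0])
    offset = rng.normal(scale=0.1, size=m)
    n = rng.integers(5, 20, size=m)
    p = 1.0 / (1.0 + np.exp(-(X @ beta_true + offset)))
    y = rng.binomial(n, p)

    tau2 = 100.0  # hypothetical prior variance: beta ~ N(0, tau2 * I)

    def log_cond(beta):
        """log P(beta | .) up to a constant: binomial log-likelihood
        (binomial coefficients drop out) plus normal log-prior."""
        eta = X @ beta + offset
        loglik = np.sum(y * eta - n * np.log1p(np.exp(eta)))
        logprior = -0.5 * beta @ beta / tau2
        return loglik + logprior

    def metropolis_beta(beta, scale=0.2, iters=2000):
        """Random-walk Metropolis targeting the full conditional of beta."""
        chain = np.empty((iters, d))
        cur = log_cond(beta)
        for t in range(iters):
            prop = beta + scale * rng.normal(size=d)
            new = log_cond(prop)
            if np.log(rng.uniform()) < new - cur:
                beta, cur = prop, new
            chain[t] = beta
        return chain

    chain = metropolis_beta(np.zeros(d))
    print(chain[1000:].mean(axis=0))  # posterior mean estimate for beta
    ```

    In a full Gibbs sampler this Metropolis update for β would alternate with updates of u, v, and the hyperparameters from their own conditionals.
    
    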

  2. […] tip to Professor Christian Robert for pointing out this article at his […]

  3. Alina Twalski Says:


    I came across your blog after reading your answers to this question (https://stats.stackexchange.com/questions/22749/how-to-compute-importance-sampling). I am an undergraduate just starting to learn importance sampling and rejection sampling, and I am having a very hard time grasping how to construct an importance sampler.

    Given that X follows a t distribution with ν = 1, let y = P(X > 1000).
    I want to understand what it means to construct an importance sampler that estimates y. I really want to understand the question but am having a hard time… Any help would be appreciated.

    Thanks so much!
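    [This is in fact a classic tail-probability example: a direct sampler from the t₁ (Cauchy) distribution almost never lands beyond 1000, so one instead simulates from an instrumental density concentrated on (1000, ∞) and reweights by the density ratio. One standard choice of instrumental density is g(y) = 1000/y² on (1000, ∞), simulated by inversion as Y = 1000/U with U uniform — a minimal sketch:]

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000

    # Instrumental density g(y) = 1000 / y**2 on (1000, inf),
    # simulated by inversion: Y = 1000 / U with U ~ Uniform(0, 1)
    u = rng.uniform(size=N)
    y = 1000.0 / u

    # Importance weights: target Cauchy (t_1) density over instrumental density
    f = 1.0 / (np.pi * (1.0 + y**2))
    w = f / (1000.0 / y**2)

    estimate = w.mean()
    exact = np.arctan(1.0 / 1000.0) / np.pi  # closed form for P(X > 1000)
    print(estimate, exact)
    ```

    Because f/g is nearly constant over (1000, ∞), the weights barely vary and the estimator has very low variance, whereas naive Monte Carlo would need millions of Cauchy draws to see even a handful of exceedances.
    
    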

  4. Hi,
    I reached your blog after following your great replies, like the one shown in [0]. I have a similar challenge, namely sampling spheres with a given volume distribution and a fixed total volume [1], so your replies have been helpful in starting to clarify my problem. Still, I am a bit lost with the notation and tools used. I more or less translated your R code to Python to understand the solution, but the notation still evades me. Since I am a newbie in the whole field of probability, and even more so in MCMC and related methods, could you please point me to some references where I can learn more and get familiar with these topics? Thanks a lot.
    [0] https://stats.stackexchange.com/questions/244776/how-to-sample-from-a-distribution-so-that-mean-of-samples-equals-expected-value?noredirect=1&lq=1
    [1] https://math.stackexchange.com/questions/2838119/how-to-efficiently-sample-data-from-a-known-cumulative-distribution-of-a-functi/2838603#2838603

  5. Dear Xi’an,

    I see that you reviewed the book The Slow Regard of Silent Things (Kingkiller) before. I have written a book that is similar to that. Would you be willing to let me provide you with a copy of the book in hopes that you would consider reviewing my book as well?

    My name is Charles D. Shell, and the book I want to send is titled Blood Calls. You can find a link to it here. (https://www.amazon.com/Blood-Calls-History-Book-ebook/dp/B00COJPCHQ/ref=asap_bc?ie=UTF8)

    I can provide it to you as whatever digital file you wish. It’s up to you.
    Of course, I understand that you are under no obligation to review my book, and if you do review it, all I ask is that you leave an honest review. I am simply looking for the opportunity to have you consider it.

    Thank you. I look forward to your response.


    Charles D. Shell

    • Dear Charles, congratulations and thank you for the proposal. I do not read digital books, unfortunately, as I feel my reading time is a way to get away from the computer! I wish you good luck with your book. Best,

  6. […] Gelman and Christian Robert respond to E.J. Wagenmakers […]

  7. […] and Data Science that posts regularly in the unusually named blog Xi’an’s Og. Here is a brief biographical description of the author of this blog, which sports a somewhat mysterious identity style of […]

  8. Benjamin Zhao Says:

    Hi Xi’an,

    I am wondering why your name coincides with the name of a Chinese city (with a long history)? Is it a coincidence or something deliberate?


    • I use this abbreviation of my first name much as X’mas is sometimes used in the US to abbreviate Christmas. And the analogy with the historical Chinese city explains the drift from X’ian to Xi’an.

  9. Hi Xi’an

    I was wondering if you would be willing to do a brief feature of Datazar on your blog. Of course, we would happily return the favor by sending your blog out in our weekly newsletter and on Twitter, which reach 5,000+ focused users. Let me know if you would be interested in this and we can set something up.


  10. Thank you for maintaining a very informative blog. I am a student of cognitive science, and I have been learning ABC for modeling driver behavior in collision-imminent situations. I have recently been trying to find algorithms that can sample two (or more) parameter values from the priors within one iteration of the model simulation, at different stochastic points in time, to reflect that the outcome behavior (say, the deceleration applied by the driver) changes over time, for instance as the lead vehicle in a rear-end collision applies increasing braking.
    I was curious whether you had come across (or developed) ABC algorithms that can change parameter values within one time series. The closest my search took me was particle MCMC algorithms, and another method called ABC simulated likelihood density.
    Even if you don’t have time to reply, thanks for the amazing knowledge sharing you facilitate!
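    [Not the time-varying setting asked about, but for reference, the plain ABC rejection scheme that particle-based extensions build upon can be sketched in a few lines. The toy model below — a normal mean with a normal prior, the sample mean as summary statistic, and the tolerance ε — is entirely made up for illustration:]

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy setup: observed data from N(theta_true, 1); prior theta ~ N(0, 10)
    theta_true = 2.0
    obs = rng.normal(theta_true, 1.0, size=100)
    s_obs = obs.mean()  # summary statistic of the observed data

    def abc_rejection(n_draws=50_000, eps=0.05):
        """ABC rejection: keep prior draws whose simulated summary
        statistic falls within eps of the observed summary."""
        theta = rng.normal(0.0, np.sqrt(10.0), size=n_draws)
        sims = rng.normal(theta, 1.0, size=(100, n_draws)).mean(axis=0)
        return theta[np.abs(sims - s_obs) < eps]

    accepted = abc_rejection()
    print(accepted.mean(), accepted.size)  # approximate posterior mean, sample size
    ```

    For parameters that evolve within a single simulated time series, the natural move is indeed toward sequential schemes (ABC-SMC / particle MCMC), where the parameter is refreshed particle by particle as the series unfolds rather than drawn once per simulation.
    
    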
