## About

**I** am a professor of Statistics at both Université Paris Dauphine, Paris, France, and University of Warwick, Coventry, United Kingdom, with a definitely unhealthy (but so far not fatal) fascination for mountains and (easy) climbing, in particular for Scotland in Winter, an almost daily run, and a reading list mainly centred on fantasy books… Plus an addiction to bloggin’ since 2008! Hence the categories on this blog (or ‘og, because ‘log and b’og did not sound good). The Statistics posts do mainly focus on computational and Bayesian topics, on papers or preprints I find of interest (or worth criticising), and on the (not so) occasional trip abroad to a research centre or to a conference.

**N**eedless to say (?), this blog is not approved by, supported by, or in any other way affiliated with the Université Paris Dauphine, CREST-INSEE, University of Warwick, or any other organization, and it only reflects my opinions. This is also one of the reasons why it is posted on WordPress rather than on my University webpage, another one being that WordPress provides a handy (if sometimes slow) tool for editing blogs…

May 2, 2021 at 8:37 am

Prof xi’an

I am deriving the full conditional of β.

Y follows a Bernoulli distribution with parameter p.

The convolution model is given as: logit(p) = Xβ + u + v,

where u denotes the spatially structured random effects and v the unstructured (non-spatial) random effects.

The posterior distribution is given as:

P(u, v, K, λ, β | y) ∝ likelihood × structured CAR prior × unstructured exchangeable prior × normal priors × hyperpriors

Likelihood = ∏_i (n_i choose y_i) p_i^{y_i} (1 − p_i)^{n_i − y_i}

β follows a normal distribution.

My question is: what do I have to keep from this likelihood if I want to derive the conditional distribution of β? P(β | ·) = ?

Please, how?

May 2, 2021 at 12:30 pm

This is not really the proper setting to answer the question! A quick if unhelpful answer is that the conditional posterior on β includes all terms from the posterior where β appears. I suggest you ask the question on Cross Validated, where you can format the maths formula properly.
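To illustrate that answer in the convolution model above, and assuming a N(0, Σ) prior on β (Σ standing for whatever prior covariance is chosen), the shape of the conditional is obtained by keeping only the factors of the posterior involving β:

```latex
\pi(\beta \mid u, v, y) \;\propto\; \prod_{i} p_i^{y_i}\,(1-p_i)^{n_i - y_i}
  \times \exp\!\left( -\tfrac{1}{2}\, \beta^\top \Sigma^{-1} \beta \right),
\qquad \operatorname{logit}(p_i) = x_i^\top \beta + u_i + v_i .
```

The binomial coefficients and the CAR, exchangeable, and hyper priors are constant in β and drop out; the resulting density is not of standard form, hence it is usually simulated by Metropolis-within-Gibbs moves.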

May 3, 2021 at 11:25 am

Okay Prof,

I asked the question on Cross Validated

May 4, 2021 at 9:06 am

Please provide the link.

May 4, 2021 at 11:44 am

Prof, this is the link

https://stats.stackexchange.com/questions/522332/to-drive-conditional-distribution-of-%ce%b2

thank you

December 14, 2019 at 10:59 pm

[…] tip to Professor Christian Robert for pointing out this article at his […]

November 16, 2018 at 6:33 am

Hi,

I came across your blog after reading your answers to this question (https://stats.stackexchange.com/questions/22749/how-to-compute-importance-sampling). I am an undergraduate just getting to learn importance sampling and rejection sampling, and I am having a very hard time grasping how to construct an importance sampler.

Ex)

Given that X ~ t distribution with ν = 1, estimate y = P(X > 1000).

I want to know what it means to construct an importance sampler which estimates y. I really want to understand the question but am having a hard time…Any help would be appreciated.

Thanks so much!

November 16, 2018 at 7:25 am

So far, I have this:

The t density with ν = 1 has pdf

f(x) = 1 / [π (1 + x²)]

so that

P(X > 1000) = ∫_{1000}^∞ 1 / [π (1 + x²)] dx

I am trying to approximate this integral using importance sampling. From what I’ve read, it looks like I need to come up with proposal distributions g1 and g2… my understanding
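For reference, that attempt can be completed into a minimal importance sampler in Python. The Pareto proposal below is one possible choice of mine (it keeps the weights f/g bounded on the tail), not the g1/g2 intended by the course material:

```python
import numpy as np

rng = np.random.default_rng(42)

def cauchy_pdf(x):
    # t density with nu = 1 (standard Cauchy), as written above
    return 1.0 / (np.pi * (1.0 + x**2))

n = 100_000
u = rng.uniform(size=n)
x = 1000.0 / u                 # inverse-cdf draws from a Pareto(1) on (1000, inf)
g = 1000.0 / x**2              # Pareto proposal density on (1000, inf)
w = cauchy_pdf(x) / g          # importance weights f/g, bounded for this proposal
estimate = w.mean()            # Monte Carlo estimate of P(X > 1000)

exact = 0.5 - np.arctan(1000.0) / np.pi   # closed form, for checking
print(estimate, exact)
```

Because the weights are nearly constant over the tail, this particular proposal gives a very low-variance estimate of the (tiny) tail probability.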

November 16, 2018 at 3:52 pm

It would be most convenient if you could post your question on Cross Validated, both for describing the problem in full detail and for getting experts’ opinions!

November 16, 2018 at 6:58 pm

Thank you! I posted it here. https://stats.stackexchange.com/questions/377384/importance-sampling-t-distribution

July 20, 2018 at 3:59 pm

Hi,

I reached your blog after following your great replies, like the one shown in [0]. I have a similar challenge, namely sampling spheres with a given volume distribution and a fixed total volume [1], so your replies have been helpful in starting to clarify my problem. Still, I am a bit lost with the notation and the tools used. I more or less translated your R code to Python to understand the solution, but the notation still evades me. Since I am a newbie in the whole field of probability, and even more so in MCMC and related methods, could you please point me to some bibliography where I can learn more and get familiar with these topics? Thanks a lot.

[0] https://stats.stackexchange.com/questions/244776/how-to-sample-from-a-distribution-so-that-mean-of-samples-equals-expected-value?noredirect=1&lq=1

[1] https://math.stackexchange.com/questions/2838119/how-to-efficiently-sample-data-from-a-known-cumulative-distribution-of-a-functi/2838603#2838603

July 20, 2018 at 4:17 pm

George Casella and I have written a textbook on Monte Carlo methods called Monte Carlo Statistical Methods, so this would be my first entry!

May 1, 2021 at 4:38 pm

Thank you Prof xi’an. You are doing a great job

Please, can I get your email?

I want to ask you a question on “deriving conditional distributions” in a hierarchical Bayesian model.

May 1, 2021 at 5:28 pm

It should not be too hard to find my email address!

May 24, 2018 at 6:28 pm

Dear Xi’an,

I see that you reviewed the book The Slow Regard of Silent Things (Kingkiller) before. I have written a book that is similar to that. Would you be willing to let me provide you with a copy of the book in hopes that you would consider reviewing my book as well?

My name is Charles D. Shell, and the book I want to send is titled Blood Calls. You can find a link to it here. (https://www.amazon.com/Blood-Calls-History-Book-ebook/dp/B00COJPCHQ/ref=asap_bc?ie=UTF8)

I can provide it to you as whatever digital file you wish. It’s up to you.

Of course, I understand that you are under no obligation to review my book, and if you do review it, all I ask is that you leave an honest review. I am simply looking for the opportunity to have you consider it.

Thank you. I look forward to your response.

Sincerely,

Charles D. Shell

May 24, 2018 at 6:42 pm

Dear Charles, congratulations and thank you for the proposal. I do not read digital books, unfortunately, as I feel my reading time is a way to get away from the computer! I wish you good luck with your book. Best,

Christian

December 14, 2017 at 9:17 am

[…] Gelman and Christian Robert respond to E.J. Wagenmakers […]

May 10, 2017 at 1:08 pm

[…] and Data Science that posts regularly in the unusually named blog Xi’ an’s OG. Here it is a brief biographical description of the author of this Blog, which sports a somewhat mysterious identity style of […]

April 11, 2017 at 4:57 am

Hi Xi’an,

I am wondering why your name coincides with the name of a Chinese city (with a long history)? Is it a coincidence or something deliberate?

Cheers,

Benjamin

April 11, 2017 at 8:29 am

I use this abbreviation of my first name, much as X’mas is sometimes used in the US to abbreviate Christmas. And the analogy with the historical Chinese city explains the drift from X’ian to Xi’an.

January 13, 2017 at 7:52 pm

Hi Xi’an

I was wondering if you would be willing to do a brief feature of Datazar on your blog. Of course, we would happily return the favor by sending your blog out in our weekly newsletter and on Twitter, which reaches 5,000+ focused users. Let me know if you would be interested in this and we can set something up.

Best,

Brian

January 14, 2017 at 11:03 am

Sure, tell me more about Datazar!

January 14, 2017 at 9:20 pm

Sure thing! Datazar is a community-based research collaboration platform where anyone can host, share, and perform analysis on data using tools like R, SQL, and D3, all in the cloud. It also supports a community of open, modular science.

There’s a pretty informative platform page here: https://www.datazar.com/about/platform/

…and more about our mission, here: https://www.datazar.com/about/

January 21, 2017 at 12:22 am

If this is something you would be interested in, please email me back at Brian [at] datazar.com!

December 22, 2016 at 11:09 pm

Thank you for maintaining a very informative blog. I am a student of cognitive science, and I have been learning ABC for modeling driver behaviors in collision-imminent situations. I have recently been trying to find algorithms that can sample two (or more) parameter values from the priors and apply them within one iteration of the model simulation, at different stochastic points in time, to reflect that the outcome behavior (say, the deceleration applied by the driver) changes over time, for example as the lead vehicle in a rear-end collision applies increasing braking.

I was curious if you had come across (or developed) ABC algorithms that can change parameter values within one time series. The closest my search has taken me is to particle MCMC algorithms, and to another method called the ABC simulated likelihood density method.

Even if you don’t have time to reply, thanks for the amazing knowledge sharing you facilitate!

December 23, 2016 at 4:23 am

Thanks! I will take a look at this problem when I am back from India.
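In the meantime, the basic ABC rejection step that particle MCMC and simulated-likelihood variants elaborate on can be sketched on a toy example. Everything below (a normal mean with a flat prior, the sample mean as summary, the tolerance ε) is an illustrative assumption of mine, not the driver-behavior model:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy setup: infer an unknown mean theta, with known sd = 1
y_obs = rng.normal(2.0, 1.0, size=50)

eps = 0.1                     # tolerance on the summary statistic
accepted = []
for _ in range(20_000):
    theta = rng.uniform(-10, 10)               # draw from a flat prior
    y_sim = rng.normal(theta, 1.0, size=50)    # simulate data from the model
    if abs(y_sim.mean() - y_obs.mean()) < eps:  # compare summaries
        accepted.append(theta)                  # keep theta if close enough

posterior_mean = np.mean(accepted)
```

The accepted θ’s form an approximate posterior sample; sequential and particle versions replace the single flat-prior draw with iteratively refined proposals and shrinking tolerances.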