top model choice week

[Photo: La Défense and Maisons-Laffitte from my office at Université Paris-Dauphine, Nov. 05, 2011]

Next week, we are having a special Bayesian [top] model choice week in Dauphine, thanks to the simultaneous visits of Ed George (Wharton), Feng Liang (University of Illinois at Urbana-Champaign), and Veronika Rocková (Erasmus University). To start the week and get to know the local actors (!), Ed and Feng will each give a talk on Tuesday, June 18, at 11am and 1pm in Room C108. Here are the abstracts:

11am: Prediction and Model Selection for Multi-task Learning by Feng Liang

In multi-task learning, one simultaneously fits multiple regression models. We are interested in inference problems like model selection and prediction when there is a large number of tasks. A simple version of such models is a one-way ANOVA model where the number of replicates is fixed but the number of groups goes to infinity. We examine the consistency of Bayesian procedures using Zellner's (1986) g-prior and its variants (such as mixed g-priors and Empirical Bayes), and compare their prediction accuracy with that of other procedures, such as the ones based on AIC/BIC and the group Lasso. Our results indicate that the Empirical Bayes procedure (with some modification for the large p, small n setting) can achieve model selection consistency, and also achieves better estimation accuracy than the other procedures considered. During my talk, I'll focus on the analysis of the one-way ANOVA model, but will also summarize our findings for multi-task learning in a more general regression setting. This is based on joint work with my PhD student Bin Li from the University of Illinois at Urbana-Champaign.
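As a quick reference for readers who do not have it in mind (my own addition, not part of the abstract), Zellner's g-prior for the Gaussian linear model takes the following standard form, with the "variants" of the abstract obtained by treating g differently:

```latex
% Zellner's (1986) g-prior for the linear model y = X\beta + \epsilon
\beta \mid \sigma^2, g \;\sim\; \mathcal{N}_p\!\left(0,\; g\,\sigma^2\,(X^{\mathsf{T}}X)^{-1}\right),
\qquad \epsilon \sim \mathcal{N}_n(0, \sigma^2 I_n).
% Mixed g-priors place a hyperprior \pi(g) on g, while Empirical Bayes
% instead estimates g by maximizing the marginal likelihood m(y \mid g).
```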

1pm: EMVS: The EM Approach to Bayesian Variable Selection by Edward George

Despite rapid developments in stochastic search algorithms, the practicality of Bayesian variable selection methods has continued to pose challenges. High-dimensional data are now routinely analyzed, typically with many more covariates than observations. To broaden the applicability of Bayesian variable selection in such high-dimensional linear regression contexts, we propose EMVS, a deterministic alternative to stochastic search based on an EM algorithm which exploits a conjugate mixture prior formulation to quickly find posterior modes. Combining a spike-and-slab regularization diagram for the discovery of active predictor sets with a subsequent rigorous evaluation of posterior model probabilities, EMVS rapidly identifies promising sparse submodels with high posterior probability. External structural information, such as likely covariate groupings or network topologies, is easily incorporated into the EMVS framework. Deterministic annealing variants are seen to improve the effectiveness of our algorithms by mitigating the posterior multimodality associated with variable selection priors. The usefulness of the EMVS approach is demonstrated on real high-dimensional data, where computational complexity renders stochastic search less practical. This is joint work with Veronika Rocková of Erasmus University.
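To give a flavour of the EM scheme behind EMVS, here is a minimal numpy sketch of closed-form E- and M-steps under a Gaussian spike-and-slab prior, assuming fixed spike/slab variances v0 and v1, a fixed inclusion probability theta, and a known noise variance sigma2. All names and default values below are mine, chosen for illustration; they are not taken from the talk or the paper.

```python
import numpy as np
from scipy.stats import norm

def emvs_sketch(X, y, v0=0.01, v1=10.0, theta=0.5, sigma2=1.0, n_iter=50):
    """Toy EM in the spirit of EMVS (hyperparameters illustrative):
    beta_j | gamma_j ~ N(0, sigma2*v1) if gamma_j = 1, else N(0, sigma2*v0),
    with P(gamma_j = 1) = theta. Returns a posterior mode for beta and the
    conditional inclusion probabilities p_star."""
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    # minimum-norm least-squares start, which also works when p > n
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        # E-step: probability that each coefficient comes from the slab
        slab = theta * norm.pdf(beta, scale=np.sqrt(sigma2 * v1))
        spike = (1 - theta) * norm.pdf(beta, scale=np.sqrt(sigma2 * v0))
        p_star = slab / (slab + spike)
        # expected prior precision E[1 / v_{gamma_j} | beta]
        d_star = (1 - p_star) / v0 + p_star / v1
        # M-step: ridge-like update that shrinks "spike" coefficients hard
        beta = np.linalg.solve(XtX + np.diag(d_star), Xty)
    return beta, p_star
```

Coefficients with p_star above one half would then be flagged as active, and the spike-and-slab regularization diagram mentioned in the abstract corresponds (as I understand it) to tracking these solutions as the spike variance v0 varies.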
