Archive for Bayesian econometrics

Roberto Casarin in Warwick [joint Stats/Econometrics seminar series]

Posted in Statistics on February 11, 2020 by xi'an

My friend, coauthor, and former student Roberto Casarin (of Ca’ Foscari Venezia) is giving a talk tomorrow in Warwick:

Bayesian Dynamic Tensor Regression (joint with Billio, M., Iacopini, M., and Kaufmann, S.)

Tensor-valued data (i.e., multidimensional data) are becoming increasingly available and call for suitable econometric tools. We propose a new dynamic linear regression model for tensor-valued response variables and covariates that encompasses some well-known multivariate models as special cases. We exploit the PARAFAC low-rank decomposition to provide a parsimonious parametrization and to incorporate sparsity effects. Our contribution is twofold: first, we extend multivariate econometric models to account for tensor-valued responses and covariates; second, we define a tensor autoregressive process (TAR) and the associated impulse response function for studying shock propagation. Inference is carried out in the Bayesian framework via Markov chain Monte Carlo (MCMC). We apply the TAR model to the study of time-varying multilayer economic networks concerning international trade and international capital stocks. We provide an impulse response analysis to assess the propagation of trade and financial shocks across countries, over time, and between layers.
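
The appeal of the PARAFAC decomposition mentioned in the abstract is its parsimony: a rank-R decomposition replaces the full coefficient tensor with a few factor matrices. A minimal numpy sketch, with toy dimensions of my own choosing (the paper's tensors instead index countries, countries, and network layers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions of a 3-way coefficient tensor and the PARAFAC rank
I, J, K, R = 4, 5, 3, 2

# PARAFAC factors: one matrix per mode, each with R columns, so the
# I*J*K entries of the tensor are replaced by R*(I+J+K) parameters.
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Reconstruct the full tensor: T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
coef_tensor = np.einsum('ir,jr,kr->ijk', A, B, C)

print(coef_tensor.shape)           # (4, 5, 3)
print(I * J * K, R * (I + J + K))  # 60 entries vs 24 free parameters
```

Even in this tiny example the rank-2 parametrization needs less than half the parameters of the unconstrained tensor, and the gap widens quickly with the dimensions.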

The seminar will take place on Thursday Feb. 13 at 14:00 in OC0.01 (Oculus), University of Warwick, Coventry, UK.

Bayesian econometrics in St. Andrews

Posted in Mountains, pictures, Statistics, Travel, University life on April 8, 2019 by xi'an

A call I received for the upcoming 2019 edition of the European Seminar on Bayesian Econometrics (ESOBE), sponsored by the EFaB section of ISBA, which is going to be held at the University of St Andrews in Scotland on Monday 2 and Tuesday 3 September, 2019. I attended an earlier edition in Venezia and enjoyed it very much. Plus, summer in Scotland…, where else?! Submission of papers is still open:
We aim to have a balance of keynotes from both statistics and econometrics, in order to stimulate submissions from statisticians working on Bayesian methodology or applications in economics/finance. We particularly welcome submissions from young Bayesians (PhDs, PostDocs, assistant professors — EFaB funds a “young researcher session” with up to $500 per speaker).

Roberto Casarin’s talk at CREST tomorrow

Posted in Statistics on March 13, 2019 by xi'an

My former student and friend Roberto Casarin (Ca’ Foscari University of Venice) will talk tomorrow at the CREST Financial Econometrics seminar on

“Bayesian Markov Switching Tensor Regression for Time-varying Networks”

Time: 10:30
Date: 14 March 2019
Place: Room 3001, ENSAE, Université Paris-Saclay

Abstract: We propose a new Bayesian Markov switching regression model for multi-dimensional arrays (tensors) of binary time series. We assume a zero-inflated logit dynamics with time-varying parameters and apply it to multi-layer temporal networks. The original contribution is threefold. First, in order to avoid over-fitting we propose a parsimonious parameterisation of the model, based on a low-rank decomposition of the tensor of regression coefficients. Second, the parameters of the tensor model are driven by a hidden Markov chain, thus allowing for structural changes. The regimes are identified through prior constraints on the mixing probability of the zero-inflated model. Finally, we jointly model the dynamics of the network and of a set of variables of interest. We follow a Bayesian approach to inference, exploiting the Pólya-Gamma data augmentation scheme for logit models in order to provide an efficient Gibbs sampler for posterior approximation. We show the effectiveness of the sampler on simulated datasets of medium-to-large size; finally, we apply the methodology to a real dataset of financial networks.
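
The zero-inflated logit in the abstract mixes a point mass at zero with a standard logit, governed by a mixing probability. A hedged numpy sketch of the resulting success probability, with covariates, coefficients, and the mixing probability all invented for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def zi_logit_prob(x, beta, rho):
    """P(y = 1) under a zero-inflated logit: with probability 1 - rho the
    observation is forced to zero, otherwise a standard logit applies."""
    return rho * sigmoid(x @ beta)

x = np.array([1.0, 0.5])      # hypothetical edge covariates (incl. intercept)
beta = np.array([-0.2, 1.0])  # hypothetical regression coefficients
rho = 0.7                     # mixing probability of the zero-inflated model

p = zi_logit_prob(x, beta, rho)
```

Note that p can never exceed rho, which is what makes prior constraints on the mixing probability usable for identifying the regimes.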

asymptotics of synthetic likelihood

Posted in pictures, Statistics, Travel on March 11, 2019 by xi'an

David Nott, Chris Drovandi and Robert Kohn just arXived a paper on a comparison between ABC and synthetic likelihood, which is both interesting and timely given that synthetic likelihood seems to be lagging behind in terms of theoretical evaluation. I am however as puzzled by the results therein as I was by the earlier paper by Price et al. on the same topic. Maybe due to the Cambodia jetlag, since that is where and when I read the paper.

My puzzlement, thus, comes from the difficulty in comparing both approaches on a strictly common ground. The paper first establishes convergence and asymptotic normality for synthetic likelihood, based on the 2003 MCMC paper of Chernozhukov and Hong [which I never studied in detail but which appears to be the MCMC reference in the econometrics literature]. The results are similar to recent ABC convergence results, unsurprisingly when assuming a CLT on the summary statistic vector. One additional dimension of the paper is to consider convergence for a misspecified covariance matrix in the synthetic likelihood [and it will come back with a vengeance]. And asymptotic normality of the synthetic score function, which is obviously unavailable in intractable models.
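
As a reminder of the object being analysed, the synthetic likelihood fits a Gaussian to M simulated summary-statistic vectors and evaluates it at the observed summaries. A minimal numpy sketch, using a toy normal model and (mean, variance) summaries of my own choosing rather than anything from the paper:

```python
import numpy as np

def synthetic_loglik(theta, s_obs, simulate, M, rng):
    """Gaussian synthetic log-likelihood: simulate M summary-statistic
    vectors under theta, fit a Gaussian, evaluate its log-density at s_obs."""
    S = np.array([simulate(theta, rng) for _ in range(M)])  # (M, d) summaries
    mu = S.mean(axis=0)
    Sigma = np.atleast_2d(np.cov(S, rowvar=False))
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(Sigma)
    d = len(s_obs)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(Sigma, diff))

# Toy model: data are N(theta, 1); summaries are (mean, variance) of n draws.
def simulate(theta, rng, n=50):
    x = rng.normal(theta, 1.0, size=n)
    return np.array([x.mean(), x.var()])

rng = np.random.default_rng(1)
s_obs = simulate(0.0, rng)  # pseudo-observed summaries, true theta = 0
ll_true = synthetic_loglik(0.0, s_obs, simulate, M=200, rng=rng)
ll_far = synthetic_loglik(3.0, s_obs, simulate, M=200, rng=rng)
# the synthetic likelihood should clearly favour the generating value
```

The M simulations per evaluation of theta are exactly the cost I find insufficiently accounted for in the comparison below.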

The first point I have difficulty with is how the computing time required for approximating the mean and variance in the synthetic likelihood, by Monte Carlo means, is not accounted for in the comparison between the ABC and synthetic likelihood versions. Remember that ABC only requires one (or at most two) pseudo-samples per parameter simulation, while synthetic likelihood requires M of them, with M later constrained to increase to infinity with the sample size. And simulations are usually the costliest part of such algorithms. If ABC were to use M simulated samples as well, since it already relies on a kernel, it could just as well construct [at least in principle] a similar estimator of the [summary statistic] density. Or else produce M times more pairs (parameter, pseudo-sample). The authors pointed out (once this post came out) that they do account for the factor M when computing the effective sample size (before Lemma 4, page 12), but I still miss why the ESS converging to N=MN/M when M goes to infinity is such a positive feature.

Another point deals with the use of multiple approximate posteriors in the comparison. Since the approximations differ, it is unclear that convergence to a given approximation is all that should matter, if that approximation is less efficient [when compared with the original and out-of-reach posterior distribution]. Especially for a finite sample size n. This chasm between the targets becomes more evident when the authors discuss the use of a constrained synthetic likelihood covariance matrix towards requiring fewer pseudo-samples, i.e., lower values of M, because of the smaller number of parameters to estimate. This should be balanced against the loss in concentration of the synthetic approximation, as exemplified by the realistic examples in the paper. (It is also hard to see why M should not be of order √n for Monte Carlo reasons.)

The last section in the paper revolves around diverse issues for misspecified models, from a wrong covariance matrix to a wrong generating model. As we just submitted a paper on ABC for misspecified models, I will not engage in a debate on this point, but I find the proposed strategy, which goes through an approximation of the log-likelihood surface by a Gaussian process and a derivation of the covariance matrix of the score function, apparently greedy in both calibration and computing. And not so clearly validated when the generating model is misspecified.

auxiliary likelihood ABC in print

Posted in Statistics on March 1, 2019 by xi'an

Our paper with Gael Martin, Brendan McCabe, David Frazier and Worapree Maneesoonthorn, with full title Auxiliary Likelihood-Based Approximate Bayesian Computation in State Space Models, has now appeared in JCGS. To think that it started in Rimini in 2009, when I met Gael for the first time at the Rimini Bayesian Econometrics conference (although we really started working on the paper in 2012, when I visited Monash), makes me realise the enormous investment we made in this paper, especially by Gael, whose stamina and enthusiasm never cease to amaze me!