Archive for graphs

graphe, graphons, graphez !

Posted in Books, pictures, Statistics, University life on December 3, 2018 by xi'an

clustering dynamical networks

Posted in pictures, Statistics, University life on June 5, 2018 by xi'an


Yesterday I attended a plenary talk by Catherine Matias on dynamic graph structures at the 50th French statistical meeting, conveniently located a few blocks away from my office at ENSAE-CREST, in the nicely futuristic buildings of the EDF campus, which are supposed to represent cogs according to the architect, but which remind me more of those gas holders once so common in the UK! (The E of EDF stands for electricity, but the original public company handled both gas and electricity.) This was primarily a survey of the field, which is much more diverse and multifaceted than I realised, even though I had seen some recent developments by Antonietta Mira and her co-authors, and had refereed a thesis on temporal networks by Matteo Iacopini at Ca' Foscari, whose defence I will attend in early July.

The difficulty in the approaches covered by Catherine lies in the amount and complexity of the latent variables induced by the models superimposed on the data. In her paper with Christophe Ambroise, she followed a variational EM approach. From my spectator's perspective, I wondered about using ABC instead, which is presumably costly when the data size grows in space or in time. And about using tensor structures as in Matteo's thesis. This reminded me as well of Luke Bornn's modelling of basketball games, following each player in real time throughout the game. (Which does not prevent the existence of latent variables.) More vaguely and speculatively, I also wonder about the meaning of the chosen models, which try to represent "everything" in the observed process, an ambition that seems doomed from the start given the heterogeneity of the data. While reaching my Keynesian pessimistic low point (which happens rather quickly!), one could hope for projection techniques that reduce the dimension of the data of interest and of the parameters required by the model.
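
To give a rough sense of the scale of the latent structure involved, here is a minimal sketch of a dynamic stochastic block model, one possible member of the model classes surveyed in the talk. All choices here (Markov-switching group memberships, constant connection probabilities, the toy dimensions) are my own simplifying assumptions for illustration, not the specific model of the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy dimensions: n nodes, K latent groups, T time steps
n, K, T = 30, 3, 10
stay = 0.9                   # probability a node keeps its group
B = np.full((K, K), 0.05)    # between-group edge probability
np.fill_diagonal(B, 0.5)     # within-group edge probability

z = rng.integers(K, size=n)  # initial latent memberships
networks, memberships = [], []
for t in range(T):
    # each node keeps its group with probability `stay`, else resamples
    move = rng.random(n) > stay
    z = np.where(move, rng.integers(K, size=n), z)
    memberships.append(z.copy())
    # symmetric adjacency matrix without self-loops
    P = B[z][:, z]
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)
    A = A + A.T
    networks.append(A)

# the latent state alone already has T*n discrete components, on top
# of the model parameters; hence the appeal of variational schemes
# over exact posterior computation
print(sum(A.sum() for A in networks) // 2, "edges across", T, "snapshots")
```

Even in this toy setting, the posterior involves one discrete label per node and per time step, which hints at why variational EM is the go-to inferential tool.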

normalising constants of G-Wishart densities

Posted in Books, Statistics with tags , , , , , , on June 28, 2017 by xi'an

Abdolreza Mohammadi, Hélène Massam, and Gérard Letac arXived last week a paper on a new approximation of the ratio of the normalising constants of two G-Wishart densities corresponding to different graphs G. The G-Wishart distribution is Alberto Roverato's generalisation of the Wishart distribution to the case when some entries of the matrix are constrained to be zero, at locations determined by the graph G. While enjoying the same shape as the Wishart density, this generalisation does not enjoy a closed-form normalising constant. Which leads to an intractable ratio of normalising constants when doing Bayesian model selection across different graphs.

Atay-Kayis and Massam (2005) expressed this ratio as a ratio of two expectations, and the current paper shows that this representation leads to an approximation of the ratio of normalising constants for a graph G against the same graph augmented by the edge e, equal to

\frac{\Gamma\left(\frac{\delta+d}{2}\right)}{2\sqrt{\pi}\,\Gamma\left(\frac{\delta+d+1}{2}\right)}

where δ is the degree of freedom of the G-Wishart and d is the number of minimal paths of length 2 linking the two endpoints of e. This is remarkably concise and provides a fast approximation. (The proof, by comparison, is quite involved.) Which can then be used in reversible jump MCMC over graph space. The difficulty is obviously in evaluating the impact of the approximation on the target density, as there is no manageable alternative available to calibrate it. In a simulation example where such an alternative is available, though, the error is negligible.
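
Since the approximation only involves Gamma functions, it is essentially free to evaluate inside an MCMC sweep. Here is a minimal sketch of the quoted expression in Python (the function name and the log-scale computation are mine):

```python
from math import exp, lgamma, log, pi

def ratio_approximation(delta: float, d: int) -> float:
    """Approximation of the ratio of G-Wishart normalising constants
    for a graph G versus G augmented by an edge e, as quoted above:
    delta is the G-Wishart degree of freedom, d the number of minimal
    paths of length 2 linking the endpoints of e.
    Computed on the log scale for numerical stability."""
    log_ratio = (lgamma((delta + d) / 2)
                 - log(2) - 0.5 * log(pi)
                 - lgamma((delta + d + 1) / 2))
    return exp(log_ratio)

# e.g., delta = 3 with no length-2 path between the endpoints of e,
# where the expression reduces to Γ(1.5) / (2√π Γ(2)) = 1/4
print(ratio_approximation(3.0, 0))
```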

untangling riddle

Posted in Statistics with tags , , on December 23, 2016 by xi'an

This week's riddle is rather anticlimactic [my rewording]:

N wires lead from the top of a tower down to the ground floor. But they have become hopelessly tangled. The wires on the top floor are all labelled, from 1 to N. The wires on the bottom, however, are not.

You need to figure out which ground-floor wire’s end corresponds to which top-floor wire’s end. On the ground floor, you can tie two wire ends together, and you can tie as many of these pairs as you like. You can then walk to the top floor and use a circuit detector to check whether any two of the wires at the top of the tower are connected or not.

What is the smallest number of trips to the top of the tower you need to take in order to correctly label all the wires on the bottom?

Indeed, first, if N=2, the wires cannot be identified; if N=3, they can be identified in 2 trips; and if N=4, they can also be identified in 2 trips (pair only two wires first, which identifies that pair and the remaining two up to a permutation, then pair one wire from each group). In general, it seems that, by attaching the wires two by two, one can halve the number of unidentified wires at each trip. This is due to the identification of pairs: if wires a and b are connected in one step, identifying the other end of a automatically determines the other end of b, provided one keeps track of all connected pairs at the top. Hence, if N=2^k, in (k-2) trips we are down to 4 wires, and it takes an extra 2 trips to identify those 4, hence all the other wires. Which leads to identification in k trips. If N is not a power of 2, things should go faster, as "left-over" wires in the binary division can be paired with set-aside wires, hence identifying 3 wires on that trip, and so on further along the way… Which leads to a \lfloor \log_2(N) \rfloor number of trips maybe (or even less).
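
A minimal sketch of the trip count that this halving argument yields (in Python; the function name and the handling of the small cases are my own reading of the argument, and this is an upper bound, as the post itself suggests fewer trips may do):

```python
from math import ceil, log2

def trips_upper_bound(n: int) -> int:
    """Trips needed under the halving argument above: each trip with
    pairwise ties roughly halves the number of unidentified wires,
    so about log2(n) trips suffice; exploiting left-over wires when
    n is not a power of 2 may shave off a further trip."""
    if n == 1:
        return 0          # a single wire needs no labelling
    if n == 2:
        raise ValueError("two wires cannot be told apart")
    return max(2, ceil(log2(n)))

# matches the worked cases: N=3 and N=4 take 2 trips, N=2^k takes k
for n in (3, 4, 8, 100, 1024):
    print(n, trips_upper_bound(n))
```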

chain event graphs [RSS Midlands seminar]

Posted in pictures, Statistics, University life on October 16, 2013 by xi'an

Last evening, I attended the RSS Midlands seminar here in Warwick. The theme was chain event graphs (CEGs). As I knew nothing about them, it was worth my time listening to both speakers and discussing with Jim Smith afterwards. CEGs are extensions of Bayes nets, with originally many more nodes since they start from the probability tree involving all modalities of all variables. Intensive Bayesian model comparison is then used to reduce the number of nodes, by merging modalities with the same children or removing variables with no impact on the variable of interest. So this is not exactly a new Bayes net with modality dummies as nodes (my original question). This is quite interesting, especially the first talk's illustration of using missing-value indicators as supplementary variables (to determine whether or not data is missing at random).

I also wonder how much of a connection there is with variable-length Markov chains (either as a model or as a way to prune the tree). A last vague idea is a potential connection with lumpable Markov chains, a concept I learned from Kemeny & Snell (1960): a finite Markov chain is lumpable if, after merging two or more of its states, it remains a Markov chain. I do not know if this has ever been studied from a statistical point of view, i.e., testing for lumpability, but it sounds related to the idea of merging modalities of some variables in the probability tree…
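
For the record, Kemeny & Snell's lumpability condition is easy to check numerically for a given partition: the chain is lumpable if and only if, for every pair of blocks, the probability of jumping from a state into a whole block is the same for all states of the other block. A minimal sketch in Python (the function name, the tolerance, and the toy chain are mine):

```python
import numpy as np

def is_lumpable(P, partition, tol=1e-10):
    """Kemeny & Snell's criterion: a finite Markov chain with
    transition matrix P is lumpable with respect to a partition of
    its states iff, for every pair of blocks (A, B), the probability
    P(x, B) of jumping from x into the whole of B is the same for
    all states x in A."""
    for A in partition:
        for B in partition:
            into_B = P[np.ix_(A, B)].sum(axis=1)  # P(x, B) for x in A
            if not np.allclose(into_B, into_B[0], atol=tol):
                return False
    return True

# toy 4-state chain, lumpable for the partition {0,1} / {2,3}:
# both states of each block send mass 0.4/0.6 (resp. 0.6/0.4)
# into the two blocks
P = np.array([[0.1, 0.3, 0.2, 0.4],
              [0.2, 0.2, 0.5, 0.1],
              [0.3, 0.3, 0.1, 0.3],
              [0.5, 0.1, 0.2, 0.2]])
print(is_lumpable(P, [[0, 1], [2, 3]]))  # True
```

A statistical test of lumpability would presumably compare these block-wise jump probabilities across the states of each candidate block, which is indeed reminiscent of merging modalities with the same children in the CEG probability tree.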