Archive for ranking and selection

how a hiring quota failed [or not]

Posted in Books, Statistics, University life on February 26, 2019 by xi'an

This week, Nature has a “career news” section dedicated to how hiring quotas [may have] failed for French university hiring. And based solely on a technical report by a Sciences Po Paris researcher. The hiring quota means that every hiring committee for a French public university position must include at least 40% members of each gender. (Plus at least 50% external members.) A threshold reduced to 30% in severely imbalanced fields like mathematics. The main conclusion of the report is that the reform has had a negative impact on the hiring imbalance between men and women in French universities, with “the higher the share of women in a committee, the lower women are ranked” (p.2). As head of the hiring board in maths at Dauphine, which officiates as a secretarial committee for assembling all hiring committees, I was interested in the reasons for this perceived impact, as I had not observed it at my [first order remote] level. As a warning, the discussion that follows makes little sense without a prior glance at the paper.

“Deschamps estimated that without the reform, 21 men and 12 women would have been hired in the field of mathematics. But with the reform, committees whose membership met the quota hired 30 men and 3 women” Nature

Skipping the non-quantitative and somewhat ideological part of the report, as well as the descriptive statistics, I looked mostly at the modelling behind the conclusions, as reported for instance in the above definite statement in Nature. Starting with a collection of assumptions and simplifications. A first dubious such assumption is that fields, and even more so universities, where a more-than-40% share of women on committees already existed before the 2015 reform could be used as “control groups”, given the huge potential for confounders, especially the huge imbalance in female-to-male ratios across fields. Second, the data only covers hiring histories for three French universities (out of 63 in total) over the years 2009-2018, and furthermore merges assistant professors (Maîtres de Conférences) with full professors, for whom hiring is de facto much more involved, with often one candidate being contacted [prior to the official advertising of the position] by the department as an expression of interest (or the reverse). Third, the remark that

“there are no significant differences between the percentage of women who apply and those who are hired” (p.9)

seems to make the whole discussion moot… and to contradict both the conclusion and the above assertion! Fourth, a candidate’s qualification (or quality) is equated with the h-index, which is highly reductive and, once again, open to considerable biases in terms of seniority and field, depending on the publication lag and on the percentage of publications in English versus the vernacular in the given field, as well as on the type of publications (from an average of 2.94 in business to 9.96 in physics). Fifth, the report equates academic connections [that may bias the ranking] with having the supervisor present in the hiring committee [which sounds like a clear conflict of interest] or the candidate applying to the [same] university that delivered his or her PhD, missing a myriad of other connections that make committee members often prone to impact the ranking by reporting facts from outside the application form.

“…controlling for field fixed effects and connections make the coefficient [of the percentage of women in the committee] statistically insignificant, though the point estimate remains high.” (p.17)

The models used by Pierre Deschamps are multivariate logit and probit regressions, where each jury attaches a utility to each of its candidates, made of a qualification term [for the position] and of a gender bias, most surprisingly multiplying candidate-gender and jury-gender dummies. The qualification term is expressed as a [jury-free] linear regression on covariates plus a jury fixed effect. Plus an error distributed as a Gumbel extreme-value variate that leads to a closed-form likelihood [and this seems to be the only reason for picking this highly skewed distribution]. The probit model is used to model the probability that one candidate has a better utility than another. The main issue with this modelling is the agglomeration of independence assumptions, as (i) candidates, hired or not, are not independent, from being evaluated over several positions all at once, with earlier selections and rankings all public, to having to rank themselves all the positions where they are eligible, to possibly being co-authors of other candidates; (ii) juries are not independent either, as the limited pool of external members, especially in gender-imbalanced fields, means that the same faculty member often sits on several juries at once and hence evaluates the same candidates, besides deciding on local rankings in connection with earlier rankings; and (iii) several juries of the same university are not independent when this university tries to impose a certain, if unofficial, gender quota that is obviously impossible to meet exactly. Plus again a unique modelling across disciplines. A side but not solely technical remark is that, among the covariates used to predict the ranking or first position of a female candidate, the percentage of female candidates appears, while being exogenous. Again, using a univariate probit to predict the probability that a candidate is ranked first ignores the comparison between a dozen candidates, both male and female, operated by the jury.
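The Gumbel device is worth unpacking: adding i.i.d. Gumbel errors to deterministic utilities is the standard random-utility trick that yields closed-form multinomial-logit choice probabilities. A minimal sketch (with made-up utilities for three hypothetical candidates, nothing taken from the report) checks the closed form by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up deterministic utilities for three hypothetical candidates
v = np.array([1.0, 0.3, -0.5])

# With i.i.d. Gumbel errors added, the probability that candidate j
# maximises total utility has the closed multinomial-logit form
# exp(v_j) / sum_k exp(v_k) -- the reason for this skewed error choice
logit_probs = np.exp(v) / np.exp(v).sum()

# Monte Carlo check: add Gumbel noise and record which candidate wins
n = 200_000
eps = rng.gumbel(size=(n, len(v)))
wins = np.bincount((v + eps).argmax(axis=1), minlength=len(v)) / n

print(np.round(logit_probs, 3))
print(np.round(wins, 3))
```

The probit version replaces the Gumbel errors with Normal ones, at the cost of losing the closed form beyond pairwise comparisons.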
Overall, I find little reason to give (significant) weight to the indicator that the president is a woman in the logistic regression, and even less to believe that a better gender balance in the juries has led to a worse gender balance in the hirings. For one thing, from one model to the next the coefficients change from significant to non-significant and, again, I find the definition of the control group fairly crude and unsatisfactory, if only because juries change from one session to the next (and there is little reason to believe one field more gender-biased than another, with everything else accounted for). For another, my own experience within hiring committees, in Dauphine or elsewhere, has never been one where the president strongly impacts the decision. If anything, the president is often more neutral (and never ever, in my opinion, makes use of the additional vote to break ties!)…

selected parameters from observations

Posted in Books, Statistics on December 7, 2018 by xi'an

I recently read a fairly interesting paper by Daniel Yekutieli on a Bayesian perspective on parameters selected after viewing the data, published in Series B in 2012. (Disclaimer: I was not involved in processing this paper!)

The first example is to differentiate the Normal-Normal mean posterior when θ is N(0,1) and x is N(θ,1) from the restricted posterior when θ is N(0,1) and x is N(θ,1) truncated to (0,∞), by restating the latter as repeated generation from the joint until x>0. This does not sound particularly controversial, except for the notion of selecting the parameter after viewing the data. That the posterior support may depend on the data is not that surprising…!
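A quick way to see why the joint-regeneration reading is uncontroversial is to simulate it: conditioning on the selection event x>0 changes the marginal of x but leaves the conditional of θ given x untouched. A minimal sketch, with illustrative numbers of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Joint model: theta ~ N(0,1), x | theta ~ N(theta, 1)
n = 500_000
theta = rng.normal(size=n)
x = rng.normal(theta, 1.0)

# Restating the truncated model as regeneration from the joint until x > 0
keep = x > 0

# Among the retained pairs, theta | x is still N(x/2, 1/2), exactly as in
# the untruncated Normal-Normal model: check near an arbitrary value x0 = 1
x0 = 1.0
band = keep & (np.abs(x - x0) < 0.05)
print(theta[band].mean())   # should sit near x0/2 = 0.5
```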

“The observation that selection affects Bayesian inference carries the important implication that in Bayesian analysis of large data sets, for each potential parameter, it is necessary to explicitly specify a selection rule that determines when inference  is provided for the parameter and provide inference that is based on the selection-adjusted posterior distribution of the parameter.” (p.31)

The more interesting distinction is between “fixed” and “random” parameters (Section 2.1), which separates cases where the data come from a truncated distribution (given the parameter) from cases where the joint distribution is truncated but misses the normalising constant (a function of θ) of the truncated sampling distribution. The “mixed” case introduces a hyperparameter λ, and the normalising constant integrates out θ and depends on λ, which amounts to switching to another (marginal) prior on θ. This is quite interesting, even though one can debate the very notion of treating “random” and “mixed” “parameters”, which are those whose posterior most often changes, as true parameters. Take for instance Stephen Senn’s example (p.6) of the mean associated with the largest observation in a Normal sample with distinct means. When accounting for the distribution of the largest variate, this random variable is no longer a Normal variate with a single unknown mean but instead depends on all the means of the sample. Speaking of the mean of the largest observation is therefore misleading, in that it is neither the mean of the largest observation nor a parameter per se, since the index [of the largest observation] is a random variable induced by the observed sample.
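Senn’s example is easy to visualise by simulation: with distinct means (values made up here), the index of the largest observation is genuinely random and the largest observation itself is not a Normal variate centred at any single mean:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up distinct means for a sample of five Normal observations
mu = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
n = 200_000
x = rng.normal(mu, 1.0, size=(n, len(mu)))

idx = x.argmax(axis=1)      # index of the largest observation: random!
x_max = x.max(axis=1)       # the largest observation itself

# Every mean gets selected with positive probability
print(np.bincount(idx, minlength=len(mu)) / n)

# And the mean of the maximum exceeds even the largest individual mean
print(x_max.mean())
```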

In conclusion, a very original article, if difficult to assess as it can be argued that selection models other than the “random” case result from an intentional modelling choice of the joint distribution.

 

spacings on a torus

Posted in Books, Kids, R, Statistics, University life on March 22, 2018 by xi'an

While in Brussels last week, I noticed an interesting question on X validated that I considered in the train back home and then more over the weekend. This is a question about spacings, namely how long on average it takes to cover an interval of length L when drawing unit intervals at random (with a torus handling of the endpoints). Which immediately reminded me of Wilfrid Kendall's (Warwick) famous gif animation of coupling from the past, via leaves covering a square region, from the top (forward) and from the bottom (backward)…

The problem is rather easily expressed in terms of uniform spacings, more specifically in terms of the maximum spacing being less than 1 (or 1/L, depending on the parameterisation). Except for the additional constraint at the boundary, which is not independent of the other spacings. Replacing this extra event with an independent spacing, there exists a direct formula for the expected stopping time, which can be checked rather easily by simulation. But the exact case appears to add a few more steps to the draws, 3/2 apparently. The following graph displays the regression of the Monte Carlo number of steps over 10⁴ replicas against the exact values:
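Out of curiosity, the torus version is straightforward to simulate, since on a circle the coverage event is exactly that all spacings between the uniformly drawn left endpoints are at most 1. A minimal sketch of my own (not the code behind the graph), for which the exact expectation at L=2 works out, if I am not mistaken, to E[N] = 2 + Σₙ₌₂ n/2ⁿ⁻¹ = 5:

```python
import numpy as np

rng = np.random.default_rng(4)

def cover_time(L, rng):
    """Number of unit arcs, dropped uniformly on a circle of circumference L,
    needed to cover the whole circle (torus handling of the endpoints)."""
    starts = []
    while True:
        starts.append(rng.uniform(0, L))
        s = np.sort(starts)
        # spacings between consecutive left endpoints, wrapping around
        gaps = np.diff(np.append(s, s[0] + L))
        # covered iff every spacing is bridged by the unit arc on its left
        if gaps.max() <= 1.0:
            return len(starts)

L = 2.0
sims = [cover_time(L, rng) for _ in range(2000)]
print(np.mean(sims))
```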

no country for old biases

Posted in Books, Kids, Statistics on March 20, 2018 by xi'an

Following an X validated question, I read a 1994 paper by Phil Dawid on selection paradoxes in Bayesian statistics, which at first sounded like another version of the stopping rule paradox. And upon reading, less so. As described above, the issue lies with drawing inference on the index and value, (i⁰,μ⁰), of the largest mean of a sample of Normal rvs. What I find surprising in Phil’s presentation is that the Bayesian analysis does not sound that Bayesian. If given the whole sample, a Bayesian approach should produce a posterior distribution on (i⁰,μ⁰), rather than follow estimation steps, from estimating i⁰ to estimating the associated mean. And, if needed, estimators should come from the definition of a particular loss function. When, instead, given the largest point in the sample, and only that point, its distribution changes, so I am fairly bemused by the statement that no adjustment is needed.
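To make the contrast concrete, here is a minimal sketch (with invented data and prior scales) of what a joint posterior on (i⁰,μ⁰) looks like when simulated directly, rather than obtained by a two-step estimation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: one observation per mean, under independent N(0, 10^2)
# priors on the means and unit-variance Normal observations
y = np.array([0.2, 1.1, 0.9])
s2 = 1.0 / (1.0 + 1.0 / 100.0)          # posterior variance of each mean
post = rng.normal(s2 * y, np.sqrt(s2), size=(200_000, len(y)))

i0 = post.argmax(axis=1)    # posterior draws of the index of the largest mean
mu0 = post.max(axis=1)      # posterior draws of the largest mean itself

# A genuine joint posterior on (i0, mu0), in one go
print(np.bincount(i0, minlength=len(y)) / len(i0))
print(mu0.mean())
```

Note that the posterior mean of μ⁰ exceeds the largest individual posterior mean, which is the selection effect a two-step plug-in approach ignores.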

The prior modelling is also rather surprising in that the priors on the means should be joint rather than a product of independent Normals, since these means are compared and hence comparable. For instance, a hierarchical prior seems more appropriate, with location and scale to be estimated from the whole data, creating a connection between the means… A relevant objection to the use of independent improper priors is that the maximum mean μ⁰ then does not have a well-defined measure. However, I do not think a criticism of some priors versus others is a relevant attack on this “paradox”.

Who’s #1?

Posted in Books, Kids, Statistics, University life on May 2, 2012 by xi'an

First, apologies for this teaser of a title! This post is not about who is #1 in whatever category you can think of, from statisticians to climbs [the Eiger Nordwand, to be sure!], to runners (Gebrselassie?), to books… (My daughter simply said “c’est moi!” [that’s me!] when she saw the cover of this book on my desk.) So this is in fact a book review of… a book with this catchy title that I received a month or so ago!

“We decided to forgo purely statistical methodology, which is probably a disappointment to the hardcore statisticians.” A.N. Langville & C.D. Meyer, Who’s #1? The Science of Rating and Ranking (page 225)

This book may be one of the most boring ones I have had to review so far! The reason for this disgruntled introduction to “Who’s #1? The Science of Rating and Ranking” by Langville and Meyer is that it has very little, if anything, to do with statistics and modelling. (And also that it is mostly about American football, a sport I am not even remotely interested in.) The purpose of the book is to present ways of building ratings and rankings within a population, based on pairwise numerical connections between some members of this population. The methods abound, at least eight are covered by the book, but they all suffer from the same drawback: they are connected to no grand truth, to no parameter from an underlying probabilistic model, to no loss function that would measure the impact of a “wrong” rating. (The closest it comes to this is when discussing spread betting in Chapter 9.) It is thus a collection of transformation rules, from matrices to ratings. I find this all the more disappointing in that there exists a branch of statistics called ranking and selection that specializes in this kind of problem, and that statistics in sports is a quite active branch of our profession, witness the numerous books by Jim Albert. (Not to mention Efron’s analysis of baseball data in the 1970s.)

“First suppose that in some absolutely perfect universe there is a perfect rating vector.” A.N. Langville & C.D. Meyer, Who’s #1? The Science of Rating and Ranking (page 117)

The style of the book is disconcerting at first, and then some, as it sounds written partly from Internet excerpts (at least for most of the pictures) and partly from local student dissertations… The mathematical level varies widely, in that the authors take pains to define what a matrix is (page 33), only to jump to the Perron-Frobenius theorem a few pages later (page 36). The book also mentions Laplace’s succession rule (only justified as a shrinkage towards the center, i.e. away from 0 and 1), the Sinkhorn-Knopp theorem, the traveling salesman problem, Arrow and Condorcet, relaxation and evolutionary optimization, and even Kendall’s and Spearman’s rank tests (Chapter 16), even though no statistical model is involved. (Nothing as terrible as the completely inappropriate use of Spearman’s rho coefficient in one of Belfiglio’s studies…)

“Since it is hard to say which ranking is better, our point here is simply that different methods can produce vastly different rankings.” A.N. Langville & C.D. Meyer, Who’s #1? The Science of Rating and Ranking (page 78)

I also find irritating the association of “science” with “rating”, because the techniques presented in this book are simply tricks to turn pairwise comparisons into a general ordering of a population, with nothing to do with uncovering ruling principles explaining the differences between individuals. Since there is no validation of one ordering against another, we can see no rationale in proposing any of them, except to set a convention. The fascination of the authors with the Markov chain approach to the ranking problem is difficult to fathom, as the underlying structure is not dynamical (there is no evolving ranking along games in this book) and the Markov transition matrix is just constructed to derive a stationary distribution, inducing a particular “Markov” ranking.
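For concreteness, the construction can be sketched in a few lines (with a made-up loss matrix for three hypothetical teams): send a random surfer from each team to the teams that beat it, and rank by the stationary distribution that Perron-Frobenius guarantees:

```python
import numpy as np

# Made-up results: entry [i, j] counts how often team i lost to team j
losses = np.array([[0., 2., 1.],
                   [1., 0., 3.],
                   [0., 1., 0.]])

# Row-normalise into a transition matrix: from each team, move to one of
# the teams that beat it (every team has at least one loss in this toy)
P = losses / losses.sum(axis=1, keepdims=True)

# The stationary distribution is the leading left eigenvector (eigenvalue 1)
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Higher stationary mass = higher "Markov" rank
print(np.round(pi, 3))
```

The arbitrariness is visible even in this toy example: team 2 loses only once yet ranks below team 1, precisely because the chain funnels all of team 2’s mass through its single loss.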

“The Elo rating system is the epitome of simple elegance.” A.N. Langville & C.D. Meyer, Who’s #1? The Science of Rating and Ranking (page 64)

An interesting input of the book is its description of the Elo rating system used in chess, of which I knew nothing apart from its existence. Once again, there is a high degree of arbitrariness in the construction of the ranking, whose sole goal is to provide a convention upon which most people agree. A convention, mind, not a representation of truth! (This chapter contains a section on the Social Network movie, where a character writes a logistic transform on a window, missing the exponent. This should remind Andrew of someone he often refers to on his blog!)
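As far as I understand it, the system boils down to two lines: an expected score from a base-10 logistic curve with a 400-point scale (the very transform botched in the movie scene) and a linear update with gain K. A minimal sketch:

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo update; score_a is 1 for an A win, 0.5 for a draw, 0 for a loss."""
    # Expected score of A: logistic in base 10 with a 400-point scale
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    new_a = r_a + k * (score_a - expected_a)
    new_b = r_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Two equally rated players: a win moves each rating by k/2 = 16 points
print(elo_update(1500, 1500, 1))   # (1516.0, 1484.0)
```

The arbitrariness shows in the constants: nothing but convention fixes the 400 scale or K=32, and the total rating is conserved only because both players share the same K.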

“Perhaps the largest lesson is not to put an undue amount of faith in anyone’s rating.” A.N. Langville & C.D. Meyer, Who’s #1? The Science of Rating and Ranking (page 125)

In conclusion, I see little point in suggesting reading this book, unless one is interested in matrix optimization problems and/or illustrations in American football… Or unless one wishes to write a statistics book on the topic!