## double descent

Last Friday, I [and a few hundred others!] went to the SMILE (Statistical Machine Learning in Paris) seminar where Francis Bach was giving a talk. (With a pleasant ride from Dauphine along the Seine river.) Francis was talking about the double descent phenomenon observed in recent papers by Belkin et al. (2018, 2019) and Mei & Montanari (2019). (As the seminar room at INRIA was quite crowded and as I was sitting X-legged on the floor close to the screen, I took a few slides from below!)

The phenomenon is that the usual U curve warning about over-fitting, reproduced in most statistics and machine-learning courses, can under the right circumstances be followed by a second decrease in the testing error when the number of features grows beyond the number of observations. This is rather puzzling and counter-intuitive, so I briefly checked the 2019 [8 page] article by Belkin et al., who study two examples, including a standard “large p, small n” Gaussian regression, where the authors state that

“However, as p grows beyond n, the test risk again decreases, provided that the model is fit using a suitable inductive bias (e.g., least norm solution).”
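The least-norm behaviour quoted above is easy to reproduce in a toy simulation. The sketch below (my own illustrative choices of sizes, signal, and noise, not the exact setting of the paper) fits a linear model with the minimum-norm least-squares estimator, i.e. the Moore–Penrose pseudo-inverse, for a growing number p of random features:

```python
import numpy as np

# Toy double-descent simulation (illustrative sizes, not the paper's setup):
# n observations, up to p_max random Gaussian features, signal in the first 5.
rng = np.random.default_rng(0)
n, n_test, p_max = 40, 1000, 120

beta = np.zeros(p_max)
beta[:5] = 1.0  # only the first 5 features carry signal

X_train = rng.standard_normal((n, p_max))
X_test = rng.standard_normal((n_test, p_max))
y_train = X_train @ beta + 0.5 * rng.standard_normal(n)
y_test = X_test @ beta + 0.5 * rng.standard_normal(n_test)

test_risk = {}
for p in (10, 20, 39, 40, 41, 80, 120):
    # Minimum-norm least-squares fit via the pseudo-inverse: the
    # "suitable inductive bias" of the quote above (for p > n it picks
    # the interpolating solution of smallest Euclidean norm).
    beta_hat = np.linalg.pinv(X_train[:, :p]) @ y_train
    test_risk[p] = np.mean((X_test[:, :p] @ beta_hat - y_test) ** 2)
```

In this sketch the test risk blows up near the interpolation threshold p = n and comes back down once p exceeds n, which is the second descent from the spike (not necessarily below the first, under-parameterised minimum).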

One explanation [I found after checking the paper] is that the variates (features) in the regression are selected at random rather than in an optimal sequential order, and double descent is missing with interpolating and deterministic estimators. Hence, on principle, all candidate variates need to be included to achieve minimal averaged error. The infinite spike occurs when the number p of variates is near the number n of observations. (The expectation also accounts for the randomisation in T, a randomisation that remains an unclear feature in this framework…)

### One Response to “double descent”

1. samuelwxy Says:

I’ve also experienced a similar phenomenon before when doing linear regression (or ridge regression with a small tuning parameter). The worst result is always achieved when n=p. My understanding is that the design matrix X (and hence the Gram matrix X’X) has the worst condition number when n=p. From a random matrix theory perspective, the smallest singular value of the design matrix is lower bounded (in expectation) by |√n − √p|, which reaches 0 when n=p. A bad condition number implies high variance in the optimization and worse results. The dimension difference (regardless of large p or large n) can actually induce a better condition number for the optimization problem.
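The commenter’s point about conditioning can be checked numerically. A minimal sketch (arbitrary sizes of my choosing): for a standard Gaussian design, the smallest singular value concentrates around |√n − √p|, so it collapses toward zero precisely when p is near n:

```python
import numpy as np

# Average smallest singular value of an n x p standard Gaussian design,
# for p below, at, and above n (sizes here are illustrative).
rng = np.random.default_rng(1)
n, reps = 50, 50
smin = {}
for p in (10, 50, 90):
    vals = [np.linalg.svd(rng.standard_normal((n, p)), compute_uv=False).min()
            for _ in range(reps)]
    smin[p] = float(np.mean(vals))
```

The average smallest singular value is markedly smaller at p = n = 50 than at p = 10 or p = 90, matching the |√n − √p| heuristic and the observed spike in estimation error at the interpolation threshold.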
