A precise high-dimensional asymptotic theory for Adaboost
With Pragya Sur (Harvard University)
This talk will introduce a precise high-dimensional asymptotic theory for AdaBoost on separable data, taking both statistical and computational perspectives. We will consider the common modern setting where the number of features p and the sample size n are both large and comparable, and in particular look at scenarios where the data is separable in an asymptotic sense. Under a class of statistical models, we will provide an (asymptotically) exact analysis of the generalization error of AdaBoost when the algorithm interpolates the training data and maximizes an empirical L1 margin. On the computational front, we will provide a sharp analysis of the stopping time when boosting approximately maximizes the empirical L1 margin. Our theory provides several insights into the properties of boosting; for instance, the larger the dimensionality ratio p/n, the faster the optimization reaches interpolation. At the heart of our theory lies an in-depth study of the maximum L1 margin, which can be accurately described by a new system of non-linear equations; we analyze this margin and the properties of this system using Gaussian comparison techniques and a novel uniform deviation argument. Time permitting, I will present a new class of boosting algorithms that correspond to Lq geometry, for q > 1, together with results on their high-dimensional generalization and optimization behavior.
This is based on joint work with Tengyuan Liang.
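The abstract does not define the central quantity explicitly; for orientation, a standard formulation of the maximum empirical L1 margin on training data (x_i, y_i), i = 1, ..., n, with labels y_i in {-1, +1}, is sketched below (the notation x_i, y_i, theta is illustrative and not taken from the talk):

% Maximum empirical L1 margin; the data are (asymptotically) separable
% precisely when this quantity is positive.
\gamma_n \;=\; \max_{\|\theta\|_1 \le 1} \; \min_{1 \le i \le n} \; y_i \, x_i^{\top} \theta

In the framing of the abstract, AdaBoost run to interpolation approximately maximizes this margin, which is why both the generalization-error and stopping-time analyses are tied to it.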
- Speaker: Pragya Sur (Harvard University)
- Friday 22 January 2021, 16:00–17:00
- Venue: https://maths-cam-ac-uk.zoom.us/j/92821218455?pwd=aHFOZWw5bzVReUNYR2d5OWc1Tk15Zz09.
- Series: Statistics; organiser: Dr Sergio Bacallado.