Implicit Regularization for Optimal Sparse Recovery
With Varun Kanade, University of Oxford
We present an implicit regularization scheme for gradient descent methods
applied to unpenalized least squares regression to solve the problem of
reconstructing a sparse signal from an underdetermined system of linear
measurements under the restricted isometry assumption. For a given
parameterization yielding a non-convex optimization problem, we show that
prescribed choices of initialization, step size and stopping time yield a
statistically and computationally optimal algorithm that achieves the minimax
rate at a computational cost that matches, up to poly-logarithmic factors, the
cost of reading the data. Beyond minimax optimality, we show that our algorithm adapts to
instance difficulty and yields a dimension-independent rate when the
signal-to-noise ratio is high enough. We validate our findings with numerical
experiments and compare our algorithm against explicit $\ell_1$ penalization.
Going from hard instances to easy ones, our algorithm is seen to undergo a
phase transition, eventually matching least squares with oracle knowledge of
the true support.
(based on joint work with Patrick Rebeschini and Tomas Vaskevicius)
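For readers curious about the mechanics, the scheme can be illustrated as plain gradient descent on a Hadamard-type parameterization $\beta = u \odot u - v \odot v$ of the unpenalized least squares loss, where a small initialization, a fixed step size, and the stopping time jointly act as the implicit regularizer. The following is a minimal NumPy sketch under that assumption; the function name, hyperparameter values, and toy data are illustrative choices, not the tuned settings from the talk.

```python
import numpy as np

def sparse_recovery_gd(X, y, alpha=1e-6, eta=0.1, n_steps=2000):
    """Gradient descent on the non-convex Hadamard parameterization
    beta = u*u - v*v of the unpenalized loss L(u, v) = ||X beta - y||^2 / (2n).
    The small initialization alpha, step size eta, and stopping time n_steps
    act as implicit regularization; all values here are illustrative
    assumptions, not the prescribed choices from the paper."""
    n, d = X.shape
    u = alpha * np.ones(d)
    v = alpha * np.ones(d)
    for _ in range(n_steps):
        r = X @ (u * u - v * v) - y   # residual
        g = X.T @ r / n               # gradient of L with respect to beta
        u = u - 2 * eta * g * u       # multiplicative updates: coordinates
        v = v + 2 * eta * g * v       # off the support stay near alpha**2
    return u * u - v * v

# Toy usage: recover a 5-sparse signal from 100 noisy Gaussian measurements.
rng = np.random.default_rng(0)
n, d, k = 100, 500, 5
X = rng.standard_normal((n, d))
beta = np.zeros(d)
beta[:k] = 1.0
y = X @ beta + 0.01 * rng.standard_normal(n)
beta_hat = sparse_recovery_gd(X, y)
print(np.linalg.norm(beta_hat - beta))  # small if stopped early enough
```

The multiplicative form of the updates is what makes early stopping behave like sparsity-inducing regularization: coordinates initialized at alpha grow geometrically only where the gradient signal is strong, so stopping before the noise-driven coordinates escape their tiny initialization leaves them effectively zero.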
- Speaker: Varun Kanade, University of Oxford
- Friday 29 May 2020, 14:00–15:00
- Venue: https://zoom.us/j/95022384263?pwd=N3Z6elB2Vy9Jajd6azlCNjFHQVlKdz09
- Series: Statistics; organiser: Dr Sergio Bacallado.