Provable representation learning in deep learning
With Jason Lee (Princeton University)
Deep representation learning seeks to learn a data representation that transfers to downstream tasks. In this talk, we study two forms of representation learning: supervised pre-training and self-supervised learning.
Supervised pre-training uses a large labeled source dataset to learn a representation, then trains a classifier on top of the representation. We prove that supervised pre-training can pool the data from all source tasks to learn a good representation which transfers to downstream tasks with few labeled examples.
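A minimal illustrative sketch of this pipeline (not the speaker's construction; the linear tasks, dimensions, and training loop below are synthetic placeholders): pool several labeled source tasks that share a low-dimensional representation, learn that representation jointly with per-task heads, then fit only a small head on a few labeled target examples.

    # Illustrative sketch only: pool data from synthetic linear source tasks to
    # learn a shared representation B, then fit a small head on few target labels.
    import numpy as np

    rng = np.random.default_rng(0)
    d, k, n_src, n_tasks, n_tgt = 50, 5, 200, 10, 20

    # Ground truth: every task's predictor lies in the same k-dimensional subspace.
    B_true = np.linalg.qr(rng.normal(size=(d, k)))[0]
    source = []
    for _ in range(n_tasks):
        w = rng.normal(size=k)
        X = rng.normal(size=(n_src, d))
        y = X @ B_true @ w + 0.1 * rng.normal(size=n_src)
        source.append((X, y))

    # Pre-training: jointly fit the shared B and per-task heads W on the pooled
    # squared loss by gradient descent.
    B = rng.normal(size=(d, k)) / np.sqrt(d)
    W = rng.normal(size=(n_tasks, k))
    lr = 1e-2
    for _ in range(3000):
        gB = np.zeros_like(B)
        for t, (X, y) in enumerate(source):
            r = X @ B @ W[t] - y                      # residuals for task t
            gB += X.T @ np.outer(r, W[t]) / n_src     # gradient of loss w.r.t. B
            W[t] -= lr * (B.T @ (X.T @ r)) / n_src    # update task-t head
        B -= lr * gB / n_tasks

    # Downstream: freeze B and fit only a k-dimensional head on n_tgt examples.
    w_new = rng.normal(size=k)
    X_tgt = rng.normal(size=(n_tgt, d))
    y_tgt = X_tgt @ B_true @ w_new + 0.1 * rng.normal(size=n_tgt)
    head, *_ = np.linalg.lstsq(X_tgt @ B, y_tgt, rcond=None)
    print("few-shot target MSE:", np.mean((X_tgt @ B @ head - y_tgt) ** 2))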
Self-supervised learning creates auxiliary pretext tasks that do not require labeled data to learn representations. These pretext tasks are created solely from the input features, such as predicting a missing image patch, recovering the colour channels of an image, or predicting missing words. Surprisingly, predicting this known information helps in learning a representation effective for downstream tasks. We prove that, under a conditional independence assumption, self-supervised learning learns representations that are useful for downstream tasks.
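A hedged toy sketch of this idea (the two-view data model and all names below are illustrative, not the talk's): two views of each input are conditionally independent given a latent label; the pretext task predicts view 2 from view 1 without labels, and the prediction is then reused as a representation for a linear probe trained on only a handful of labeled examples.

    # Illustrative sketch: self-supervised pretext task under a conditional
    # independence assumption, followed by a few-shot linear probe.
    import numpy as np

    rng = np.random.default_rng(1)
    n_unlab, n_lab, n_test, d1, d2 = 5000, 30, 1000, 40, 40
    mu1 = rng.normal(size=(2, d1))                  # class-conditional means, view 1
    mu2 = rng.normal(size=(2, d2))                  # class-conditional means, view 2

    def sample(n):
        y = rng.integers(0, 2, size=n)              # latent label (unused by pretext task)
        x1 = mu1[y] + rng.normal(size=(n, d1))      # view 1 given y
        x2 = mu2[y] + rng.normal(size=(n, d2))      # view 2 given y, independent of x1 given y
        return x1, x2, y

    # Pretext task: linear regression predicting view 2 from view 1 (no labels used).
    X1, X2, _ = sample(n_unlab)
    enc, *_ = np.linalg.lstsq(X1, X2, rcond=None)   # learned map x1 -> predicted x2

    # Downstream: the pretext prediction serves as the representation; fit a
    # linear probe for the label on just n_lab examples.
    x1_tr, _, y_tr = sample(n_lab)
    x1_te, _, y_te = sample(n_test)
    phi_tr, phi_te = x1_tr @ enc, x1_te @ enc
    probe, *_ = np.linalg.lstsq(np.c_[phi_tr, np.ones(n_lab)], y_tr, rcond=None)
    pred = (np.c_[phi_te, np.ones(n_test)] @ probe > 0.5).astype(int)
    print("downstream accuracy with", n_lab, "labels:", np.mean(pred == y_te))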
- Speaker: Jason Lee (Princeton University)
- Friday 13 November 2020, 16:00–17:00
- Venue: https://maths-cam-ac-uk.zoom.us/j/92821218455?pwd=aHFOZWw5bzVReUNYR2d5OWc1Tk15Zz09.
- Series: Statistics; organiser: Dr Sergio Bacallado.