Posterior sampling via autoregressive generation

With Kelly Zhang (Imperial College London)

Uncertainty quantification remains a critical challenge when using deep learning models, particularly in complex decision-making settings. We propose a new framework for learning bandit algorithms from massive historical data by combining classical ideas from multiple imputation with autoregressive generative sequence modeling. We demonstrate our approach in a cold-start recommendation problem: first, we use historical data to pretrain an autoregressive model to predict sequences of repeated feedback/rewards (e.g., responses to news articles shown to different users over time). In learning to make accurate predictions, the model implicitly learns an informed prior based on rich action features (e.g., article headlines) and how to sharpen beliefs as more rewards are gathered (e.g., clicks as each article is recommended). At decision time, the algorithm autoregressively samples (imputes) a hypothetical sequence of rewards for each action and chooses the action with the largest average imputed reward. Far from a heuristic, our approach is an implementation of Thompson sampling (with a learned prior), a prominent active exploration algorithm. We prove that our pretraining sequence loss directly controls online decision-making performance, and we demonstrate our framework on a news recommendation task in which we fine-tune a pretrained language model end to end to process news article headline text, improving performance.
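
To make the decision-time procedure concrete, here is a minimal Python sketch of imputation-based action selection, under stated assumptions: `sample_next_reward`, `select_action`, the feature vectors, the horizon, and the closed-form probability update inside `sample_next_reward` are all hypothetical stand-ins. In the actual framework, the next-reward distribution would come from the pretrained autoregressive sequence model rather than any closed-form rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_reward(action_features, observed_rewards):
    # Hypothetical stand-in for the pretrained autoregressive model:
    # form a feature-based prior click probability, shrink it toward the
    # empirical click rate as rewards accumulate, and sample a binary
    # reward. (A real model would be a learned sequence model.)
    prior = 1.0 / (1.0 + np.exp(-(action_features @ np.ones_like(action_features))))
    n = len(observed_rewards)
    p = (prior + sum(observed_rewards)) / (1 + n)  # toy posterior-mean update
    return rng.binomial(1, p)

def select_action(actions, histories, horizon=50):
    """Thompson-sampling-style action selection via imputation.

    For each action, autoregressively impute a hypothetical sequence of
    future rewards (conditioning on the rewards observed so far), then
    choose the action whose sequence has the largest mean reward.
    """
    best_action, best_mean = None, -np.inf
    for action, feats in actions.items():
        imputed = list(histories.get(action, []))
        for _ in range(horizon - len(imputed)):
            imputed.append(sample_next_reward(feats, imputed))
        mean_reward = np.mean(imputed)
        if mean_reward > best_mean:
            best_action, best_mean = action, mean_reward
    return best_action

# Usage: two candidate news articles with toy feature vectors and a few
# observed click/no-click outcomes each.
actions = {"article_A": np.array([0.2, -0.1]), "article_B": np.array([0.5, 0.3])}
histories = {"article_A": [1, 0, 1], "article_B": [0, 0]}
print(select_action(actions, histories))
```

Because each action's imputed mean is a draw from the distribution the sequence model implies for that action's long-run reward, taking the argmax reproduces the Thompson sampling step described in the abstract.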
