Poster
Transformer Efficiently Learns Low-dimensional Target Functions In-context
Kazusato Oko · Yujin Song · Taiji Suzuki · Denny Wu
Fri 13 Dec 4:30 p.m. — 7:30 p.m. PST
Abstract:
Transformers can efficiently learn in-context from example demonstrations. Most existing theoretical analyses studied the in-context learning (ICL) ability of transformers for linear function classes, where it is typically shown that the minimizer of the pretraining loss implements one gradient descent step on the least squares objective. However, this simplified linear setting arguably does not demonstrate the statistical efficiency of ICL, since the trained transformer does not outperform directly doing linear regression on the test prompt. We study ICL of a nonlinear function class via a transformer with a nonlinear MLP layer: given a class of single-index target functions $f_*(x) = \sigma_*(\langle x,\beta\rangle)$, where the index features $\beta\in\mathbb{R}^d$ are drawn from a rank-$r$ subspace, we show that a nonlinear transformer optimized by gradient descent on the empirical loss learns $f_*$ in-context with a prompt length that only depends on the dimension $r$ of the function class; in contrast, an algorithm that directly learns $f_*$ on the test prompt incurs a statistical complexity that scales with the ambient dimension $d$. Our result highlights the adaptivity of ICL to the low-dimensional structure of the function class.
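A minimal sketch (not taken from the paper) of how the ICL task described above could be instantiated: each prompt consists of demonstrations $(x_i, f_*(x_i))$ for a single-index target $f_*(x) = \sigma_*(\langle x,\beta\rangle)$ with $\beta$ drawn from a fixed rank-$r$ subspace shared across tasks. The choice of link function $\sigma_*$ (here `tanh`), Gaussian inputs, and the sampling scheme are illustrative assumptions only.

```python
# Illustrative sketch of the single-index in-context learning task;
# the link function, input distribution, and sampling details are assumptions.
import numpy as np

def make_subspace(d, r, rng):
    # Fixed rank-r subspace of R^d shared across all tasks (orthonormal columns).
    basis, _ = np.linalg.qr(rng.standard_normal((d, r)))
    return basis

def sample_prompt(basis, n_examples, rng, sigma_star=np.tanh):
    d, r = basis.shape
    # Index vector beta drawn from the shared r-dimensional subspace.
    coeffs = rng.standard_normal(r)
    beta = basis @ (coeffs / np.linalg.norm(coeffs))
    # In-context demonstrations: Gaussian inputs with single-index labels.
    X = rng.standard_normal((n_examples, d))
    y = sigma_star(X @ beta)
    # Query input whose label the trained transformer should predict in-context.
    x_query = rng.standard_normal(d)
    y_query = sigma_star(x_query @ beta)
    return X, y, x_query, y_query

rng = np.random.default_rng(0)
basis = make_subspace(d=64, r=4, rng=rng)
X, y, x_query, y_query = sample_prompt(basis, n_examples=32, rng=rng)
```

Under this setup, the paper's claim is that the pretrained transformer needs a prompt length scaling with $r$, whereas fitting $f_*$ directly from the demonstrations in a single prompt would require sample size scaling with $d$.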