

Poster

Implicit Regularization Paths of Weighted Neural Representations

Jin-Hong Du · Pratik Patil

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Pretrained features extracted from neural networks have proven to be powerful representations for various downstream tasks. However, the high dimensionality of these features, such as neural tangent kernel representations, often poses computational and memory challenges in model fitting. In this paper, we investigate the implicit regularization effects of weighting pretrained features, with subsampling as a special case that alleviates this burden. By characterizing the implicit regularization due to weighted regression, we derive a path of equivalences connecting different weighting matrices and ridge regularization levels with matching effective degrees of freedom. For the special case of subsampling without replacement, our results apply to both random features and kernel features, resolving recent conjectures posed by Patil and Du (2023). Furthermore, we present a risk decomposition for an ensemble of weighted estimators and demonstrate that the risks are equivalent along the path for the full ensembles. Finally, for practical tuning, we develop an efficient cross-validation method and apply it to subsampled pretrained representations across several models (e.g., ResNet-50) and datasets (e.g., CIFAR-100), which confirms our theoretical findings and highlights the practical implications of the induced implicit regularization.
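The core idea of matching effective degrees of freedom can be illustrated numerically. The sketch below is not the authors' code: the toy Gaussian features, the variable names, and the use of NumPy/SciPy root finding are assumptions made for illustration. It fits ridge regression on a subsample of observations drawn without replacement, then searches for the full-data ridge penalty whose effective degrees of freedom match, which is one point on the equivalence path described in the abstract.

```python
# Illustrative sketch (assumed setup, not the paper's code): match the effective
# degrees of freedom of a subsampled ridge estimator with a full-data ridge
# estimator at a larger penalty.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

n, p = 500, 200                            # observations, feature dimension
X = rng.standard_normal((n, p))            # stand-in for pretrained features
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + 0.5 * rng.standard_normal(n)

def ridge_df(X, lam):
    """Effective degrees of freedom: tr(G (G + lam I)^{-1}) with G = X^T X / n."""
    n_obs, d = X.shape
    G = X.T @ X / n_obs
    return np.trace(np.linalg.solve(G + lam * np.eye(d), G))

def ridge_fit(X, y, lam):
    """Ridge coefficients (X^T X / n + lam I)^{-1} X^T y / n."""
    n_obs, d = X.shape
    G = X.T @ X / n_obs
    return np.linalg.solve(G + lam * np.eye(d), X.T @ y / n_obs)

# Subsampled estimator: ridge with penalty lam0 fit on k < n rows,
# drawn without replacement.
lam0, k = 0.1, 250
idx = rng.choice(n, size=k, replace=False)
df_sub = ridge_df(X[idx], lam0)

# Find lam_star so the full-data ridge estimator has the same effective df.
lam_star = brentq(lambda lam: ridge_df(X, lam) - df_sub, 1e-8, 1e4)
print(f"subsampled df = {df_sub:.2f}, matched full-data lambda = {lam_star:.3f}")

# The theory concerns the full (infinite) ensemble of subsampled estimators;
# a small finite ensemble only gives a rough numerical sanity check.
subsets = [rng.choice(n, size=k, replace=False) for _ in range(50)]
ensemble = np.mean([ridge_fit(X[s], y[s], lam0) for s in subsets], axis=0)
full = ridge_fit(X, y, lam_star)
print("coefficient distance (ensemble vs. matched ridge):",
      np.linalg.norm(ensemble - full))
```

In this toy run the matched penalty lam_star exceeds lam0, reflecting the additional implicit regularization induced by subsampling; the hypothetical cross-validation step from the abstract would tune along this path rather than over (subsample size, penalty) pairs separately.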
