Poster

How Do Transformers Fill in the Blanks? A Case Study on Matrix Completion

Pulkit Gopalani · Ekdeep S Lubana · Wei Hu

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Completing masked sequences is an important problem in language modeling, and analyzing how Transformer models perform this task is crucial for understanding their mechanisms. In this direction, we formulate the low-rank matrix completion problem as a masked language modeling (MLM) task, and train a BERT model to solve this task. We find that BERT succeeds in matrix completion and outperforms the classical nuclear norm minimization method. Moreover, the loss curve displays an early plateau followed by a sudden drop to near-optimal values, despite no changes in the training procedure or hyper-parameters. To gain interpretability insights, we examine the model's predictions, attention heads, and hidden states before and after this transition. Concretely, we observe that (i) the model transitions from simply copying the masked input to accurately predicting the masked entries; (ii) the attention heads transition to interpretable patterns relevant to the task; and (iii) the embeddings and hidden states encode information relevant to the problem.
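The formulation described in the abstract — casting low-rank matrix completion as a BERT-style masked language modeling task — can be illustrated with a minimal data-construction sketch. This is an assumed, illustrative setup, not the paper's actual pipeline: the matrix size, rank, masking fraction, and the choice of `0.0` as a stand-in for the `[MASK]` token are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyper-parameters (not taken from the paper).
n, rank, mask_frac = 7, 2, 0.3

# Sample a random rank-2 matrix M = U V^T.
U = rng.normal(size=(n, rank))
V = rng.normal(size=(n, rank))
M = U @ V.T

# Flatten the matrix into a token sequence and mask a random
# subset of entries, mirroring masked language modeling.
seq = M.flatten()
mask = rng.random(seq.shape) < mask_frac
MASK_TOKEN = 0.0  # hypothetical placeholder for BERT's [MASK] token
inputs = np.where(mask, MASK_TOKEN, seq)

# The training objective: predict the masked entries from the
# visible ones, exploiting the low-rank structure of M.
targets = seq[mask]
```

Under this framing, each (inputs, targets) pair is one training example; a Transformer trained on many such pairs must implicitly learn to exploit the rank constraint, which is what the abstract's comparison against nuclear norm minimization probes.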
