Poster
in
Workshop: UniReps: Unifying Representations in Neural Models

Investigating the role of modality and training objective on representational alignment between transformers and the brain

Willow Han · Ruchira Dhar · Qingqing Yang · Maryam Behbahani · Maria Alejandra Martinez Ortiz · Tolulope Oladele · Diana C Dima · Hsin-Hung Li · Anders Søgaard · Yalda Mohsenzadeh

Keywords: [ training objective ] [ modality ] [ representational alignment ] [ transformer ] [ fMRI ]


Abstract:

The remarkable performance of transformer models on both linguistic and real-world reasoning tasks, coupled with their ubiquitous use, has prompted much research on their alignment with brain activations. However, some questions remain unanswered: which aspect of these models drives representational alignment, the input modality or the training objective? Moreover, is the alignment limited to modality-specialized brain regions, or can model representations also align with brain regions involved in higher cognitive functions? To address these questions, we analyze the representations of different transformer architectures, including text-based and vision-based language models, and compare them with neural representations recorded across multiple brain regions during a visual processing task. Our findings reveal that both the modality of the training data and the training objective matter for alignment, and that models align with neural representations both within and beyond modality-specific regions. Additionally, training modality and objective affect alignment quality across successive layers, suggesting that multimodal data combined with a predictive-processing objective may confer superior representational capabilities compared to other training objectives.
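The abstract does not specify how model-brain alignment is computed; a minimal sketch of one common approach, representational similarity analysis (RSA), is below. All names and the synthetic data are illustrative assumptions, not the authors' pipeline: each system's responses to a shared stimulus set are summarized as a representational dissimilarity matrix (RDM), and alignment is the Spearman correlation between the two RDMs' upper triangles.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of each pair of stimuli."""
    return 1.0 - np.corrcoef(patterns)

def alignment(model_acts, brain_acts):
    """Spearman correlation between the upper triangles of the two RDMs,
    a standard representational-alignment score."""
    iu = np.triu_indices(model_acts.shape[0], k=1)
    a, b = rdm(model_acts)[iu], rdm(brain_acts)[iu]
    # Spearman = Pearson computed on ranks (double argsort ranks the values)
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Hypothetical data: 20 stimuli sharing a 50-dim latent structure,
# projected into a model layer (128 units) and fMRI voxels (300 voxels).
rng = np.random.default_rng(0)
stim = rng.standard_normal((20, 50))
model_layer = stim @ rng.standard_normal((50, 128))
fmri_voxels = stim @ rng.standard_normal((50, 300))
print(alignment(model_layer, fmri_voxels))  # shared structure -> positive score
```

In a layer-wise analysis like the one described, this score would be computed once per transformer layer and per brain region, tracing how alignment changes with depth.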
