Fri 7:00 a.m. - 7:05 a.m. | Opening remarks
Fri 7:05 a.m. - 7:25 a.m. | Attention in Task-sets, Planning, and the Prefrontal Cortex (Invited talk) | Ida Momennejad
Fri 7:25 a.m. - 7:45 a.m. | Relating transformers to models and neural representations of the hippocampal formation (Invited talk) | James Whittington
Fri 7:45 a.m. - 8:05 a.m. | Eye Gaze in Human-Robot Collaboration (Invited talk) | Henny Admoni
Fri 8:05 a.m. - 8:25 a.m. | Attending to What's Not There (Invited talk) | Tobias Gerstenberg
Fri 8:25 a.m. - 8:35 a.m. | Foundations of Attention Mechanisms in Deep Neural Network Architectures (Spotlight) | Pierre Baldi · Roman Vershynin
Fri 8:35 a.m. - 8:45 a.m. | Is Attention Interpretation? A Quantitative Assessment On Sets (Spotlight) | Jonathan D. Haab · Nicolas Deutschmann · Maria Rodriguez Martinez
Fri 9:00 a.m. - 10:00 a.m. | Panel I (in-person) (Panel discussion)
Fri 10:00 a.m. - 11:00 a.m. | Lunch
Fri 11:00 a.m. - 12:00 p.m. | Poster session + coffee break (Poster session)
Fri 12:00 p.m. - 12:20 p.m. | Exploiting Human Interactions to Learn Human Attention (Invited talk) | Shalini De Mello
Fri 12:20 p.m. - 12:40 p.m. | BrainProp: How Attentional Processes in the Brain Solve the Credit Assignment Problem (Invited talk) | Pieter Roelfsema
Fri 12:40 p.m. - 1:00 p.m. | Attention as Interpretable Information Processing in Machine Learning Systems (Invited talk) | Erin Grant
Fri 1:00 p.m. - 1:20 p.m. | Accelerating human attention research via ML applied to smartphones (Invited talk) | Vidhya Navalpakkam
Fri 1:20 p.m. - 1:30 p.m. | Wide Attention Is The Way Forward For Transformers (Spotlight) | Jason Brown · Yiren Zhao · I Shumailov · Robert Mullins
Fri 1:30 p.m. - 1:40 p.m. | Fine-tuning hierarchical circuits through learned stochastic co-modulation (Spotlight) | Caroline Haimerl · Eero Simoncelli · Cristina Savin
Fri 1:40 p.m. - 1:50 p.m. | Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement (Spotlight) | Michael Chang · Alyssa L Dayan · Franziska Meier · Tom Griffiths · Sergey Levine · Amy Zhang
Fri 2:00 p.m. - 3:00 p.m. | Poster session + coffee break (Poster session)
Fri 3:00 p.m. - 3:55 p.m. | Panel II (virtual) (Panel discussion)
Fri 3:55 p.m. - 4:00 p.m. | Closing remarks
- | Bounded logit attention: Learning to explain image classifiers (Poster) | Thomas Baumhauer · Djordje Slijepcevic · Matthias Zeppelzauer
- | TDLR: Top Semantic-Down Syntactic Language Representation (Poster) | Vipula Rawte · Megha Chakraborty · Kaushik Roy · Manas Gaur · Keyur Faldu · Prashant Kikani · Amit Sheth
- | Attention for Compositional Modularity (Poster) | Oleksiy Ostapenko · Pau Rodriguez · Alexandre Lacoste · Laurent Charlin
- | Systematic Generalization and Emergent Structures in Transformers Trained on Structured Tasks (Poster) | Yuxuan Li · James McClelland
- | The Paradox of Choice: On the Role of Attention in Hierarchical Reinforcement Learning (Poster) | Andrei Nica · Khimya Khetarpal · Doina Precup
- | FuzzyNet: A Fuzzy Attention Module for Polyp Segmentation (Poster) | Krushi Patel · Guanghui Wang · Fengjun Li
- | Is Attention Interpretation? A Quantitative Assessment On Sets (Poster) | Jonathan D. Haab · Nicolas Deutschmann · Maria Rodriguez Martinez
- | Wide Attention Is The Way Forward For Transformers (Poster) | Jason Brown · Yiren Zhao · I Shumailov · Robert Mullins
- | Attention as inference with third-order interactions (Poster) | Yicheng Fei · Xaq Pitkow
- | Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement (Poster) | Michael Chang · Alyssa L Dayan · Franziska Meier · Tom Griffiths · Sergey Levine · Amy Zhang
- | Improving cross-modal attention via object detection (Poster) | Yongil Kim · Yerin Hwang · Seunghyun Yoon · HyeonGu Yun · Kyomin Jung
- | Graph Attention for Spatial Prediction (Poster) | Corban Rivera · Ryan Gardner
- | Faster Attention Is What You Need: A Fast Self-Attention Neural Network Backbone Architecture for the Edge via Double-Condensing Attention Condensers (Poster) | Alexander Wong · Mohammad Javad Shafiee · Saad Abbasi · Saeejith Nair · Mahmoud Famouri
- | Fine-tuning hierarchical circuits through learned stochastic co-modulation (Poster) | Caroline Haimerl · Eero Simoncelli · Cristina Savin
- | First De-Trend then Attend: Rethinking Attention for Time-Series Forecasting (Poster) | Xiyuan Zhang · Xiaoyong Jin · Karthick Gopalswamy · Gaurav Gupta · Youngsuk Park · Xingjian Shi · Hao Wang · Danielle Maddix · Yuyang (Bernie) Wang
- | Quantifying attention via dwell time and engagement in a social media browsing environment (Poster) | Ziv Epstein · Hause Lin · Gordon Pennycook · David Rand
- | Revisiting Attention Weights as Explanations from an Information Theoretic Perspective (Poster) | Bingyang Wen · Koduvayur (Suba) Subbalakshmi · Fan Yang
- | Foundations of Attention Mechanisms in Deep Neural Network Architectures (Poster) | Pierre Baldi · Roman Vershynin
- | Unlocking Slot Attention by Changing Optimal Transport Costs (Poster) | Yan Zhang · David Zhang · Simon Lacoste-Julien · Gertjan Burghouts · Cees Snoek