

Poster

Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion

Dongyang Li · Chen Wei · Shiying Li · Jiachen Zou · Quanying Liu

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Decoding human vision from neural signals has long attracted interest in neuroscience and machine learning. Modern contrastive learning and generative models have improved visual decoding and reconstruction based on functional magnetic resonance imaging (fMRI). However, the high cost and low temporal resolution of fMRI limit its use in brain-computer interfaces (BCIs), prompting a pressing need for visual decoding based on electroencephalography (EEG). In this study, we present an end-to-end, zero-shot EEG-based visual reconstruction framework consisting of a tailored brain encoder, the Adaptive Thinking Mapper (ATM), which projects neural signals from different sources into a shared subspace as CLIP embeddings, and a two-stage EEG-to-image generation strategy. In stage one, EEG is embedded and aligned with high-level CLIP embeddings, and a prior diffusion model then refines the EEG embedding into an image prior; a blurry image is also decoded from EEG to preserve low-level features. In stage two, both the high-level CLIP embedding and the blurry image are fed into a pre-trained diffusion model. We further analyze the impact of different time windows and brain regions on decoding and reconstruction. The versatility of our framework is demonstrated on the magnetoencephalography (MEG) data modality. Experimental results show that our zero-shot EEG-based framework achieves state-of-the-art performance in classification, retrieval, and reconstruction, highlighting the portability, low cost, and high temporal resolution of EEG and enabling a wide range of BCI applications. Our code is available at https://anonymous.4open.science/r/Visual_Reconstruction-AC56.
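The stage-one alignment described above, matching EEG embeddings to high-level CLIP image embeddings, is typically trained with a symmetric contrastive (InfoNCE-style) objective. The sketch below, in plain NumPy, illustrates that objective under the assumption of paired EEG/image embedding rows; the function name and temperature value are illustrative, not taken from the authors' code.

```python
import numpy as np

def infonce_loss(eeg_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning EEG embeddings with image embeddings.

    Rows are paired: eeg_emb[i] corresponds to img_emb[i]. Each direction
    (EEG->image, image->EEG) is a softmax cross-entropy over the batch,
    with the matching pair as the positive class.
    """
    # L2-normalize so dot products are cosine similarities
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = eeg @ img.T / temperature        # (N, N) similarity matrix
    idx = np.arange(len(logits))              # positives lie on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()         # NLL of the diagonal pairs

    # average the two retrieval directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

In practice the loss is minimized over a trainable EEG encoder while the CLIP image embeddings stay frozen, so that EEG embeddings of a stimulus move toward the CLIP embedding of the image that evoked it.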
