Poster
in
Workshop: NeurIPS 2023 Workshop: Machine Learning and the Physical Sciences
Learning Dark Matter Representation from Strong Lensing Images through Self-Supervision
Yashwardhan Deshmukh · Kartik Sachdev · Michael Toomey · Sergei Gleyzer
Gravitational lensing is one of the most important probes of dark matter and has recently seen a surge in applications of machine learning techniques. It is typically studied in the context of supervised learning, but given the upcoming influx of gravitational lensing data from Euclid and LSST, manual labeling for deep learning tasks has become an unsustainable approach. To address this challenge, self-supervised learning (SSL) emerges as a scalable solution. By leveraging unlabeled strong lensing data to learn feature representations, self-supervised models have the potential to enhance our understanding of dark matter via the effect of its substructure in strong lensing images. This work implements contrastive learning, Bootstrap Your Own Latent (BYOL), Simple Siamese (SimSiam), and self-distillation with no labels (DINO) using ResNet50 and Vision Transformer (ViT) networks to acquire unsupervised embeddings for strong lensing images simulated for different dark matter models: ultra-light axions, cold dark matter, and halos without substructure. The learned representations of the encoder are fine-tuned with supervision and applied to classification and regression tasks, which are benchmarked against a fully supervised ResNet50 baseline. Our results show that the self-supervised methods can consistently outperform their supervised counterparts.
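To make the contrastive branch of the abstract concrete, the following is a minimal, illustrative sketch of the NT-Xent (InfoNCE) objective that contrastive methods of this family optimize: two augmented views of each image are embedded, and the loss pulls paired views together while pushing apart all other pairs in the batch. This is a hypothetical NumPy helper for illustration, not the authors' implementation; the function name, temperature value, and batch shapes are assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent (contrastive) loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N images
    (e.g. the output of a ResNet50 or ViT encoder + projection head).
    Hypothetical sketch, not the paper's actual training code.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)           # (2N, D)
    sim = z @ z.T / temperature                    # pairwise similarities
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity
    # the positive partner of row i is row (i + N) mod 2N
    pos = np.roll(np.arange(n), n // 2)
    # cross-entropy of each row's softmax against its positive partner
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()
```

As a sanity check, feeding two identical batches (perfectly aligned views) should yield a lower loss than feeding two unrelated batches, since the positives then sit at the maximum possible similarity.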