

Oral in Workshop: The Fourth Workshop on Efficient Natural Language and Speech Processing (ENLSP-IV): Highlighting New Architectures for Future Foundation Models

Bio-xLSTM: Generative modeling, representation and in-context learning of biological and chemical sequences

Niklas Schmidinger · Lisa Schneckenreiter · Philipp Seidl · Johannes Schimunek · Pieter-Jan Hoedt · Johannes Brandstetter · Andreas Mayr · Sohvi Luukkonen · Sepp Hochreiter · Günter Klambauer

Keywords: [ Efficient Solutions in other Modalities and Applications ]

Sat 14 Dec 1:42 p.m. PST — 1:48 p.m. PST

Abstract:

Language models for biological and chemical sequences enable crucial applications such as drug discovery, protein engineering, and precision medicine. Currently, these language models are predominantly based on Transformer architectures. While Transformers have yielded impressive results, their quadratic runtime dependency on sequence length complicates their use for long genomic sequences and for in-context learning on proteins and chemical sequences. Recently, the recurrent xLSTM architecture has been shown to perform favorably compared to Transformers and modern state-space models (SSMs) in the natural language domain. Similar to SSMs, xLSTMs have a linear runtime dependency on sequence length and allow for constant-memory decoding at inference time, which makes them prime candidates for modeling long-range dependencies in biological and chemical sequences. In this work, we tailor xLSTM to these domains and propose a suite of language models called Bio-xLSTM. Extensive experiments in three large domains (genomics, proteins, and chemistry) assess xLSTM's ability to model biological and chemical sequences. The results show that Bio-xLSTM is a highly proficient generative model for DNA, protein, and chemical sequences, learns rich representations, and can perform in-context learning for proteins and small molecules.
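
To illustrate the constant-memory decoding claim above, the following minimal sketch contrasts a fixed-size recurrent state with a Transformer-style key/value cache that grows with sequence length. The gated recurrent update here is a simplified toy cell chosen for illustration only; it is not the xLSTM (sLSTM/mLSTM) cell or the Bio-xLSTM implementation from the paper, and all names and dimensions are hypothetical.

```python
# Sketch: constant-memory recurrent decoding vs. a growing attention cache.
# Assumes a simplified gated recurrent update, not the actual xLSTM cell.

import numpy as np

rng = np.random.default_rng(0)
d = 16  # hypothetical model width

# Fixed parameters of a toy recurrent cell (illustrative only).
W_f, W_i, W_z = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

def recurrent_step(state: np.ndarray, x: np.ndarray) -> np.ndarray:
    """One decoding step: the state keeps the same size at every position."""
    f = 1.0 / (1.0 + np.exp(-(W_f @ x)))  # forget gate
    i = 1.0 / (1.0 + np.exp(-(W_i @ x)))  # input gate
    z = np.tanh(W_z @ x)                  # candidate update
    return f * state + i * z              # O(d) memory per step

def attention_cache_step(cache: list, x: np.ndarray) -> list:
    """Transformer-style decoding: the cache grows by one entry per token."""
    cache.append(x)                       # O(t * d) memory after t steps
    return cache

state, cache = np.zeros(d), []
for t in range(1000):                     # decode 1000 tokens
    x = rng.standard_normal(d)
    state = recurrent_step(state, x)
    cache = attention_cache_step(cache, x)

print("recurrent state size:", state.size)       # stays d = 16
print("attention cache size:", len(cache) * d)   # grows to 16000
```

The point of the sketch is only the memory profile: per-token cost of the recurrent update is independent of position, whereas the attention cache (and hence decoding cost) grows linearly with the number of tokens already generated.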
