

Poster in Workshop: Machine Learning in Structural Biology

Bio-xLSTM: Generative modeling, representation and in-context learning of biological and chemical sequences

Niklas Schmidinger · Lisa Schneckenreiter · Philipp Seidl · Johannes Schimunek · Sohvi Luukkonen · Pieter-Jan Hoedt · Johannes Brandstetter · Andreas Mayr · Sepp Hochreiter · Günter Klambauer


Abstract:

Language models for biological and chemical sequences enable crucial applications such as drug discovery, protein engineering, and precision medicine. Currently, these language models are predominantly based on Transformer architectures. However, Transformer-based language models are limited by their quadratic runtime dependency on context length, which complicates their use for genomic sequences and for in-context learning on proteins and chemical sequences. To alleviate this quadratic dependency, state-space models (SSMs), which scale only linearly with context length, have been suggested. However, in natural language modeling, SSMs have recently been outperformed by the novel xLSTM architecture, which also scales linearly with context length. So far, xLSTM has not been tailored to, or assessed for, modeling biological and chemical sequences. In this work, we propose Bio-xLSTM, a suite of language models for biological and chemical sequences based on the recently introduced xLSTM architecture. xLSTM is a recurrent network architecture with linear runtime dependency on context size and thus allows for applications to DNA sequences and in-context learning tasks in other biochemical domains. We performed extensive experiments in three large domains: genomics, proteins, and chemistry. The results show that Bio-xLSTM is a highly proficient generative model for DNA, protein, and chemical sequences, learns rich representations, and can perform in-context learning for proteins and small molecules.
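
For intuition on the linear-runtime claim, below is a minimal sketch of an mLSTM-style matrix-memory recurrence in the spirit of the xLSTM architecture. The function name, scalar gate parameterization, and normalization details are illustrative assumptions for this sketch, not the authors' Bio-xLSTM implementation: it only shows why a fixed-size recurrent state yields constant cost per token and hence linear cost in sequence length, in contrast to the quadratic pairwise interactions of self-attention.

```python
import numpy as np

def mlstm_scan(queries, keys, values, i_gate, f_gate, o_gate):
    """Simplified mLSTM-style recurrence (illustrative sketch only).

    Each step updates a fixed-size d x d matrix memory, so the cost per token
    is O(d^2) and the total cost grows linearly with sequence length T.
    """
    T, d = queries.shape
    C = np.zeros((d, d))          # matrix memory (fixed size, independent of T)
    n = np.zeros(d)               # normalizer state
    outputs = np.zeros((T, d))
    for t in range(T):
        q, k, v = queries[t], keys[t], values[t]
        i, f, o = i_gate[t], f_gate[t], o_gate[t]
        C = f * C + i * np.outer(v, k)        # write the value/key pair into memory
        n = f * n + i * k                     # track accumulated key mass
        h = C @ q / max(abs(n @ q), 1.0)      # read memory with the query, normalized
        outputs[t] = o * h                    # apply output gate
    return outputs

# Toy usage with random inputs and scalar gates squashed to (0, 1).
rng = np.random.default_rng(0)
T, d = 16, 8
x = rng.normal(size=(T, d))
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
out = mlstm_scan(x, x, x,
                 sig(rng.normal(size=T)),
                 sig(rng.normal(size=T)),
                 sig(rng.normal(size=T)))
print(out.shape)  # (16, 8)
```

Because the recurrent state (C, n) has a fixed size, doubling the sequence length roughly doubles the compute, whereas full self-attention over the same sequence would roughly quadruple it.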
