

Poster in Workshop: Time Series in the Age of Large Models

PRIMUS: Pretraining IMU Encoders with Multimodal and Self-Supervised Learning

Arnav Das · Chi Ian Tang · Fahim Kawsar · Mohammad Malekzadeh


Abstract:

Sensing human body motion through Inertial Measurement Unit (IMU) time series in personal devices has enabled significant applications in health and wellness. While labeled IMU data is scarce, we can collect unlabeled or weakly labeled IMU data to model human motion. For video or text modalities, the "pretrain and adapt" approach utilizes large volumes of unlabeled or weakly labeled data for pretraining, building a strong feature extractor, followed by adaptation to specific tasks using limited labeled data. For IMU data, however, pretraining methods are not well understood, and open-source pretrained models that generalize across datasets are rarely publicly available. We propose PRIMUS: a method for PRetraining IMU encoderS through a systematic and unified evaluation of various self-supervised and multimodal pretraining objectives. By combining self-supervision, multimodal supervision, and nearest-neighbor supervision, PRIMUS significantly enhances downstream performance. With fewer than 500 labeled samples per class, PRIMUS can improve test accuracy by up to 15%, compared to state-of-the-art baselines. We aim to release the best-performing IMU encoders to benefit the broader community.
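The abstract does not spell out the exact pretraining objective, so the sketch below is only an illustration, not the authors' released code: it shows one plausible way the three supervision signals mentioned above (self-supervision over augmented IMU views, multimodal supervision against a paired modality embedding, and nearest-neighbor supervision) could be combined as a weighted sum of contrastive terms. Names such as `primus_style_loss`, `paired_embedding`, and `nn_positive_embedding` are hypothetical.

```python
# Hedged sketch of a combined IMU pretraining loss; all names and weights
# here are assumptions for illustration, not the paper's formulation.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.07):
    """Standard InfoNCE over a batch: each anchor's positive is the
    same-index row of `positive`; other rows serve as negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)

def primus_style_loss(imu_encoder, imu, imu_augmented, paired_embedding,
                      nn_positive_embedding, w_ssl=1.0, w_mm=1.0, w_nn=1.0):
    """Weighted sum of the three supervision signals named in the abstract.
    paired_embedding:       embedding of a weakly aligned modality
                            (e.g., video or text), assumed precomputed.
    nn_positive_embedding:  embedding of a nearest-neighbor sample retrieved
                            from the unlabeled pool, assumed precomputed."""
    z = imu_encoder(imu)                  # IMU embedding
    z_aug = imu_encoder(imu_augmented)    # embedding of an augmented view
    loss_ssl = info_nce(z, z_aug)                  # self-supervision
    loss_mm = info_nce(z, paired_embedding)        # multimodal supervision
    loss_nn = info_nce(z, nn_positive_embedding)   # nearest-neighbor supervision
    return w_ssl * loss_ssl + w_mm * loss_mm + w_nn * loss_nn
```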
