

Poster in Workshop: Mathematics of Modern Machine Learning (M3L)

Commute Your Domains: Trajectory Optimality Criterion for Multi-Domain Learning

Alexey Rukhovich · Alexander Podolskiy · Irina Piontkovskaya

Keywords: [ Domain interaction ] [ Gradient dynamics ] [ Lie bracket ] [ Multi-domain learning ]


Abstract:

In multi-domain learning, a single model is trained on diverse data domains to leverage shared knowledge and improve generalization. The order in which data from these domains is used for training can significantly affect the model's performance on each domain, yet this dependence remains under-studied. In this paper, we investigate the influence of training order (or data mixing) in multi-domain learning using the concept of the Lie bracket of gradient vector fields. By analyzing the infinitesimal effects of changing the training order, we identify regions in the parameter space where altering the order between two training domains can benefit the target loss. We validate the predictions of our theoretical framework on both a toy example and bilingual LLM pre-training.
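The sketch below illustrates the kind of Lie-bracket criterion the abstract refers to; it is not the authors' formulation, and the losses, variable names, and step sizes are placeholders. For gradient-descent vector fields X_A = -grad L_A and X_B = -grad L_B, the Lie bracket [X_A, X_B] = H_B grad L_A - H_A grad L_B (with H the Hessian) measures how far single steps on the two domains fail to commute: taking one step on domain A and then one on domain B, versus the reverse order, shifts the parameters by roughly eta^2 [X_A, X_B], so the sign of the inner product of this bracket with the target-loss gradient indicates which order is locally preferable.

```python
# Illustrative sketch only: toy losses and a JAX-based check of the
# second-order order-swap prediction, not the authors' implementation.
import jax
import jax.numpy as jnp

def loss_A(theta):
    # Toy "domain A" loss (placeholder).
    return 0.5 * jnp.sum((theta - 1.0) ** 2) + 0.1 * jnp.sum(theta ** 4)

def loss_B(theta):
    # Toy "domain B" loss with different curvature (placeholder).
    return 0.5 * jnp.sum((theta + 1.0) ** 2) + 0.2 * jnp.sum(jnp.sin(theta) ** 2)

def loss_T(theta):
    # Target loss on which the effect of training order is measured.
    return 0.5 * (loss_A(theta) + loss_B(theta))

def hvp(f, theta, v):
    # Hessian-vector product via forward-over-reverse autodiff.
    return jax.jvp(jax.grad(f), (theta,), (v,))[1]

def lie_bracket(theta):
    # [X_A, X_B] = H_B g_A - H_A g_B for X_A = -grad L_A, X_B = -grad L_B.
    g_A = jax.grad(loss_A)(theta)
    g_B = jax.grad(loss_B)(theta)
    return hvp(loss_B, theta, g_A) - hvp(loss_A, theta, g_B)

def order_swap_criterion(theta):
    # <grad L_T, [X_A, X_B]>: a negative value suggests that the order
    # "A then B" lowers the target loss more than "B then A" at this point.
    return jnp.dot(jax.grad(loss_T)(theta), lie_bracket(theta))

def two_steps(theta, eta, first, second):
    # One gradient step on `first`, then one on `second`.
    theta = theta - eta * jax.grad(first)(theta)
    return theta - eta * jax.grad(second)(theta)

theta0 = jnp.array([0.3, -0.7, 1.2])
eta = 1e-2
ab = two_steps(theta0, eta, loss_A, loss_B)
ba = two_steps(theta0, eta, loss_B, loss_A)
print("empirical :", (loss_T(ab) - loss_T(ba)) / eta**2)
print("predicted :", order_swap_criterion(theta0))
```

With a small step size (eta = 1e-2 here), the finite-difference estimate and the bracket-based prediction should agree up to O(eta) corrections; this only demonstrates the general second-order mechanism, whereas the paper develops a trajectory-level optimality criterion on top of it.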
