

Poster in Affinity Event: LatinX in AI

Adaptive LoRA Merging for Efficient Domain Incremental Learning

Luigi Quarantiello · Eric Nuertey Coleman · Julio Hurtado · Vincenzo Lomonaco


Abstract:

Merging Low-Rank Adaptation (LoRA) modules has become an active area of research and development. However, it is unclear how these methods behave in dynamic scenarios such as Domain Incremental Learning (DIL). Here, we address a key limitation of current merging algorithms: their reliance on fixed weights that usually assume equal importance across tasks. Our method dynamically computes the merging coefficients, allowing for continuous adaptation to new domains while adjusting the influence of previous ones. We evaluated our approach against several current state-of-the-art merging algorithms on two DIL benchmarks: PACS and OfficeHome. Our results show that the adaptive merging technique achieves performance comparable to or better than fixed-weight methods while eliminating the need for manual weight selection. In particular, our method maintains high accuracy with minimal memory requirements, using as little as one sample per class for coefficient learning. This work showcases a promising use of LoRA adapters and merging algorithms in continual learning, providing a valuable direction for future research.
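To make the coefficient-learning idea concrete, here is a minimal PyTorch sketch of adaptive LoRA merging, not the authors' implementation. The class and function names (AdaptiveLoRAMerge, fit_coefficients), the softmax parameterization of the coefficients, the frozen nn.Linear base layer, the hypothetical classifier head, and the optimization hyperparameters are all illustrative assumptions; the abstract only specifies that coefficients are learned dynamically from as little as one stored sample per class.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveLoRAMerge(nn.Module):
    """Merge several LoRA deltas into a frozen base layer with
    learnable per-adapter coefficients (illustrative sketch)."""

    def __init__(self, base: nn.Linear, loras):
        super().__init__()
        self.base = base.requires_grad_(False)  # frozen backbone layer
        # loras: list of (A, B) pairs with A: (r, in_dim), B: (out_dim, r)
        self.loras = [(A.detach(), B.detach()) for A, B in loras]
        # One learnable logit per adapter; a softmax keeps coefficients
        # positive and summing to one (an assumption of this sketch).
        self.logits = nn.Parameter(torch.zeros(len(loras)))

    def forward(self, x):
        alpha = torch.softmax(self.logits, dim=0)
        out = self.base(x)
        for a, (A, B) in zip(alpha, self.loras):
            # Add a * (B @ A) x, i.e. the adapter's low-rank delta.
            out = out + a * F.linear(F.linear(x, A), B)
        return out


def fit_coefficients(layer, head, buffer_x, buffer_y, steps=100, lr=1e-2):
    """Learn merging coefficients from a tiny buffer,
    e.g. one stored sample per class."""
    opt = torch.optim.Adam([layer.logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(head(layer(buffer_x)), buffer_y)
        loss.backward()
        opt.step()
    return torch.softmax(layer.logits.detach(), dim=0)  # final coefficients
```

Under these assumptions, only the handful of logits is optimized, so the memory and compute cost of adapting to a new domain stays small, in contrast to fixed-weight merging, which would assign every adapter the same coefficient regardless of domain.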
