

Poster

V-PETL Bench: A Unified Visual Parameter-Efficient Transfer Learning Benchmark

Yi Xin · Siqi Luo · Xuyang Liu · Haodi Zhou · Xinyu Cheng · Christina Lee · Junlong Du · Yuntao Du · Haozhe Wang · MingCai Chen · Ting Liu · Guimin Hu · Zhongwei Wan · Rongchao Zhang · Aoxue Li · Mingyang Yi · Xiaohong Liu

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Parameter-efficient transfer learning (PETL) methods show promise in adapting a pre-trained model to various downstream tasks while training only a small number of parameters. In the computer vision (CV) domain, numerous PETL algorithms have been proposed, but directly applying or fairly comparing them remains inconvenient. To address this challenge, we construct a Unified Visual PETL Benchmark (V-PETL Bench) for the CV domain by selecting 30 diverse, challenging, and comprehensive datasets from image recognition, video action recognition, and dense prediction tasks. On these datasets, we systematically evaluate 25 dominant PETL algorithms and open-source a modular and extensible codebase for fair evaluation. V-PETL Bench runs on NVIDIA A800 GPUs and requires approximately 310 GPU days. We release all checkpoints and training logs, making reproduction more efficient and accessible for researchers. Additionally, V-PETL Bench will be continuously updated with new PETL algorithms and CV tasks.
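To make the "training only a small number of parameters" claim concrete, the core idea behind many PETL methods (e.g., bottleneck adapters) can be sketched as follows. This is an illustrative toy, not V-PETL Bench code; the layer sizes and the adapter design are assumptions chosen only to show why the trainable fraction is small.

```python
# Minimal sketch of the PETL idea: a frozen pre-trained weight matrix
# plus a small trainable bottleneck adapter. Sizes are hypothetical.

def param_counts(d: int, r: int) -> tuple[int, int]:
    """Return (frozen, trainable) parameter counts for one layer.

    frozen:    d x d pre-trained weight, kept fixed during adaptation
    trainable: down-projection (d x r) + up-projection (r x d) adapter
    """
    frozen = d * d
    trainable = d * r + r * d
    return frozen, trainable

# Example: a ViT-like hidden size of 768 with adapter bottleneck r = 8.
frozen, trainable = param_counts(768, 8)
ratio = trainable / (frozen + trainable)
print(f"trainable fraction: {ratio:.2%}")  # -> trainable fraction: 2.04%
```

With a bottleneck of 8 against a hidden size of 768, only about 2% of the per-layer parameters are updated, which is the efficiency regime the benchmarked PETL algorithms target.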
