

Poster

Language Hierarchical Self-training for Detecting Twenty-thousand Classes using Image-level Supervision

Jiaxing Huang · Jingyi Zhang · Kai Jiang · Shijian Lu

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Recent studies on generalizable object detection have attracted increasing attention by drawing additional weak supervision from large-scale datasets with image-level labels. However, weakly-supervised detection learning often suffers from image-to-box label mismatch, i.e., image-level labels do not convey precise object information. We design Language Hierarchical Self-training (LHST), which introduces language hierarchy into weakly-supervised detector training for learning more generalizable detectors. LHST expands the image-level labels with language hierarchy and enables co-regularization between the expanded labels and self-training. Specifically, the expanded labels regularize self-training by providing richer supervision and mitigating the image-to-box label mismatch, while self-training allows assessing and selecting the expanded labels according to the predicted reliability. In addition, we design language hierarchical prompt generation, which introduces language hierarchy into prompt generation and helps bridge the vocabulary gaps between training and testing. Extensive experiments show that the proposed techniques achieve superior generalization performance consistently across 14 widely studied object detection datasets.
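The core idea sketched in the abstract, expanding an image-level label along a language hierarchy and then using self-training confidence to keep only reliable expanded labels, can be illustrated as follows. The hierarchy, class names, scores, and threshold below are illustrative assumptions, not the paper's actual implementation or data.

```python
# Hypothetical class hierarchy: child class -> list of ancestor classes
# (in practice such a hierarchy could come from a lexical resource like WordNet).
HIERARCHY = {
    "golden_retriever": ["dog", "mammal", "animal"],
    "tabby_cat": ["cat", "mammal", "animal"],
}


def expand_labels(image_labels):
    """Expand image-level labels with their language-hierarchy ancestors."""
    expanded = set(image_labels)
    for label in image_labels:
        expanded.update(HIERARCHY.get(label, []))
    return expanded


def select_reliable(expanded_labels, predicted_scores, threshold=0.5):
    """Keep expanded labels whose self-training confidence passes a threshold.

    This models the co-regularization direction in which self-training
    assesses and selects the expanded labels by predicted reliability.
    """
    return {lbl for lbl in expanded_labels
            if predicted_scores.get(lbl, 0.0) >= threshold}


# Example: one image annotated only with "golden_retriever".
labels = expand_labels(["golden_retriever"])
# Assumed detector confidences from a self-training round.
scores = {"golden_retriever": 0.9, "dog": 0.8, "mammal": 0.4, "animal": 0.7}
kept = select_reliable(labels, scores)  # low-confidence "mammal" is dropped
```

In the other co-regularization direction, the kept hierarchy labels would supply richer supervision than the single original tag, mitigating the image-to-box label mismatch the abstract describes.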
