Poster
in
Workshop: Workshop on Responsibly Building Next Generation of Multimodal Foundation Models

Comparison Visual Instruction Tuning

Wei Lin · Muhammad Jehanzeb Mirza · Sivan Doveh · Rogerio Feris · Raja Giryes · Sepp Hochreiter · Leonid Karlinsky

Keywords: [ Large Multimodal Models ] [ commonalities and differences ] [ visual instruction tuning ]


Abstract:

Comparing two images in terms of Commonalities and Differences (CaD) is a fundamental human capability that forms the basis of advanced visual reasoning and interpretation. It is essential for generating detailed and contextually relevant descriptions, performing comparative analysis, detecting novelty, and making informed decisions based on visual data. Surprisingly, however, little attention has been given to these fundamental concepts in the best current mimic of human visual intelligence - Large Multimodal Models (LMMs). We develop and contribute a new two-phase approach, CaD-VI, for collecting synthetic visual instructions, together with an instruction-following dataset, CaD-Inst, containing 349K image pairs with CaD instructions collected using CaD-VI. Our approach significantly improves the CaD spotting capabilities of LMMs, advancing the SOTA on a diverse set of related tasks by up to 17.5%. It is also complementary to existing difference-only instruction datasets, allowing automatic targeted refinement of those resources and increasing their effectiveness for CaD tuning by up to 10%. Additionally, we propose an evaluation benchmark with 7.5K open-ended QAs to assess the CaD understanding abilities of LMMs.