Poster in Workshop: Multimodal Algorithmic Reasoning Workshop
Chitrarth: Bridging Vision and Language for a Billion People
Shaharukh Khan · Ayush Tarun · Abhinav Ravi · Ali Faraz · Praveen Kumar Pokala · Anagha Bhangare · Raja Kolla · Chandra Khatri · Shubham Agarwal
Recent multimodal foundation models are primarily trained on English or high-resource European language data, which limits their applicability to other medium- and low-resource languages. To address this limitation, we introduce Chitrarth (Chitra: Image; Artha: Meaning), an inclusive Vision-Language Model (VLM) specifically targeting the rich linguistic diversity and visual reasoning across 10 prominent Indian languages. Our model integrates a state-of-the-art (SOTA) multilingual Large Language Model (LLM) with a vision module and is trained primarily on multilingual image-text data. Furthermore, we introduce BharatBench, a comprehensive framework for evaluating VLMs across various Indian languages, ultimately contributing to more diverse and effective AI systems. Our model achieves SOTA results on benchmarks across low-resource languages while retaining its efficiency in English. Through our research, we aim to set new benchmarks in multilingual-multimodal capabilities, offering substantial improvements over existing models and establishing a foundation to facilitate future advancements in this arena.
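The abstract describes the common recipe of coupling a vision module to a multilingual LLM. The sketch below is a minimal, hypothetical illustration of that generic architecture (a vision encoder whose features are projected into the language model's embedding space and prepended to the text tokens); it is not the authors' Chitrarth implementation, and all names and dimensions (SimpleVLM, vision_dim, llm_dim, vocab_size) are placeholder assumptions.

```python
import torch
import torch.nn as nn


class SimpleVLM(nn.Module):
    """Minimal vision-language model sketch: projected image features are
    prepended to text token embeddings before a language-model backbone."""

    def __init__(self, vision_dim=256, llm_dim=512, vocab_size=32000):
        super().__init__()
        # Placeholders for a pretrained vision encoder and multilingual LLM.
        self.vision_encoder = nn.Linear(vision_dim, vision_dim)
        # Projector maps vision features into the LLM embedding space.
        self.projector = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        self.text_embed = nn.Embedding(vocab_size, llm_dim)
        # Tiny transformer stack standing in for the LLM backbone.
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.lm_head = nn.Linear(llm_dim, vocab_size)

    def forward(self, image_patches, input_ids):
        vis = self.projector(self.vision_encoder(image_patches))  # (B, P, llm_dim)
        txt = self.text_embed(input_ids)                          # (B, T, llm_dim)
        hidden = self.backbone(torch.cat([vis, txt], dim=1))      # image + text tokens
        return self.lm_head(hidden)                               # (B, P+T, vocab)


if __name__ == "__main__":
    model = SimpleVLM()
    patches = torch.randn(1, 16, 256)         # dummy image patch features
    ids = torch.randint(0, 32000, (1, 8))     # dummy multilingual text tokens
    print(model(patches, ids).shape)          # torch.Size([1, 24, 32000])
```

In practice, multilingual VLM training of this style typically keeps the vision encoder and LLM largely pretrained and focuses adaptation on the projector and language model using multilingual image-text pairs, which is consistent with the training setup the abstract summarizes.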