

Poster

Computerized Adaptive Testing via Collaborative Ranking

Zirui Liu · Yan Zhuang · Qi Liu · Jiatong Li · Yuren Zhang · Zhenya Huang · Jinze Wu · Shijin Wang

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

As machine learning advances in intelligent education, Computerized Adaptive Testing (CAT) has garnered significant attention. CAT is an efficient testing methodology that can accurately estimate a student's ability with a minimal number of questions. Compared to traditional paper-and-pencil tests, CAT requires fewer questions to achieve the same precision in ability estimation, which has led to its adoption in mainstream selective exams such as the GMAT and GRE. While CAT methods have traditionally prioritized accurate ability estimation, current state-of-the-art approaches perform notably poorly at student ranking, particularly in scenarios where ranking is crucial (e.g., high-stakes exams). This paper addresses this ranking-inconsistency issue by emphasizing the importance of aligning test outcomes with students' true underlying abilities. Departing from the conventional paradigm of testing students independently, we introduce a collaborative method that leverages inter-student information to improve student ranking. Concretely, we propose the Collaborative Computerized Adaptive Testing (CCAT) framework, which uses collaborative students as anchors to assist in ranking test-takers, offering both theoretical guarantees and experimental validation of ranking consistency.
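To make the anchor idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of ranking test-takers against a shared pool of anchor students: each test-taker is scored by the fraction of anchors they outperform on the questions both parties answered, and that score induces a ranking. All function names and the response format (`question_id -> 0/1` correctness) are illustrative assumptions.

```python
# Hypothetical sketch of anchor-based collaborative ranking.
# Anchors are a fixed pool of previously tested students; a test-taker's
# score is the fraction of anchors they beat (more correct answers on
# the items shared between the two response records).

def anchor_score(responses, anchor_responses):
    """Fraction of anchors outperformed by the test-taker on shared items.

    responses: dict mapping question_id -> 0/1 correctness for the test-taker
    anchor_responses: list of such dicts, one per anchor student
    """
    beaten = 0
    for anchor in anchor_responses:
        shared = responses.keys() & anchor.keys()
        if not shared:
            continue  # no common items, comparison is uninformative
        mine = sum(responses[q] for q in shared)
        theirs = sum(anchor[q] for q in shared)
        if mine > theirs:
            beaten += 1
    return beaten / len(anchor_responses)


def rank_test_takers(all_responses, anchor_responses):
    """Return test-taker ids sorted by descending anchor score."""
    scores = {tid: anchor_score(r, anchor_responses)
              for tid, r in all_responses.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Because every test-taker is compared against the same anchors, the resulting ranking is consistent across test-takers even when they answered different adaptive question sequences, which is the intuition the paper formalizes.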
