Poster
AutoCV: Empowering Reasoning with Automated Process Labeling via Confidence Variation
Jianqiao Lu · Zhiyang Dou · Hongru WANG · Zeyu Cao · Jianbo Dai · Yunlong Feng · Zhijiang Guo
In this work, we propose a novel method named Automated Process Labeling via Confidence Variation (AutoCV) to enhance the reasoning capabilities of large language models (LLMs) by automatically annotating reasoning steps. Our approach begins by training a verification model on the correctness of final answers, enabling it to generate automatic process annotations. This verification model assigns a confidence score to each reasoning step, indicating the probability of arriving at the correct final answer from that point onward. We detect relative changes in the verification model's confidence scores across reasoning steps to automatically annotate the reasoning process. This alleviates the need for extensive manual annotation or the high computational cost of model-induced annotation approaches. We experimentally validate that the confidence variations learned by a verification model trained only on final-answer correctness can effectively identify errors in intermediate reasoning steps. We then demonstrate that the process annotations generated by AutoCV improve the accuracy of the verification model in selecting the correct answer from multiple outputs generated by LLMs. Notably, we achieve substantial improvements across five datasets in mathematics and commonsense reasoning. Our anonymized code is submitted with the paper and will be made publicly available.
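The core idea of confidence-variation labeling can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of a relative-drop threshold, and the threshold value are all assumptions for demonstration. It assumes a verifier has already produced a confidence score for each reasoning step, and flags a step when its confidence drops sharply relative to the preceding step.

```python
# Hypothetical sketch of confidence-variation process labeling.
# Assumes a per-step confidence list from a trained verifier; the
# threshold of 0.2 and the labeling rule are illustrative choices.

def label_steps(confidences, drop_threshold=0.2):
    """Label each reasoning step 1 (likely correct) or 0 (likely erroneous)
    based on the relative change in verifier confidence versus the prior step."""
    labels = []
    prev = confidences[0]
    for c in confidences:
        # Relative change from the previous step's confidence score.
        rel_change = (c - prev) / max(prev, 1e-8)
        labels.append(0 if rel_change < -drop_threshold else 1)
        prev = c
    return labels

# A sharp confidence drop at the third step flags it as a likely error.
print(label_steps([0.9, 0.88, 0.35, 0.3]))  # → [1, 1, 0, 1]
```

Labels produced this way can then serve as automatic process supervision, replacing manual step-level annotation.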