Poster in Workshop: Workshop on robustness of zero/few-shot learning in foundation models (R0-FoMo)
What’s important here?: Opportunities and Challenges of LLM in retrieving information from Web Interface
Faria Huq · Jeffrey Bigham · Nikolas Martelaro
Large language models (LLMs) trained on large corpora of code exhibit a remarkable ability to understand HTML [1]. As web interfaces are mainly constructed using HTML, we designed an in-depth study to see how the code-understanding ability of LLMs can be used to retrieve and locate important elements for a user-given query (i.e., task description) in a web interface. In contrast with prior works, which primarily focused on autonomous web navigation, we decompose the problem into a more atomic operation: can LLMs identify the important information in a web page for a user-given query? This decomposition enables us to scrutinize the current capabilities of LLMs and uncover the opportunities and challenges they present. Our empirical experiments show that while LLMs exhibit a reasonable level of competence, there is still substantial room for improvement. We hope our investigation will inspire follow-up work on overcoming the current challenges in this domain.
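To make the atomic operation concrete, the following is a minimal sketch of how such a retrieval query might be posed to an LLM: the page HTML and the user's task are placed in a single prompt, and the model is asked to name the relevant elements. The `call_llm` hook, the prompt wording, and the example element ids are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only: framing "what's important here?" as one LLM call
# over a page's HTML and a user query. call_llm is a hypothetical hook for
# whichever LLM API is available; it returns a canned answer so the sketch runs.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat/completion API."""
    return "search-box, submit-btn"


def find_important_elements(html: str, query: str) -> list[str]:
    """Ask the LLM which element ids in `html` matter for `query`."""
    prompt = (
        "You are given the HTML of a web page and a user task.\n"
        f"HTML:\n{html}\n\n"
        f"Task: {query}\n"
        "List the ids of the elements needed to complete the task, "
        "separated by commas."
    )
    answer = call_llm(prompt)
    return [elem_id.strip() for elem_id in answer.split(",") if elem_id.strip()]


if __name__ == "__main__":
    page = """
    <form id="flight-search">
      <input id="search-box" placeholder="Destination city" />
      <button id="submit-btn">Search flights</button>
      <a id="privacy-link" href="/privacy">Privacy policy</a>
    </form>
    """
    print(find_important_elements(page, "Search for flights to Tokyo"))
    # Expected with the canned response above: ['search-box', 'submit-btn']
```

In this framing, no navigation or action execution is involved; the model only has to ground the query in the page's elements, which is the capability the study isolates and evaluates.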