Poster in Workshop: GenAI for Health: Potential, Trust and Policy Compliance
Leveraging Large Language Models for Zero-Shot Detection and Mitigation of Data Poisoning in Wearable AI Systems
Malithi Mithsara Wanniarachchi Kankanamge · Abdur Shahid · Ning Yang
Keywords: [ Large Language Models ] [ Data Poisoning Attacks ] [ Wearable Health ]
Wearable AI systems, particularly for Human Activity Recognition (HAR), are becoming integral to applications in healthcare, security, and personal fitness due to the widespread adoption of smart devices and wearable technologies. However, the increasing reliance on machine learning models in HAR introduces significant risks, especially from poisoning attacks that compromise system reliability and data integrity. This paper explores the potential of Large Language Models (LLMs) to detect poisoning attacks and sanitize compromised data in wearable AI systems. Building on ongoing research into integrating LLMs within cyber-physical systems, we focus on sensor-based interactions with the physical world. Our case study seeks to answer the following question: How effective are LLMs at detecting and sanitizing poisoning attacks on human activity sensor data? In a zero-shot setting, we evaluate the performance of models such as ChatGPT 3.5, ChatGPT 4, and Gemini, providing insights into the viability of LLMs for real-time defense and data integrity in wearable AI systems.
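
To make the zero-shot setup concrete, the sketch below (not the authors' implementation) shows one plausible way to prompt an LLM to flag and sanitize suspect readings in a labeled accelerometer window. It assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the model name, prompt wording, and sensor values are all illustrative.

# Minimal sketch: zero-shot LLM audit of a HAR sensor window for poisoning.
# Assumes the OpenAI Python SDK (openai>=1.0); values below are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical 3-axis accelerometer window labeled "walking"; the last
# reading is an injected outlier standing in for a poisoned sample.
window = [
    {"t": 0.00, "ax": 0.12, "ay": 9.71, "az": 0.33},
    {"t": 0.02, "ax": 0.15, "ay": 9.68, "az": 0.31},
    {"t": 0.04, "ax": 0.11, "ay": 9.74, "az": 0.35},
    {"t": 0.06, "ax": 42.0, "ay": -38.5, "az": 55.1},  # implausible spike
]

# Zero-shot prompt: no labeled examples of poisoned vs. clean data are given.
prompt = (
    "You are auditing wearable sensor data for poisoning attacks.\n"
    "The following 3-axis accelerometer readings (m/s^2) are labeled "
    "'walking'. List the indices of any readings that look injected or "
    "physically implausible, and suggest a sanitized replacement for each. "
    "Reply as JSON with keys 'poisoned_indices' and 'replacements'.\n\n"
    + json.dumps(window, indent=2)
)

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model; illustrative choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the audit output as deterministic as possible
)

print(response.choices[0].message.content)

Requesting JSON output and setting temperature to 0 makes the model's verdict easier to parse and compare across models, which matters when the same window is sent to several LLMs (e.g., ChatGPT 3.5, ChatGPT 4, Gemini) for evaluation.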