Poster
Towards More Efficient Property Inference Attacks on Graph Neural Networks
Hanyang Yuan · Jiarong Xu · Renhong Huang · Mingli Song · Chunping Wang · Yang Yang
Graph neural networks (GNNs) have attracted considerable attention due to their diverse applications. However, the scarcity and limited quality of graph data pose challenges to training them in practical settings. To facilitate the development of effective GNNs, companies and researchers often seek external collaboration. Yet directly sharing data raises privacy concerns, motivating data owners to train GNNs on their private graphs and share the trained models instead. Unfortunately, these released models may still inadvertently disclose sensitive properties of their training graphs (e.g., the average default rate in a transaction network), with severe consequences for data owners. It is therefore vital for data owners to evaluate the risk of sensitive information leakage from shared models by devising graph property inference attacks. Existing approaches typically train numerous shadow models to develop such attacks, which is computationally intensive and often impractical. To address this issue, we propose an efficient graph property inference attack that leverages model approximation techniques. Our method trains only a small set of models on graphs and then uses model approximation to generate a sufficient number of approximated models for the attack. To select approximated models with minimal approximation error, we theoretically analyze the error bound of each approximation. In addition, we propose a diversity-enhancing mechanism based on edit distance to ensure diversity among the approximated models. Extensive experiments across six real-world scenarios demonstrate our method's substantial improvement over baselines, with average gains of 2.7% in attack accuracy and 5.6% in ROC-AUC, while running 6.5× faster than the best baseline. Our code is available at: https://anonymous.4open.science/r/efficient_gpia-8F47
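The abstract outlines three ingredients: model approximation in place of shadow-model retraining, error-bound-based selection, and edit-distance-based diversity. The Python sketch below illustrates the general shape of such a pipeline under simplifying assumptions; the one-layer GCN, the first-order approximation rule, and the edge-count edit distance (`gcn_forward`, `train_model`, `approximate_model`, `select_diverse`) are illustrative stand-ins rather than the authors' implementation, which is available in the linked repository.

```python
# Minimal sketch of an efficient graph property inference attack via model
# approximation. The specific approximation rule and edit distance below are
# illustrative assumptions, not the authors' method.
import torch
import torch.nn.functional as F

def gcn_forward(w, adj, x):
    # One-layer dense GCN with self-loops and row normalization.
    a_hat = adj + torch.eye(adj.size(0))
    a_norm = a_hat / a_hat.sum(dim=1, keepdim=True).clamp(min=1.0)
    return a_norm @ x @ w  # node logits

def train_model(adj, x, y, epochs=200, lr=0.1):
    # Full training: the expensive step the attack avoids repeating.
    w = torch.zeros(x.size(1), int(y.max()) + 1, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(gcn_forward(w, adj, x), y).backward()
        opt.step()
    return w.detach()

def approximate_model(w, adj, adj_pert, x, y, step=0.1):
    # Model approximation (first-order stand-in): instead of retraining on a
    # perturbed graph, take one corrective step along the gradient of the
    # loss difference induced by the edge perturbation.
    w = w.clone().requires_grad_(True)
    delta = (F.cross_entropy(gcn_forward(w, adj_pert, x), y)
             - F.cross_entropy(gcn_forward(w, adj, x), y))
    (g,) = torch.autograd.grad(delta, w)
    return (w - step * g).detach()

def select_diverse(candidates, k, min_dist=2):
    # Diversity filter: greedily keep candidate graphs whose pairwise "edit
    # distance" (here, number of differing undirected edges) is large enough.
    chosen = []
    for adj_c in candidates:
        if all((adj_c != q).sum().item() / 2 >= min_dist for q in chosen):
            chosen.append(adj_c)
        if len(chosen) == k:
            break
    return chosen
```

In this sketch each approximated model costs a single gradient computation rather than a full training run, which is where the efficiency gain over shadow-model training comes from. A complete attack would then fit a meta-classifier on the (approximated) models' outputs, labeled by the sensitive property; the error-bound-based selection of low-error approximations is omitted here.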