On the Robustness of Post-hoc GNN Explainers to Label Noise

Zhiqiang Zhong, Yangqianzi Jiang, Davide Mottin

Research output: Contribution to conference › Paper › Research › peer-review


Abstract

Proposed as a solution to the inherent black-box limitations of graph neural networks (GNNs), post-hoc GNN explainers aim to provide precise and insightful explanations of the behaviours exhibited by trained GNNs. Despite their recent notable advancements in academic and industrial contexts, the robustness of post-hoc GNN explainers to label noise remains unexplored. To bridge this gap, we conduct a systematic empirical investigation to evaluate the efficacy of diverse post-hoc GNN explainers under varying degrees of label noise. Our results reveal several key insights: Firstly, post-hoc GNN explainers are susceptible to label perturbations. Secondly, even minor levels of label noise, inconsequential to GNN performance, substantially harm the quality of the generated explanations. Lastly, we discuss the progressive recovery of explanation effectiveness as noise levels escalate.
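The abstract does not spell out the noise model, but "varying degrees of label noise" typically refers to uniform label flipping applied before training the GNN and running the explainer. Below is a minimal, hypothetical sketch of such a perturbation; the function name `inject_label_noise` and its parameters are illustrative assumptions, not code from the paper.

```python
import torch

def inject_label_noise(labels: torch.Tensor, num_classes: int, noise_rate: float,
                       seed: int = 0) -> torch.Tensor:
    """Uniformly flip a fraction `noise_rate` of node labels to a different random class."""
    gen = torch.Generator().manual_seed(seed)
    noisy = labels.clone()
    num_nodes = noisy.size(0)
    num_noisy = int(noise_rate * num_nodes)
    # Pick the nodes whose labels will be corrupted.
    idx = torch.randperm(num_nodes, generator=gen)[:num_noisy]
    # Shift each selected label by a random offset in [1, num_classes - 1],
    # so the corrupted label is guaranteed to differ from the original one.
    offsets = torch.randint(1, num_classes, (num_noisy,), generator=gen)
    noisy[idx] = (noisy[idx] + offsets) % num_classes
    return noisy

# Toy usage: corrupt 20% of the labels of a 3-class node-classification task.
clean = torch.randint(0, 3, (1000,))
noisy = inject_label_noise(clean, num_classes=3, noise_rate=0.2)
print((clean != noisy).float().mean())  # ≈ 0.20
```

A GNN trained on `noisy` labels would then be handed to a post-hoc explainer (e.g. GNNExplainer-style methods), and the resulting explanations compared against those obtained with clean labels; this is a sketch of the setup the abstract describes, not the authors' exact protocol.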
Original language: English
Publication date: Dec 2022
Publication status: Published - Dec 2022
Event: Learning on Graphs Conference 2022 - Virtual
Duration: 9 Dec 2022 - 12 Dec 2022

Conference

Conference: Learning on Graphs Conference 2022
Location: Virtual
Period: 09/12/2022 - 12/12/2022
