Quantifying uncertainty in graph neural network explanations.
Jiang, Junji; Ling, Chen; Li, Hongyi; Bai, Guangji; Zhao, Xujiang; Zhao, Liang.
Affiliations
  • Jiang J; School of Management, Fudan University, Shanghai, China.
  • Ling C; Department of Computer Science, Emory University, Atlanta, GA, United States.
  • Li H; School of Computer Science and Technology, Xidian University, Shaanxi, China.
  • Bai G; Department of Computer Science, Emory University, Atlanta, GA, United States.
  • Zhao X; Data Science & System Security, NEC Labs America, Princeton, NJ, United States.
  • Zhao L; Department of Computer Science, Emory University, Atlanta, GA, United States.
Front Big Data ; 7: 1392662, 2024.
Article in English | MEDLINE | ID: mdl-38784676
ABSTRACT
In recent years, analyzing explanations for the predictions of Graph Neural Networks (GNNs) has attracted increasing attention. Despite this progress, most existing methods do not adequately consider the inherent uncertainties stemming from the randomness of model parameters and graph data, which may lead to overconfident and misleading explanations. Quantifying these uncertainties is challenging for most GNN explanation methods because they obtain the explanation in a post-hoc, model-agnostic manner, without accounting for the randomness of graph data and model parameters. To address these problems, this paper proposes a novel uncertainty quantification framework for GNN explanations. To mitigate the randomness of graph data in the explanation, our framework accounts for two distinct data uncertainties, allowing a direct assessment of the uncertainty in GNN explanations. To mitigate the randomness of learned model parameters, our method learns the parameter distribution directly from the data, obviating the need for assumptions about specific distributions. Moreover, the explanation uncertainty arising from model parameters is also quantified based on the learned parameter distributions. This holistic approach can integrate with any post-hoc GNN explanation method. Empirical results show that our proposed method sets a new standard for GNN explanation performance across diverse real-world graph benchmarks.
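The general idea the abstract describes — propagating randomness in both the graph data and the model parameters through a post-hoc explainer to obtain an uncertainty estimate for the explanation itself — can be illustrated with a minimal Monte Carlo sketch. This is not the paper's method: the `explain` function, the Gaussian parameter distribution, and the edge-dropout perturbation below are all hypothetical stand-ins chosen only to show the sampling pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def explain(adj, params):
    """Toy post-hoc explainer: per-edge importance scores in [0, 1]."""
    scores = adj * np.abs(np.tanh(params @ adj))  # hypothetical scoring rule
    return scores / (scores.max() + 1e-12)

# Small example graph (adjacency matrix) and a learned parameter
# distribution, here assumed Gaussian purely for illustration.
adj = np.array([[0., 1., 1.],
                [1., 0., 0.],
                [1., 0., 0.]])
param_mean, param_std = np.ones(3), 0.1 * np.ones(3)

samples = []
for _ in range(200):
    params = rng.normal(param_mean, param_std)        # model-parameter draw
    noisy_adj = adj * (rng.random(adj.shape) > 0.05)  # data perturbation (edge dropout)
    samples.append(explain(noisy_adj, params))

samples = np.stack(samples)
mean_expl = samples.mean(axis=0)   # aggregated point explanation
uncertainty = samples.std(axis=0)  # per-edge uncertainty estimate
```

Because the sampling loop only calls the explainer as a black box, the same pattern wraps around any post-hoc explanation method, which is the integration property the abstract claims for the proposed framework.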
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Front Big Data Year: 2024 Document type: Article Country of affiliation: China Country of publication: Switzerland