Authors: Zhendong Zhao, Xiaojun Chen, Dakui Wang, Yuexin Xuan, Gang Xiong
Keywords: Graph Federated Learning, Adversarial Attacks
Abstract: Despite achieving superior performance on many graph-related tasks, recent works have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks on graph structures. In particular, by adding or removing a small number of carefully selected edges in a graph, an adversary can maliciously manipulate a GNN-based classifier. This vulnerability to adversarial attacks raises numerous concerns about deploying GNNs in real-world applications. Previous research aims to overcome the negative impact of adversarial edges with graph-based regularization derived from heuristic properties. However, real-world graph data is far more intricate, and these defense mechanisms do not fully exploit the comprehensive semantic information of graph data. In this work, we present a novel defense method, Holistic Semantic Constraint Graph Neural Network (HSC-GNN), which jointly models the node features, labels, and graph structure to mitigate the effects of malicious perturbations. Extensive experimental evaluation on various graph datasets demonstrates that our approach yields more robust node embeddings and better performance than existing models.