TOWARD ROBUST AND INTERPRETABLE GRAPH AND IMAGE REPRESENTATION LEARNING
Although deep learning models continue to gain momentum, their robustness and interpretability have long been a major concern because of the complexity of such models. In this dissertation, we studied several topics on the robustness and interpretability of convolutional neural networks (CNNs) and graph neural networks (GNNs). We first identified the structural problem of deep convolutional neural networks that leads to adversarial examples, and we defined DNN uncertainty regions. We also argued that the generalization error, the large-sample theoretical guarantee established for DNNs, cannot adequately capture the phenomenon of adversarial examples. Second, we studied dropout in GNNs, an effective regularization approach for preventing overfitting. Unlike CNNs, GNNs usually have shallow structures, because a deep GNN normally suffers performance degradation. We studied different dropout schemes and established a connection between dropout and over-smoothing in GNNs. Building on this connection, we developed layer-wise compensation dropout, which allows a GNN to go deeper without suffering performance degradation. We also developed heteroscedastic dropout, which effectively handles large numbers of missing node features caused by heavy experimental noise or privacy constraints. Lastly, we studied the interpretability of graph neural networks. We developed a self-interpretable GNN structure that denoises uninformative edges and features, leading to a more efficient message-passing process. Both prediction accuracy and explanation accuracy were boosted compared with baseline models.
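For readers unfamiliar with the baseline technique, the standard (inverted) feature dropout that the dissertation's layer-wise compensation and heteroscedastic variants build on can be sketched as follows. This is a generic illustration, not the dissertation's method; the function name and the use of NumPy are assumptions for the sketch.

```python
import numpy as np

def dropout(x: np.ndarray, p: float, rng: np.random.Generator,
            training: bool = True) -> np.ndarray:
    """Inverted dropout on a node-feature matrix.

    During training, each entry is zeroed with probability p and the
    survivors are rescaled by 1/(1-p), so the expected activation is
    unchanged (E[out] = x). At inference time the input passes through
    untouched.
    """
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p   # keep each entry with probability 1 - p
    return x * mask / (1.0 - p)
```

In a GNN, this would typically be applied to the node-feature matrix between message-passing layers; variants such as DropEdge instead drop entries of the adjacency structure, which is where the connection to over-smoothing arises.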
Degree Type
- Doctor of Philosophy
Department
- Statistics
Campus location
- West Lafayette