Graph Neural Networks with Graph Structure Learning for Effective Representation Learning

Abstract

Graph-structured data is ubiquitous across a wide range of domains, from biology to social network analysis and recommender systems. Recently, various Graph Neural Networks (GNNs) have been proposed to learn representations over graph-structured data. By iteratively aggregating features from neighbors through non-linear transformations, GNNs have shown significant improvements over traditional methods and achieved state-of-the-art performance on tasks such as node classification and graph classification. Despite their effectiveness at learning representations on graphs, most GNNs are highly susceptible to the quality of the input graph structure, since they assume that the given graph is accurate and fixed. This is especially problematic when the graph structure is noisy: GNNs on a noisy graph often underperform a simple Multi-Layer Perceptron that does not use the graph structure at all. In addition, GNNs usually use the graph structure only to produce smoothed node features by aggregating neighbors' features, and do not explicitly leverage the structure to learn useful structural features. This makes GNNs rely heavily on smoothed node features rather than on the graph structure itself, so they often perform worse than simple heuristic methods on tasks where structural information is crucial. In this thesis, we investigate how Graph Neural Networks can jointly learn a useful graph structure and effective node representations on that structure. First, we introduce Graph Transformer Networks (GTNs), which learn to transform a heterogeneous input graph into useful meta-path graphs for each task and learn node representations on those graphs in an end-to-end fashion.
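The meta-path transformation that GTNs learn can be sketched as follows. This is a minimal illustrative re-implementation in plain Python, not the thesis code: it assumes a heterogeneous graph given as one adjacency matrix per edge type, softly selects edge types with softmax-normalized weights, and composes two selections by matrix multiplication to obtain a length-2 meta-path adjacency. The edge types, weights, and toy graph below are made up for illustration.

```python
import math

def softmax(ws):
    """Numerically stable softmax over a list of scalar weights."""
    m = max(ws)
    exps = [math.exp(w - m) for w in ws]
    s = sum(exps)
    return [e / s for e in exps]

def soft_select(adj_mats, weights):
    """Convex combination of typed adjacency matrices (soft edge-type selection)."""
    alphas = softmax(weights)
    n = len(adj_mats[0])
    out = [[0.0] * n for _ in range(n)]
    for a, A in zip(alphas, adj_mats):
        for i in range(n):
            for j in range(n):
                out[i][j] += a * A[i][j]
    return out

def matmul(A, B):
    """Dense matrix product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def gt_layer(adj_mats, w1, w2):
    """One sketched GT layer: compose two soft selections into a meta-path graph."""
    return matmul(soft_select(adj_mats, w1), soft_select(adj_mats, w2))

# Toy heterogeneous graph with two (hypothetical) edge types on 3 nodes.
A_pa = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]   # e.g., "paper-author" edges
A_pc = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]   # e.g., "paper-conference" edges

# Weights favoring type 1 then type 2 yield a length-2 meta-path adjacency:
# node 1 -> node 0 (type 1) -> node 2 (type 2) gives a large meta[1][2].
meta = gt_layer([A_pa, A_pc], [2.0, -2.0], [-2.0, 2.0])
```

In the full model these selection weights are learned end-to-end, and stacking layers yields longer meta-paths; this sketch only shows the length-2 composition step.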
Then we introduce an enhanced version of GTNs, Fast Graph Transformer Networks (FastGTNs), which implicitly transform the graphs without multiplying large adjacency matrices, an operation that demands excessive resources, while yielding transformations identical to those of GTNs. Finally, we introduce Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs), which learn useful structural features from an adjacency matrix and estimate overlapped neighborhoods for link prediction.
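As context for the overlapped neighborhoods mentioned above, the classic overlap heuristics that Neo-GNNs generalize can be sketched as follows. This is an illustrative plain-Python sketch of the Common Neighbors and Adamic-Adar link prediction scores, not the thesis implementation; the toy graph is made up, and Neo-GNNs replace these hand-crafted forms with learnable structural features.

```python
import math

def common_neighbors(adj, u, v):
    """CN(u, v) = |N(u) ∩ N(v)|, the number of shared neighbors."""
    return len(adj[u] & adj[v])

def adamic_adar(adj, u, v):
    """AA(u, v) = sum over shared neighbors w of 1 / log |N(w)|,
    down-weighting high-degree shared neighbors."""
    return sum(1.0 / math.log(len(adj[w])) for w in adj[u] & adj[v])

# Toy undirected graph as adjacency sets (illustrative only).
adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2},
}

cn = common_neighbors(adj, 1, 3)  # nodes 1 and 3 share neighbors {0, 2}
aa = adamic_adar(adj, 1, 3)       # both shared neighbors have degree 3
```

Such heuristics score a candidate link purely from the adjacency structure, which is exactly the structural signal that feature-smoothing GNNs tend to miss on link prediction.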


Table of Contents

Abstract
Abstract (in Korean)
Contents
List of Figures
List of Tables
1 Introduction
1.1 What is Graph-structured Data
1.2 Graph Neural Networks
1.3 Challenges on Graph Neural Networks
1.4 Structure of the Thesis
2 Graph Transformer Networks
2.1 Background
2.2 Methods
2.2.1 Preliminaries
2.2.2 Learning Meta-Path Graphs
2.2.3 Graph Transformer Networks
2.3 Experiments
2.3.1 Baselines
2.3.2 Results on Node Classification
2.3.3 Interpretation of Graph Transformer Networks
2.4 Summary
3 Fast Graph Transformer Networks
3.1 Background
3.2 Methods
3.2.1 Fast Graph Transformer Networks
3.2.2 Non-Local Operations
3.2.3 Relations to Other GNN Architectures
3.3 Experiments
3.3.1 Experimental Settings
3.3.2 Exactness of FastGTNs
3.3.3 Efficiency of FastGTNs
3.3.4 Results on Node Classification
3.4 Summary
4 Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks for Link Prediction
4.1 Background
4.2 Methods
4.2.1 Preliminaries
4.2.2 Neighborhood Overlap-based Heuristic Methods
4.2.3 Neighborhood Overlap-aware Graph Neural Networks
4.3 Experiments
4.3.1 Experimental Settings
4.3.2 Results on Link Prediction
4.3.3 Ablation Studies
4.3.4 Analysis on Learning Neighborhood Overlap-based Heuristic Methods
4.4 Summary
5 Conclusion
References
Acknowledgement
