
The deep learning techniques known as “graph neural networks” (GNNs) operate in the graph domain. These networks have lately found use in a variety of fields, including computer vision, recommender systems, and combinatorial optimization, to name a few.

In addition, these networks can be used to represent complex systems, including social networks, protein-protein interaction networks, knowledge graphs, and others in several fields of study.

Unlike data such as images, graph data lives in a non-Euclidean space. Graph analysis is used to classify nodes, predict links, and cluster data.

In this article, we’ll examine graph neural networks in detail, cover their main types, and work through a practical example using PyTorch.

## So, What Is a Graph?

A graph is a data structure made up of nodes (also called vertices) and edges. The edges define the connections between the nodes. If the edges have a direction, the graph is said to be directed; otherwise, it is undirected.

A good application of graphs is modeling the relationships among various individuals in a social network. When dealing with complex circumstances, such as links and exchanges, graphs are very helpful.

They are employed in recommendation systems, semantic analysis, social network analysis, and pattern recognition. Creating graph-based solutions is a relatively new field that offers insightful understanding of complex and interrelated data.

## Graph Neural Network

Graph neural networks are specialized neural network types that can operate on graph-structured data. They draw heavily on graph embedding techniques and convolutional neural networks (CNNs).

Graph Neural Networks are employed in tasks that include predicting nodes, edges, and graphs.

- CNNs are used to classify images. Similarly, GNNs can predict a class from graph structure — an image can itself be viewed as a grid graph in which each pixel is a node.
- Recurrent neural networks are used for text categorization. Similarly, GNNs operate on graph structures in which each word in a sentence is a node.

GNNs are built from neural networks to make predictions at the node, edge, or whole-graph level. A node-level prediction, for instance, can solve a problem like spam detection.

Link prediction is a typical case in recommender systems and might be an example of an edge-wise prediction problem.

## Graph Neural Network Types

Numerous GNN variants exist, and the majority of them build on graph convolutions. In this section we will look at the most well-known ones.

### Graph Convolutional Networks (GCNs)

GCNs are comparable to classic CNNs: they learn features by looking at neighboring nodes. A GCN aggregates node vectors, passes the result to a dense layer, and applies an activation function to add non-linearity.

In essence, a GCN is made up of a graph convolution, a linear layer, and a non-linear activation function. GCNs come in two main varieties: Spectral Convolutional Networks and Spatial Convolutional Networks.

### Graph Auto-Encoder Networks

A graph auto-encoder uses an encoder to learn a representation of the graph and a decoder that tries to reconstruct the input graph. The encoder and decoder are connected by a bottleneck layer.

Since auto-encoders do an excellent job of handling class imbalance, they are frequently utilized in link prediction.

### Recurrent Graph Neural Networks (RGNNs)

RGNNs learn the optimal diffusion pattern and can handle multi-relational graphs, in which a single node has numerous relations. This type of graph neural network uses regularizers to increase smoothness and reduce over-parameterization.

RGNNs require less computing power to achieve good results. They are utilized for text generation, machine translation, speech recognition, image captioning, video tagging, and text summarization.

### Gated Graph Neural Networks (GGNNs)

When it comes to tasks with long-term dependencies, they outperform RGNNs. Gated graph neural networks enhance recurrent graph neural networks by adding node, edge, and time gates for long-term dependencies.

The gates function similarly to those in Gated Recurrent Units (GRUs): they are used to remember and forget information at different stages.

## Implementing a Graph Neural Network Using PyTorch

The specific problem we’ll focus on is a common node-classification task. We have a sizable social network of GitHub developers, called musae-github, which was collected from the public GitHub API.

Nodes represent developers (platform users) who have starred at least 10 repositories, and edges show the mutual follower relationships between them (the word mutual indicates an undirected relationship).

Node features are extracted from each user’s location, starred repositories, employer, and email address. Our task is to predict whether a GitHub user is a web developer or a machine learning developer; the target label was derived from each user’s job title.

### Installing PyTorch

To begin, we first need to install PyTorch. You can pick the install command for your machine from the official PyTorch website. Here is mine:
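The exact command depends on your OS and CUDA version; a CPU-only setup (an assumption here, together with the package list) might look like:

```shell
# CPU-only example install — pick the command for your platform
# from the PyTorch "Get Started" page if you need GPU support.
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install torch-geometric pandas networkx matplotlib
```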

### Importing modules

Now, we import the necessary modules:

### Importing and exploring the data

The following step is to read the data and plot the first five rows and the last five rows from the labels file.
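In the real walkthrough the labels file would be read with pd.read_csv("musae_git_target.csv") (file name assumed); a tiny stand-in frame with the relevant columns keeps this sketch self-contained:

```python
import pandas as pd

# Stand-in for: targets = pd.read_csv("musae_git_target.csv")
targets = pd.DataFrame({
    "id": [0, 1, 2, 3, 4, 5],
    "name": ["u0", "u1", "u2", "u3", "u4", "u5"],
    "ml_target": [0, 1, 0, 0, 1, 0],
})

print(targets.head())  # first five rows
print(targets.tail())  # last five rows
```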

Only two of the four columns—the node’s id (i.e., user) and ml_target, which is 1 if the user is a member of the machine learning community and 0 otherwise—are relevant to us in this situation.

Given that there are just two classes, we can now be certain that our task is a binary classification issue.

Class balance is another crucial factor to consider: with a significant class imbalance, a classifier can simply predict the majority class rather than learning the underrepresented one.

Plotting the histogram (frequency distribution) reveals some imbalance: there are fewer machine learning samples (label = 1) than web developer samples (label = 0).
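One way to sketch that plot (again with a small stand-in for the real labels frame):

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import pandas as pd

# stand-in labels: 1 = machine learning community, 0 = web developer
targets = pd.DataFrame({"ml_target": [0, 0, 0, 0, 1, 1]})

counts = targets["ml_target"].value_counts()
counts.plot(kind="bar", rot=0, title="Class distribution")
plt.savefig("class_distribution.png")
print(counts.to_dict())  # {0: 4, 1: 2}
```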

### Feature Encoding

The nodes’ features file tells us which features are associated with each node. We can encode those features by implementing our own encoding method.

We use this method to encode a small part of the network (say, 60 nodes) for display. The code is listed here.
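A minimal sketch of such an encoder: in musae-github each node comes with a list of integer feature ids, which can be turned into a multi-hot matrix (the function name and the toy 3-node slice are assumptions for illustration):

```python
import numpy as np

def encode_data(features_per_node, n_features):
    """Multi-hot encode per-node feature-id lists into a dense matrix."""
    x = np.zeros((len(features_per_node), n_features), dtype=np.float32)
    for node_id, feat_ids in features_per_node.items():
        x[node_id, feat_ids] = 1.0  # set a 1 in each feature's column
    return x

# a "light" 3-node slice standing in for the 60-node display subset
demo = {0: [1, 3], 1: [0], 2: [2, 3]}
x = encode_data(demo, n_features=4)
print(x.shape)  # (3, 4)
```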

### Designing and displaying graphs

We’ll utilize torch_geometric.data to build our graph.

Data is a plain Python object for modeling a single graph with various (optional) attributes. We will create our graph object using this class and the following attributes, all of which are torch tensors:

- x: holds the encoded node features; its shape is [number of nodes, number of features].
- y: holds the node labels; its shape is [number of nodes].
- edge_index: to describe an undirected graph, we expand the original edge indices so that every pair of connected nodes is linked by two directed edges pointing in opposite directions. Between nodes 100 and 200, for instance, we need one edge pointing from 100 to 200 and another from 200 to 100. With the edge indices expanded this way, the tensor shape is [2, 2 * number of original edges].
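The expansion step can be sketched with plain torch (the two example edges are assumptions):

```python
import torch

# Two undirected edges stored once each: 100—200 and 200—300.
edges = torch.tensor([[100, 200], [200, 300]], dtype=torch.long)

# Append the reverse of every edge, then transpose to the
# [2, num_edges] layout that torch_geometric expects.
edge_index = torch.cat([edges, edges.flip(1)], dim=0).t().contiguous()
print(edge_index.shape)  # torch.Size([2, 4])
```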

We create our own draw_graph method to display a graph. The first step is to convert our homogeneous graph into a NetworkX graph, which can then be drawn with networkx.draw.

### Building our GNN model

We begin by encoding the whole dataset by calling encode_data with light=False, and then call construct_graph with light=False to build the entire graph. We won’t attempt to draw this large graph, assuming you’re working on a local machine with limited resources.

Masks are binary vectors that mark, with 0s and 1s, which nodes belong to each split: they tell the training phase which nodes to use for training and the inference phase which nodes are the test data.

The AddTrainValTestMask class from torch_geometric.transforms takes a graph and adds such a node-level split through its train_mask, val_mask, and test_mask attributes, letting us specify how the masks should be constructed.

We use only 10% of the data for training, 60% as the test set, and 30% as the validation set.

Now we stack two GCNConv layers. The first takes the number of node features in our graph as its input feature count.

We apply a relu activation function to its output and feed the latent features to the second layer, which has as many output features as we have classes.

In the forward function, GCNConv can accept several arguments besides x, such as edge_index and edge_weight, but in our case we only need x and edge_index.

Despite the fact that our model will be able to predict the class of every node in the graph, we still need to determine the accuracy and loss for each set separately depending on the phase.

For instance, during training, we only want to utilize the training set to determine the accuracy and training loss, and therefore this is where our masks come in handy.

To calculate the appropriate loss and accuracy, we define masked_loss and masked_accuracy functions.
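A minimal sketch of the two helpers (the exact signatures are assumptions; the idea is to index logits and labels with the boolean mask before scoring):

```python
import torch
import torch.nn.functional as F

def masked_loss(logits, labels, mask):
    # cross-entropy computed over the masked nodes only
    return F.cross_entropy(logits[mask], labels[mask])

def masked_accuracy(logits, labels, mask):
    preds = logits[mask].argmax(dim=1)
    return (preds == labels[mask]).float().mean()

logits = torch.tensor([[2.0, 0.1], [0.1, 2.0], [2.0, 0.1]])
labels = torch.tensor([0, 1, 1])
mask = torch.tensor([True, True, False])  # score the first two nodes only
print(float(masked_accuracy(logits, labels, mask)))  # 1.0
```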

### Training the model

Now we define the training loop, for which we will use the torch.optim.Adam optimizer.

We’ll conduct the training for a certain number of epochs while keeping an eye on the validation accuracy.

We also plot the training losses and accuracies across epochs.
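The loop can be sketched as follows. To keep the sketch runnable anywhere, a linear layer and synthetic node features stand in for the GCN and the real graph; in the actual training you would call the model with (x, edge_index) and use masked_loss with the train mask.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# synthetic node features/labels standing in for the encoded graph data
x = torch.randn(200, 8)
y = (x[:, 0] > 0).long()
train_mask = torch.zeros(200, dtype=torch.bool)
train_mask[:120] = True
val_mask = ~train_mask

model = torch.nn.Linear(8, 2)  # stand-in for the two-layer GCN
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

losses, val_accs = [], []
for epoch in range(100):
    model.train()
    optimizer.zero_grad()
    out = model(x)
    loss = F.cross_entropy(out[train_mask], y[train_mask])
    loss.backward()
    optimizer.step()
    model.eval()
    with torch.no_grad():  # monitor validation accuracy each epoch
        val_acc = (model(x)[val_mask].argmax(1) == y[val_mask]).float().mean()
    losses.append(loss.item())
    val_accs.append(val_acc.item())

print(f"final loss {losses[-1]:.3f}, final val acc {val_accs[-1]:.3f}")
```

Feeding the recorded losses and val_accs lists to matplotlib reproduces the training curves described above.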

## Disadvantages of Graph Neural Network

Using GNNs has a few disadvantages. Understanding them makes it clear when to employ GNNs and how to improve the performance of our machine learning models.

- Most neural networks can go deep to gain performance, but GNNs are shallow networks, typically with three layers. This limits the performance we can achieve on large datasets.
- It is more difficult to train a model on graphs, since their structure is dynamic.
- These networks are computationally expensive, which makes scaling the model for production challenging, especially when the graph structure is large and complicated.

## Conclusion

Over the past few years, GNNs have developed into powerful and effective tools for machine learning issues in the graph domain. A fundamental overview of graph neural networks is given in this article.

After that, you can start creating the dataset that will be used to train and test the model. To understand how it functions and what it is capable of, you can also go much further and train it on a different kind of dataset.

Happy Coding!
