EEG Sensors Relational Analysis and Adjacency Matrix Generation


Description

Depression Detection Using Advanced Graphical Deep Learning – Part 2 (Adjacency Matrix Generation)

Introduction

Electroencephalography (EEG) is widely used in clinical settings for disease diagnosis because of its high temporal resolution, non-invasiveness, and low data-acquisition cost. Research in the psychological and cognitive sciences has shown that EEG signals reflect most psychological and cognitive functions. Prior work has typically applied dimensionality reduction or extracted frequency-band signals as features, and then used machine learning algorithms as classifiers for specific tasks. However, the efficacy of these methods depends heavily on the quality of the chosen features, and the classifier is trained independently of the feature-extraction stage, so the two are never optimized jointly.

Prior studies have applied deep learning methods to EEG signals for purposes such as emotion recognition, motor imagery classification, and disease diagnosis. Several studies have also attempted to extract effective biomarkers from EEG data for detecting depression. The deep learning techniques used can be roughly divided into two categories: CNN-based (Convolutional Neural Network) and GNN-based (Graph Neural Network). CNN-based algorithms treat EEG signals as images and employ diverse convolutional kernels to extract features from them.

However, they do not fully account for the interconnections between channels. Conversely, GNN-based algorithms convert EEG signals into graph-structured data and use pre-computed adjacency matrices to represent the connections between channels. These approaches model the potential spatial structural relationships among channels, making it easier to extract features that are connected across channels.

When diagnosing depression with a GNN, the adjacency matrix usually has to be pre-computed. However, a fixed matrix fails to capture the differences in brain-network connectivity between individuals with depression and healthy individuals. To address this problem, we propose an Adaptive Graph Topology Generation (AGTG) module, which adaptively creates the connections between the nodes of the network by generating an adjacency matrix directly from the EEG signals. The objective of our approach is to achieve connectivity that is both flexible and accurate by combining the spatial distances and the correlations among the channels.

Challenges

The current GNN-based algorithms continue to encounter the following obstacles:

  • Firstly, brain networks differ across individuals, and the neural systems of the human brain are exceedingly intricate.
  • Secondly, existing methods cannot precisely construct comprehensive brain-network topological structures, especially when it comes to modeling the dynamic changes of the brain network.
  • Thirdly, current research has not incorporated the temporal dependency information of brain networks.

Requirement for Graphical Data Conversion

As noted above, pre-computing a fixed adjacency matrix cannot capture inter-individual differences in brain-network connectivity, which is why the AGTG module generates the adjacency matrix adaptively from the EEG signals, combining spatial distance and cross-channel correlations. On top of this, we combine a Graph Neural Network (GNN) with a Gated Recurrent Unit (GRU) to capture the spatial and temporal relationships in the EEG signals: the GNN aggregates node features over the graph to produce spatial correlations, while the GRU tracks the dynamic changes of the brain network to obtain time-series correlations.

Adjacency Matrix

The adjacency matrix is the defining component of graph-structured data. Prior work on the topological connectivity of brain networks falls broadly into two groups: adjacency matrices derived from pre-established criteria, and adjacency matrices learned by a neural network. In the first group, the computation formula is constructed from specified methodologies and prior knowledge in fields such as biomedicine, and the adjacency matrix is defined explicitly before the deep learning model is trained. In the second group, learnable modules within the model construct the adjacency matrix dynamically. The Adaptive Graph Topology Generation (AGTG) module follows the second approach: it builds the topological connections of the graph by generating an adjacency matrix from the EEG signals in an adaptive manner, integrating both the spatial distances and the correlations between channels, with the goal of more flexible and precise connectivity.
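As a concrete illustration of the first, criteria-based group, the sketch below builds a prior adjacency matrix from the spatial distances between electrodes; this is the kind of matrix the AGTG module later refines. It is a minimal example rather than part of the original code: the electrode coordinates and the Gaussian bandwidth sigma are illustrative assumptions.

import torch

def distance_adjacency(coords, sigma=1.0):
    # coords: (E, 3) tensor of hypothetical 3-D electrode positions.
    # Returns an (E, E) matrix where nearby electrodes get weights near 1.
    dist = torch.cdist(coords, coords)          # pairwise Euclidean distances
    A = torch.exp(-dist**2 / (2 * sigma**2))    # Gaussian kernel on distance
    A.fill_diagonal_(0.0)                       # the models below add the identity themselves
    return A

# Example with 19 hypothetical electrode positions
coords = torch.randn(19, 3)
adjacency_matrix = distance_adjacency(coords, sigma=1.0)
print(adjacency_matrix.shape)                   # torch.Size([19, 19])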

Code Description

The AGTG_Model class implements the adaptive generation of the graph adjacency matrix, as shown in Figure 1. It initializes the learnable parameters P, Q, b, and d, where P and Q are matrices, b is a vector, and d is a scalar. The D_Matrix method calculates the inverse square root of the degree matrix D for a given adjacency matrix A. The forward method computes the correlation term |PXQ + b| from the feature matrix X, adds the scaled input adjacency matrix d·A, and passes the sum through a ReLU activation. The result is then normalized with the degree matrix D of A+I (where I is the identity matrix), yielding a symmetrically normalized adjacency matrix A_norm. This module can feed downstream tasks such as node classification or link prediction on graph-structured data.

import torch
import torch.nn as nn

class AGTG_Model(nn.Module):
    def __init__(self, nodes_dim, node_features_dim):
        super().__init__()
        self.E = nodes_dim           # number of graph nodes (EEG channels)
        self.F = node_features_dim   # number of features per node
        # Learnable parameters of the adaptive topology generator
        self.P = nn.Parameter(torch.randn(self.E, self.E), requires_grad=True)
        self.Q = nn.Parameter(torch.randn(self.F, self.E), requires_grad=True)
        self.b = nn.Parameter(torch.randn(self.E, 1), requires_grad=True)
        self.d = nn.Parameter(torch.randn(1), requires_grad=True)

    def D_Matrix(self, A):
        # Inverse square root of the degree matrix of A
        d = torch.sum(A, 1)
        d_inv_sqrt = d ** (-0.5)
        D_inv_sqrt = torch.diag(d_inv_sqrt)
        return D_inv_sqrt

    def forward(self, X, adjacency_matrix):
        I = torch.eye(self.E)
        # Correlation term learned from the node features: |PXQ + b|
        PX = torch.matmul(self.P, X)
        PXQ = torch.matmul(PX, self.Q)
        A_cor = torch.abs(PXQ + self.b)
        # Combine with the scaled distance-based prior adjacency
        A = nn.functional.relu(A_cor + adjacency_matrix * self.d)
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        A_I = A + I
        D = self.D_Matrix(A_I)
        A_norm = torch.matmul(D, torch.matmul(A_I, D))
        return A_norm

Figure 1: Graph Neural Network
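As a quick sanity check of the module above, the following snippet (not part of the original code; the 19-channel, 5-feature dimensions are illustrative) passes a random feature matrix and the distance-based prior from the earlier sketch through AGTG_Model:

agtg = AGTG_Model(nodes_dim=19, node_features_dim=5)
X = torch.randn(19, 5)              # node features for one time window
A_norm = agtg(X, adjacency_matrix)  # prior adjacency from the distance sketch
print(A_norm.shape)                 # torch.Size([19, 19])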

The GCN_Layer class is a PyTorch module implementing a Graph Convolutional Network (GCN) layer. It initializes a weight matrix W of dimensions input_dim by output_dim using Xavier uniform initialization and sets a dropout probability. In the forward method, it first computes the product of the adjacency matrix, the input features x, and the weight matrix W, then applies the ReLU activation function to obtain the hidden representation. Finally, it applies dropout with the specified probability and returns the result. This layer can be used to learn node representations in a graph. The code is shown in Figure 2.

class GCN_Layer(nn.Module):
    def __init__(self, input_dim, output_dim, dropout_prob=0.5):
        super().__init__()
        self.W = nn.Parameter(torch.FloatTensor(input_dim, output_dim))
        nn.init.xavier_uniform_(self.W)
        self.dropout_prob = dropout_prob

    def forward(self, x, adjacency_matrix):
        # Compute the input to the layer: AXW
        AXW = torch.matmul(adjacency_matrix, torch.matmul(x, self.W))

        # Apply the ReLU activation function: ReLU(AXW)
        hidden_rep = torch.relu(AXW)

        # Apply dropout (active only in training mode)
        hidden_rep = nn.functional.dropout(hidden_rep, p=self.dropout_prob, training=self.training)

        return hidden_rep

Figure 2: Layers of Graph Neural Network

A list of GCN layers is initialized, as shown in Figure 3; each layer is created by the _create_gcn_layer method. The first layer takes the node features as input, and each subsequent layer takes the output of the previous layer. In the forward method, the model applies each GCN layer to the input features and the adaptive adjacency matrix, appends each layer's output to a list, and concatenates the outputs of all layers along the feature dimension, so the returned tensor has num_layers * hidden_dim features per node. This model can be used for tasks like node classification or link prediction on graph-structured data.

class GCN_Model(nn.Module):
    def __init__(self, node_features_dim, hidden_dim, num_layers=2, dropout_prob=0.5):
        super().__init__()
        self.num_layers = num_layers
        self.dropout_prob = dropout_prob

        # First layer maps node features to hidden_dim; later layers are hidden -> hidden
        self.gcn_layers = nn.ModuleList([
            self._create_gcn_layer(node_features_dim, hidden_dim) if i == 0
            else self._create_gcn_layer(hidden_dim, hidden_dim)
            for i in range(self.num_layers)
        ])

    def _create_gcn_layer(self, input_dim, output_dim):
        return GCN_Layer(input_dim, output_dim, dropout_prob=self.dropout_prob)

    def forward(self, features, adaptive_adjacency_matrix):
        x = features
        layers = []
        for i in range(self.num_layers):
            x = self.gcn_layers[i](x, adaptive_adjacency_matrix)
            layers.append(x)
        # Concatenate every layer's output along the feature dimension:
        # the result has num_layers * hidden_dim features per node
        y = torch.cat(layers, dim=1)
        return y

Figure 3: GCN Model
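The short check below (dimensions again illustrative, not from the original code) confirms the concatenation behaviour: with num_layers=2 the per-node feature dimension grows from hidden_dim to 2 * hidden_dim, which is exactly the size used for the GRU hidden state later on.

gcn = GCN_Model(node_features_dim=5, hidden_dim=5, num_layers=2)
out = gcn(X, A_norm)  # X and A_norm from the AGTG example above
print(out.shape)      # torch.Size([19, 10]), i.e. (E, num_layers * hidden_dim)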

The GRU_Cell class implements a Gated Recurrent Unit (GRU) cell, as shown in Figure 4. It initializes weight matrices and bias vectors for the reset gate (Wr_x, Wr_h, br), the update gate (Wu_x, Wu_h, bu), and the candidate hidden state (Wc_x, Wc_h, bc). In the forward method, it computes the reset gate r_t_0, the update gate u_t_0, and the candidate hidden state c_t_0 from the input features GCN_x_t_0 and the previous hidden state h_t_1. It then computes the current hidden state h_t_0 as a combination of the previous hidden state and the candidate hidden state, controlled by the update gate. This GRU cell can be used in recurrent neural networks for tasks like node classification or sequence prediction on graph-structured data.

class GRU_Cell(nn.Module):
    def __init__(self, nodes_dim):
        super().__init__()
        E = nodes_dim

        # Reset gate parameters
        self.Wr_x = nn.Parameter(torch.randn(E, E), requires_grad=True)
        self.Wr_h = nn.Parameter(torch.randn(E, E), requires_grad=True)
        self.br   = nn.Parameter(torch.randn(E, 1), requires_grad=True)

        # Update gate parameters
        self.Wu_x = nn.Parameter(torch.randn(E, E), requires_grad=True)
        self.Wu_h = nn.Parameter(torch.randn(E, E), requires_grad=True)
        self.bu   = nn.Parameter(torch.randn(E, 1), requires_grad=True)

        # Candidate hidden state parameters
        self.Wc_x = nn.Parameter(torch.randn(E, E), requires_grad=True)
        self.Wc_h = nn.Parameter(torch.randn(E, E), requires_grad=True)
        self.bc   = nn.Parameter(torch.randn(E, 1), requires_grad=True)

    def forward(self, features_t_0, hidden_state_t_1):
        GCN_x_t_0, h_t_1 = features_t_0, hidden_state_t_1
        # Reset and update gates
        r_t_0 = torch.sigmoid(torch.matmul(self.Wr_x, GCN_x_t_0) + torch.matmul(self.Wr_h, h_t_1) + self.br)
        u_t_0 = torch.sigmoid(torch.matmul(self.Wu_x, GCN_x_t_0) + torch.matmul(self.Wu_h, h_t_1) + self.bu)
        # Candidate hidden state, with the reset gate masking the previous state
        c_t_0 = torch.tanh(torch.matmul(self.Wc_x, GCN_x_t_0) + torch.matmul(self.Wc_h, torch.mul(r_t_0, h_t_1)) + self.bc)
        # Interpolate between the previous state and the candidate
        h_t_0 = torch.mul(u_t_0, h_t_1) + torch.mul(1 - u_t_0, c_t_0)
        return h_t_0

Figure 4: Gated Recurrent Unit Module
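For reference, the cell implements the standard GRU update equations, written here in the notation of the code above (x_t is the GCN output at step t):

$$
\begin{aligned}
r_t &= \sigma\left(W_{r,x}\,x_t + W_{r,h}\,h_{t-1} + b_r\right)\\
u_t &= \sigma\left(W_{u,x}\,x_t + W_{u,h}\,h_{t-1} + b_u\right)\\
c_t &= \tanh\left(W_{c,x}\,x_t + W_{c,h}\,(r_t \odot h_{t-1}) + b_c\right)\\
h_t &= u_t \odot h_{t-1} + (1 - u_t) \odot c_t
\end{aligned}
$$

where $\sigma$ is the sigmoid function and $\odot$ denotes elementwise multiplication.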

The GRU_Model class combines the Adaptive Graph Topology Generation (AGTG) module, the Graph Convolutional Network (GCN), and a Gated Recurrent Unit (GRU). It initializes the AGTG layer, the GCN model, the GRU cell, and the initial hidden state H_0. The forward method iterates over the sequence: at each step it computes an adaptive adjacency matrix with the AGTG layer, applies the GCN model to the input features and this adaptive adjacency matrix, and feeds the GCN output together with the previous hidden state into the GRU cell to obtain the current hidden state. The final hidden state and the last adaptive adjacency matrix are returned. This model can be used for tasks like node classification or sequence prediction on graph-structured data. The code is shown in Figure 5.

class GRU_Model(nn.Module):
    def __init__(self, nodes_Dim, node_Features_Dim, hidden_dim, num_layers, dropout_prob, Seq_len):
        super().__init__()
        self.Seq_len = Seq_len
        self.agtg_Adj_Matrix_layer = self._create_AGTG_Adj_Matrix(nodes_Dim, node_Features_Dim)
        self.gcn_model_layer = self._create_GCN_Model(node_Features_Dim, hidden_dim, num_layers, dropout_prob)
        self.gru_cell_layer = self._create_GRU_Cell(nodes_Dim)
        # Initial hidden state; 2 * hidden_dim matches the GCN output when num_layers == 2
        self.H_0 = torch.zeros(nodes_Dim, 2 * hidden_dim)

    def _create_AGTG_Adj_Matrix(self, nodes_dim, node_features_dim):
        return AGTG_Model(nodes_dim, node_features_dim)

    def _create_GCN_Model(self, node_features_dim, hidden_dim, num_layers, dropout_prob):
        return GCN_Model(node_features_dim, hidden_dim, num_layers, dropout_prob)

    def _create_GRU_Cell(self, nodes_dim):
        return GRU_Cell(nodes_dim)

    def forward(self, input_seq, adjacency_matrix):
        x, adj = input_seq, adjacency_matrix
        steps, _, _ = x.shape
        assert self.Seq_len == steps

        H = self.H_0
        for i in range(self.Seq_len):
            # Generate the adaptive adjacency for this time step, ...
            adaptive_adj_i = self.agtg_Adj_Matrix_layer(x[i], adj)
            # ... aggregate spatial information with the GCN, ...
            gcn_Gi = self.gcn_model_layer(x[i], adaptive_adj_i)
            # ... and update the temporal hidden state with the GRU cell
            H = self.gru_cell_layer(gcn_Gi, H)
        return H, adaptive_adj_i

Figure 5: Combined AGTG-GCN-GRU Model
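A minimal usage sketch for the combined model follows; the sequence length of 4 and the remaining dimensions are illustrative assumptions rather than values from the original code:

seq = torch.randn(4, 19, 5)  # (Seq_len, E, F): four time windows of 19-channel features
gru_model = GRU_Model(nodes_Dim=19, node_Features_Dim=5, hidden_dim=5,
                      num_layers=2, dropout_prob=0.5, Seq_len=4)
H, A_adaptive = gru_model(seq, adjacency_matrix)
print(H.shape, A_adaptive.shape)  # torch.Size([19, 10]) torch.Size([19, 19])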

The GraphTopologyMaxPooling_Model combines the GRU model with a max pooling operation over the graph nodes, as shown in Figure 6. It initializes the GRU model and the parameters W, b, and W_logit. In the forward method, it first applies the GRU model to the input sequence and adjacency matrix to obtain a hidden state H and an adaptive adjacency matrix A. It then computes a score for each node, selects the highest-scoring node per feature, and builds a mask from these indices. Multiplying the mask with the hidden state gives a graph-level representation V_graph; summing V_graph over the nodes and multiplying by W_logit yields a scalar logit, which is returned. This model can be used for graph classification tasks.

class GraphTopologyMaxPooling_Model(nn.Module):
    def __init__(self, nodes_Dim, node_Features_Dim, hidden_dim, num_layers, dropout_prob, Seq_len):
        super().__init__()
        self.E = nodes_Dim
        self.W = nn.Parameter(torch.randn(2 * hidden_dim, 2 * hidden_dim), requires_grad=True)
        self.b = nn.Parameter(torch.randn(nodes_Dim, 1), requires_grad=True)
        self.W_logit = nn.Parameter(torch.randn(2 * hidden_dim), requires_grad=True)
        self.gru_Model = self._create_GRU_Model(nodes_Dim, node_Features_Dim, hidden_dim, num_layers, dropout_prob, Seq_len)

    def _create_GRU_Model(self, nodes_Dim, node_Features_Dim, hidden_dim, num_layers, dropout_prob, Seq_len):
        return GRU_Model(nodes_Dim, node_Features_Dim, hidden_dim, num_layers, dropout_prob, Seq_len)

    def forward(self, input_seq, adjacency_matrix):
        H, A = self.gru_Model(input_seq, adjacency_matrix)
        # Score every node: ReLU(AHW + b)
        AHW_b = torch.matmul(torch.matmul(A, H), self.W) + self.b
        S_node = torch.relu(AHW_b)
        # Max pooling over nodes: keep the highest-scoring node per feature
        N_idx = torch.argmax(S_node, dim=0)
        mask_n_idx = torch.zeros(self.E, N_idx.shape[0]).scatter_(0, N_idx.unsqueeze(0), 1.)
        V_graph = torch.mul(mask_n_idx, H)
        # Sum the pooled representation over nodes and project to a scalar logit
        logit = torch.matmul(torch.sum(V_graph, dim=0), self.W_logit)
        return logit

Figure 6: Graph Topology Max Pooling Model

The Main_Model class is a PyTorch module that handles batch processing for graph classification, as shown in Figure 7. It initializes a single GraphTopologyMaxPooling_Model and applies it to every graph in the batch. In the forward method, it iterates over the batch, applies the pooling model to each graph's input sequence and adjacency matrix to obtain a logit, and stacks the logits along the batch dimension into a logit vector, which is returned.

class Main_Model(nn.Module):
    def __init__(self, nodes_Dim, node_Features_Dim, hidden_dim, num_layers, dropout_prob, Seq_len, batch_size):
        super().__init__()
        self.batch_size = batch_size
        self.maxPooling_Model = self._create_MaxPooling_Model(nodes_Dim, node_Features_Dim, hidden_dim, num_layers, dropout_prob, Seq_len)

    def _create_MaxPooling_Model(self, nodes_Dim, node_Features_Dim, hidden_dim, num_layers, dropout_prob, Seq_len):
        return GraphTopologyMaxPooling_Model(nodes_Dim, node_Features_Dim, hidden_dim, num_layers, dropout_prob, Seq_len)

    def forward(self, input_seq_batch, adjacency_matrix_batch):
        # Allow a smaller final batch; both inputs must agree on the batch size
        batch_size = input_seq_batch.shape[0]
        assert adjacency_matrix_batch.shape[0] == batch_size

        # Apply the pooling model to each graph in the batch and stack the logits
        logits = [self.maxPooling_Model(input_seq_batch[i], adjacency_matrix_batch[i])
                  for i in range(batch_size)]
        logit_vector = torch.stack(logits, dim=0)

        return logit_vector

Figure 7: Batch Processing model
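Putting the pieces together, the sketch below runs one hypothetical training step for binary depressed-versus-healthy classification. Everything here (batch size, labels, optimizer settings, the reuse of the distance-based prior) is an illustrative assumption, not part of the original code:

# Hypothetical batch: 8 subjects, 4 time windows, 19 channels, 5 features each
seq_batch = torch.randn(8, 4, 19, 5)
adj_batch = adjacency_matrix.expand(8, 19, 19)   # shared distance-based prior
labels = torch.randint(0, 2, (8,)).float()       # 0 = healthy, 1 = depressed

model = Main_Model(nodes_Dim=19, node_Features_Dim=5, hidden_dim=5,
                   num_layers=2, dropout_prob=0.5, Seq_len=4, batch_size=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

optimizer.zero_grad()
logits = model(seq_batch, adj_batch)             # shape (8,)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(loss.item())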


Other Related Product Links

Python Code for Depression Detection Using Advanced Graphical Deep Learning – Part 1 (Signal Processing)

Python Code for Depression Detection Using Advanced Graphical Deep Learning – Part 3 (Features Extraction)

Python Code for Depression Detection Using Advanced Graphical Deep Learning – Part 4 (Graph Topology)
