A Graph Neural Network (GNN) is a class of deep learning models designed to perform inference directly on data structured as graphs, where entities are represented as nodes and their relationships as edges. The core operational mechanism is message passing (or neural message passing), where each node iteratively aggregates feature vectors from its neighboring nodes, combines this information with its own features, and updates its representation. This process allows node-level, edge-level, and graph-level representations to capture both the local graph topology and the features of connected elements. After several rounds of message passing, the refined node embeddings can be used for tasks like node classification, link prediction, or graph classification.
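To make the setup concrete, here is a minimal sketch of how a featured graph can be represented with plain numpy; the variable names (`A`, `X`, `edges`) are illustrative choices, not the API of any particular GNN library. A single matrix product `A @ X` already performs the neighbor-feature aggregation at the heart of message passing.

```python
import numpy as np

# A small undirected graph: nodes are entities, edges are relationships.
num_nodes = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Adjacency matrix A (symmetric, since the graph is undirected).
A = np.zeros((num_nodes, num_nodes))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

# Node feature matrix X: row v holds the initial feature vector h_v^(0).
X = np.arange(num_nodes * 3, dtype=float).reshape(num_nodes, 3)

# One aggregation step as a matrix product: each node sums the
# features of its neighbors.
neighbor_sum = A @ X
print(neighbor_sum[0])  # node 0's neighbors are 1 and 3 → [12. 14. 16.]
```

Real GNN layers insert learnable transformations and nonlinearities around this product, but the sparsity pattern of `A` is what restricts information flow to graph neighbors.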
Key Steps in a GNN Forward Pass:
- Initialization: Each node v starts with a feature vector h_v^(0).
- Message Function: For each edge, a message m_{uv} is computed from the sender node u to the receiver node v, often as a function of their current states: m_{uv} = M(h_u, h_v, e_{uv}), where e_{uv} is an optional edge feature.
- Aggregation: Node v aggregates all incoming messages from its neighborhood N(v) with a permutation-invariant operation such as sum, mean, or max: a_v = AGG({m_{uv} : u ∈ N(v)}).
- Update Function: The node updates its own state by combining its previous state with the aggregated message: h_v^(k+1) = UPDATE(h_v^(k), a_v).
- Readout (Optional): For graph-level tasks, a readout function pools all final node representations into a single graph-level vector.
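The steps above can be sketched end to end in numpy. This is a toy implementation under stated assumptions: the message function is a single linear map (edge features omitted), aggregation is a sum, the update is a tanh over the concatenated state and aggregate, and readout is mean pooling; the weights are random stand-ins for parameters a real model would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy directed graph: edge (u, v) sends a message from u to v.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
num_nodes, dim = 4, 8

# Initialization: h_v^(0) for every node.
h0 = rng.normal(size=(num_nodes, dim))

# Stand-in weights (training would fit these).
W_msg = rng.normal(size=(dim, dim)) * 0.1
W_upd = rng.normal(size=(2 * dim, dim)) * 0.1

def message(h_u, h_v):
    # m_{uv} = M(h_u, h_v); here only the sender state is used.
    return h_u @ W_msg

def forward(h, num_rounds=2):
    for _ in range(num_rounds):
        # Aggregation: sum of incoming messages per receiver node.
        agg = np.zeros_like(h)
        for u, v in edges:
            agg[v] += message(h[u], h[v])
        # Update: combine previous state with the aggregated message.
        h = np.tanh(np.concatenate([h, agg], axis=1) @ W_upd)
    return h

h_final = forward(h0)

# Readout: mean-pool final node embeddings into one graph-level vector.
graph_vec = h_final.mean(axis=0)
print(graph_vec.shape)  # (8,)
```

After `num_rounds` iterations, each node's embedding reflects its `num_rounds`-hop neighborhood; `h_final` serves node-level tasks directly, while `graph_vec` is the pooled representation for graph-level prediction.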