

[GNN]This is a test for new Bing

2023-03-06 15:02 Author: 北京二環幼兒園扛把子

Hello everyone, thank you for joining me today. I'm going to talk about graph neural networks, or GNNs for short. GNNs are a powerful and versatile class of deep learning models that can handle graph-structured data. Graphs are everywhere in our world, from social networks to molecular structures, from transportation systems to knowledge bases. How can we use deep learning to learn from graphs and make predictions or recommendations based on them?


In this talk, I will give you a gentle introduction to GNNs, covering some basic concepts and applications. I will also show you some interactive examples and demos that illustrate how GNNs work and what they can do.


Let's start with some definitions. What is a graph? A graph is a data structure that consists of nodes and edges. Nodes represent entities or objects, such as people, places, or things. Edges represent relationships or interactions between nodes, such as friendship, distance, or similarity. A graph can also have node features and edge features that describe the attributes of nodes and edges.


For example, this is a graph of a social network:


![social network graph](https://distill.pub/2021/gnn-intro/img/social_network.svg)


The nodes are people with names and genders as node features. The edges are friendships with weights as edge features.


A graph can be represented mathematically as G = (V,E,U), where V is the set of nodes (or vertices), E is the set of edges (or links), and U is the global attribute of the graph (such as size or density).
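To make this concrete, here is a minimal sketch in plain Python/NumPy of how a small graph like the one above could be stored as node features V, edges with edge features E, and a global attribute U. The feature values and layout are purely illustrative, not from any particular library.

```python
import numpy as np

# V: one feature vector per node (e.g., [age, gender-as-0/1] for each person).
node_features = np.array([
    [34, 0],   # node 0
    [29, 1],   # node 1
    [41, 0],   # node 2
], dtype=float)

# E: edges as (source, target) pairs, plus one feature per edge
# (e.g., friendship strength as a weight).
edges = [(0, 1), (1, 2), (0, 2)]
edge_features = np.array([[0.9], [0.4], [0.7]])

# U: a single global attribute vector for the whole graph
# (e.g., [number of nodes, number of edges]).
global_attr = np.array([len(node_features), len(edges)], dtype=float)

graph = {"V": node_features, "E": (edges, edge_features), "U": global_attr}
print(graph["V"].shape, len(graph["E"][0]), graph["U"])
```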


Now that we know what a graph is, what is a neural network? A neural network is a computational model that consists of layers of neurons (or units) that perform nonlinear transformations on input data. A neural network can learn from data by adjusting its parameters (or weights) through an optimization process called backpropagation.


For example, this is a neural network for image classification:


![neural network for image classification](https://distill.pub/2021/gnn-intro/img/neural_network.svg)


The input layer takes an image as input and converts it into a vector of pixel values. The hidden layers apply nonlinear functions such as ReLU or sigmoid to the input vector and produce intermediate representations. The output layer applies a softmax function to the final representation and produces a probability distribution over different classes.


A neural network can be represented mathematically as f(x) = W_n * g(W_(n-1) * ... * g(W_1 * x)), where x is the input vector, W_i is the weight matrix for layer i, g(.) is the activation function, and f(x) is the output vector.
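As a worked illustration of that formula, here is a tiny two-layer forward pass in NumPy; the weights are random placeholders rather than trained parameters.

```python
import numpy as np

def relu(z):
    # g(.): elementwise nonlinearity
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # input vector
W1 = rng.normal(size=(8, 4))      # first-layer weights
W2 = rng.normal(size=(3, 8))      # second-layer weights

# f(x) = W2 * g(W1 * x)
hidden = relu(W1 @ x)
output = W2 @ hidden

# softmax over the output turns it into a probability distribution over classes
probs = np.exp(output) / np.exp(output).sum()
print(probs)
```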


So far so good? Now let's see how we can combine graphs and neural networks to create GNNs.


A GNN is a neural network that operates on graphs. It takes a graph as input and produces another graph as output. The output graph can have updated node features, edge features, or global attribute. The output graph can also have a different structure from the input graph, such as added or removed nodes or edges.


For example, this is a GNN for node classification:


![GNN for node classification](https://distill.pub/2021/gnn-intro/img/node_classification.svg)


The input graph is a citation network of scientific papers with topics and citations as node features and edge features. The output graph is a new graph with updated node features that represent the learned embeddings of each paper. The output graph can be used to predict the labels of each paper based on its embedding.
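As a sketch of that last step, suppose the GNN has already produced an embedding for each paper; a simple linear classifier with a softmax can then turn each embedding into a distribution over topics. The embeddings and classifier weights below are random placeholders, not learned values.

```python
import numpy as np

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(5, 16))   # 5 papers, 16-dim learned embeddings
W = rng.normal(size=(16, 3))            # linear classifier over 3 topics

logits = embeddings @ W
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax per node
predicted_topic = probs.argmax(axis=1)
print(predicted_topic)
```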


A GNN can be represented mathematically as G' = (V',E',U') = f(G) = f(V,E,U), where G' is the output graph and f(.) is the GNN function that updates each component of the input graph.


How does a GNN function work? A GNN function consists of two main steps: message passing and aggregation. Message passing is the process of sending information along edges from one node to another. Aggregation is the process of combining information from multiple nodes into one node.


For example, this is a message passing step:


![message passing step](https://distill.pub/2021/gnn-intro/img/message_passing.svg)


Each node sends a message to its neighbors along edges. The message can be its own feature vector or a function of its feature vector and edge feature vector. Each node receives messages from its neighbors and stores them in a buffer.


This is an aggregation step:


![aggregation step](https://distill.pub/2021/gnn-intro/img/aggregation.svg)


Each node aggregates all the messages in its buffer using an aggregation function such as sum, mean, max, or attention. The aggregated message becomes the new feature vector for that node.


A GNN function can be represented mathematically as V'_i = g(E_i, V_i, U), where V'_i is the new feature vector for node i, E_i is the set of edges connected to node i, V_i is the feature vector for node i, U is the global attribute of the graph, and g(.) is a function that combines edge features, node features, and the global attribute.
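Here is a minimal sketch of one such update in NumPy, reusing the graph layout from the earlier example: each node collects messages from its neighbors (the sender's features scaled by the edge weight), aggregates them with a mean, concatenates the result with its own features and the global attribute, and passes that through a learned transformation. The specific message and combination functions are illustrative choices, and the weights are random placeholders.

```python
import numpy as np

def gnn_update(node_feats, edges, edge_feats, global_attr, W,
               relu=lambda z: np.maximum(z, 0.0)):
    """One message-passing + aggregation step: V'_i = g(E_i, V_i, U)."""
    n, d = node_feats.shape
    agg = np.zeros((n, d))                      # message buffers, one per node
    count = np.zeros(n)
    for (src, dst), w in zip(edges, edge_feats):
        # message along the edge: sender's features scaled by the edge weight
        agg[dst] += w[0] * node_feats[src]
        agg[src] += w[0] * node_feats[dst]      # treat edges as undirected here
        count[dst] += 1
        count[src] += 1
    agg = agg / np.maximum(count, 1)[:, None]   # mean aggregation
    # combine aggregated messages, the node's own features, and the global attribute
    combined = np.concatenate([agg, node_feats, np.tile(global_attr, (n, 1))], axis=1)
    return relu(combined @ W)

rng = np.random.default_rng(2)
node_feats = rng.normal(size=(3, 2))
edges = [(0, 1), (1, 2), (0, 2)]
edge_feats = np.array([[0.9], [0.4], [0.7]])
global_attr = np.array([3.0, 3.0])
W = rng.normal(size=(2 + 2 + 2, 4))             # (agg + own + global) -> new node dim
new_node_feats = gnn_update(node_feats, edges, edge_feats, global_attr, W)
print(new_node_feats.shape)                     # (3, 4)
```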


As you can see, a GNN function updates each node feature based on its local neighborhood information. This allows a GNN to capture both local and global patterns in a graph. A GNN can also be stacked with multiple layers to increase its expressive power and learn higher-order features.


For example, this is a two-layer GNN:


![two-layer GNN](https://distill.pub/2021/gnn-intro/img/two_layer_gnn.svg)


The first layer performs message passing and aggregation on the input graph and produces an intermediate graph. The second layer performs message passing and aggregation on the intermediate graph and produces an output graph.


A two-layer GNN can be represented mathematically as G' = f_2(f_1(G)), where f_1(.) and f_2(.) are two different GNN functions.
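Continuing the sketch above, stacking two layers just means feeding the node features produced by the first gnn_update call into a second one with its own weights (again random placeholders):

```python
import numpy as np

rng = np.random.default_rng(3)
node_feats = rng.normal(size=(3, 2))            # same toy graph as before
edges = [(0, 1), (1, 2), (0, 2)]
edge_feats = np.array([[0.9], [0.4], [0.7]])
global_attr = np.array([3.0, 3.0])

W1 = rng.normal(size=(2 + 2 + 2, 4))            # layer 1: 2-dim node features -> 4-dim
W2 = rng.normal(size=(4 + 4 + 2, 4))            # layer 2: 4-dim node features -> 4-dim

# G' = f_2(f_1(G)): the second layer reads the node features produced by the first,
# so after two layers each node has seen information from its two-hop neighborhood.
h1 = gnn_update(node_feats, edges, edge_feats, global_attr, W1)
h2 = gnn_update(h1, edges, edge_feats, global_attr, W2)
print(h1.shape, h2.shape)                       # (3, 4) (3, 4)
```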


There are many variants of GNNs that differ in how they define their message passing functions, aggregation functions, or output functions. Some examples are Graph Convolutional Networks (GCN), Graph Attention Networks (GAT), GraphSAGE, Graph Isomorphism Networks (GIN), etc. Each variant has its own advantages and disadvantages depending on the task and data domain.
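As one concrete point in that design space, a GCN-style layer can be sketched as multiplying the node features by a symmetrically normalized adjacency matrix (with self-loops added) before a learned linear transform and nonlinearity, roughly H' = relu(D^-1/2 (A + I) D^-1/2 H W). The toy adjacency matrix and weights below are illustrative placeholders.

```python
import numpy as np

def gcn_layer(A, H, W, relu=lambda z: np.maximum(z, 0.0)):
    """One GCN-style layer: H' = relu(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    return relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy graph: 3 nodes, undirected edges (0-1), (1-2), (0-2)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
rng = np.random.default_rng(4)
H = rng.normal(size=(3, 2))      # input node features
W = rng.normal(size=(2, 4))      # learnable weights (random placeholder)
print(gcn_layer(A, H, W).shape)  # (3, 4)
```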


GNNs have been applied to various domains such as natural language processing, computer vision, social network analysis, recommender systems, bioinformatics, etc. GNNs have shown superior performance over traditional methods that rely on hand-crafted features or graph kernels.


In summary, GNNs are a powerful class of neural networks that can operate on graph-structured data. They can learn both local and global patterns in a graph by updating each node feature based on its neighborhood information. They can also be stacked with multiple layers to increase their expressive power and learn higher-order features. GNNs have been applied to various domains and tasks that involve graph-structured data.


Thank you for your attention. I hope this talk has given you a gentle introduction to GNNs and inspired you to explore more about this fascinating topic.




