Commit 438b3f4c
Authored July 30, 2020 by Yelrose

    add gcnii

Parent: b809f488
Showing 4 changed files with 119 additions and 1 deletion (+119 −1)
examples/citation_benchmark/config/appnp.yaml   +1  −1
examples/citation_benchmark/config/gcnii.yaml   +9  −0
examples/citation_benchmark/model.py            +39 −0
pgl/layers/conv.py                              +70 −0
examples/citation_benchmark/config/appnp.yaml

 model_name: APPNP
 k_hop: 10
 alpha: 0.1
-num_layer: 2
+num_layer: 1
 learning_rate: 0.01
 dropout: 0.5
 hidden_size: 64
examples/citation_benchmark/config/gcnii.yaml (new file, mode 100644)

model_name: GCNII
k_hop: 64
alpha: 0.1
num_layer: 1
learning_rate: 0.01
dropout: 0.6
hidden_size: 64
weight_decay: 0.0005
edge_dropout: 0.0
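These settings reach the `GCNII` model class in `model.py` below through `config.get(key, default)` lookups. One detail worth flagging, sketched here under the assumption that the YAML is loaded into a plain Python dict: the config file spells the key `num_layer`, while the model code reads `num_layers`, so that lookup falls back to its default.

```python
# Sketch (assumption: the YAML above is loaded into a plain dict).
config = {
    "model_name": "GCNII",
    "k_hop": 64,
    "alpha": 0.1,
    "num_layer": 1,        # note the spelling: no trailing "s"
    "learning_rate": 0.01,
    "dropout": 0.6,
    "hidden_size": 64,
    "weight_decay": 0.0005,
    "edge_dropout": 0.0,
}

# model.py reads "num_layers" (with an "s"), so this lookup misses the
# YAML key and returns the default of 1 -- harmless here only because
# the configured value is also 1.
num_layers = config.get("num_layers", 1)
k_hop = config.get("k_hop", 64)
print(num_layers, k_hop)  # → 1 64
```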
examples/citation_benchmark/model.py

@@ -154,3 +154,42 @@ class SGC(object):
        feature = L.fc(feature, self.num_class, act=None,
                       bias_attr=False, name="output")
        return feature


class GCNII(object):
    """Implement of GCNII"""

    def __init__(self, config, num_class):
        self.num_class = num_class
        self.num_layers = config.get("num_layers", 1)
        self.hidden_size = config.get("hidden_size", 64)
        self.dropout = config.get("dropout", 0.6)
        self.alpha = config.get("alpha", 0.1)
        self.lambda_l = config.get("lambda_l", 0.5)
        self.k_hop = config.get("k_hop", 64)
        self.edge_dropout = config.get("edge_dropout", 0.0)

    def forward(self, graph_wrapper, feature, phase):
        if phase == "train":
            edge_dropout = 0
        else:
            edge_dropout = self.edge_dropout

        for i in range(self.num_layers):
            feature = L.fc(feature, self.hidden_size, act="relu",
                           name="lin%s" % i)
            feature = L.dropout(feature, self.dropout,
                                dropout_implementation='upscale_in_train')

        feature = conv.gcnii(graph_wrapper,
                             feature=feature,
                             name="gcnii",
                             activation="relu",
                             lambda_l=self.lambda_l,
                             alpha=self.alpha,
                             dropout=self.dropout,
                             k_hop=self.k_hop)

        feature = L.fc(feature, self.num_class, act=None, name="output")
        return feature
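The `lambda_l` value handed to `conv.gcnii` above sets the identity-mapping schedule: as `pgl/layers/conv.py` shows, each hop uses `beta_i = log(lambda_l / (i + 1) + 1)`, which shrinks toward zero with depth so deeper hops apply less of the learned transform. A standalone sketch of that schedule:

```python
import math

lambda_l = 0.5  # the default used in the diff
betas = [math.log(lambda_l / (i + 1) + 1) for i in range(6)]

# The schedule decays monotonically: early hops lean on the learned
# fc transform, later hops mostly pass features through unchanged.
assert all(b1 > b2 for b1, b2 in zip(betas, betas[1:]))
print([round(b, 3) for b in betas])  # → [0.405, 0.223, 0.154, 0.118, 0.095, 0.08]
```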
pgl/layers/conv.py

@@ -19,6 +19,7 @@ import paddle.fluid as fluid
import paddle.fluid.layers as L
from pgl.utils import paddle_helper
from pgl import message_passing
import numpy as np

__all__ = ['gcn', 'gat', 'gin', 'gaan', 'gen_conv', 'appnp']

@@ -413,6 +414,7 @@ def get_norm(indegree):
    norm = L.pow(float_degree, factor=-0.5)
    return norm


def appnp(gw, feature, edge_dropout=0, alpha=0.2, k_hop=10):
    """Implementation of APPNP of "Predict then Propagate: Graph Neural Networks
    meet Personalized PageRank" (ICLR 2019).

@@ -453,3 +455,71 @@ def appnp(gw, feature, edge_dropout=0, alpha=0.2, k_hop=10):
        feature = feature * (1 - alpha) + h0 * alpha
    return feature


def gcnii(gw,
          feature,
          name,
          activation=None,
          alpha=0.5,
          lambda_l=0.5,
          k_hop=1,
          dropout=0.5,
          is_test=False):
    """Implementation of GCNII of "Simple and Deep Graph Convolutional Networks"

    paper: https://arxiv.org/pdf/2007.02133.pdf

    Args:
        gw: Graph wrapper object (:code:`StaticGraphWrapper` or :code:`GraphWrapper`)
        feature: A tensor with shape (num_nodes, feature_size).
        activation: The activation for the output.
        k_hop: Number of layers for gcnii.
        lambda_l: The hyperparameter of lambda in the paper.
        alpha: The hyperparameter of alpha in the paper.
        dropout: Feature dropout rate.
        is_test: train / test phase.

    Return:
        A tensor with shape (num_nodes, hidden_size)
    """

    def send_src_copy(src_feat, dst_feat, edge_feat):
        feature = src_feat["h"]
        return feature

    h0 = feature
    ngw = gw
    norm = get_norm(ngw.indegree())
    hidden_size = feature.shape[-1]

    for i in range(k_hop):
        beta_i = np.log(1.0 * lambda_l / (i + 1) + 1)
        feature = L.dropout(
            feature,
            dropout_prob=dropout,
            is_test=is_test,
            dropout_implementation='upscale_in_train')

        feature = feature * norm
        msg = gw.send(send_src_copy, nfeat_list=[("h", feature)])
        feature = gw.recv(msg, "sum")
        feature = feature * norm

        # appnp
        feature = feature * (1 - alpha) + h0 * alpha

        feature_transed = L.fc(feature, hidden_size,
                               act=None, bias_attr=False,
                               name=name + "_%s_w1" % i)
        feature = feature_transed * beta_i + feature * (1 - beta_i)

    if activation is not None:
        feature = getattr(L, activation)(feature)

    return feature
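The send/recv loop in `gcnii` computes, per hop, a symmetrically normalized propagation (`feature * norm`, neighbor sum via `recv(msg, "sum")`, `feature * norm` again), then the APPNP-style initial residual, then the identity-mapped transform. A minimal dense NumPy sketch of that propagation, under stated simplifications: no dropout, and identity matrices standing in for the learned `L.fc` weights (which makes the beta blend a no-op, but keeps the loop structure visible):

```python
import numpy as np

def gcnii_dense(adj, h0, alpha=0.1, lambda_l=0.5, k_hop=4):
    """Dense sketch of the gcnii propagation loop (simplifications:
    no dropout; learned fc weights replaced by the identity)."""
    deg = adj.sum(axis=1)
    norm = np.where(deg > 0, deg, 1.0) ** -0.5  # D^{-1/2}, zeros guarded
    h = h0
    for i in range(k_hop):
        beta_i = np.log(lambda_l / (i + 1) + 1)
        # normalize -> neighbor sum (the "sum" recv) -> normalize
        h = (adj @ (h * norm[:, None])) * norm[:, None]
        # initial residual, the "# appnp" step in the diff
        h = (1 - alpha) * h + alpha * h0
        # identity mapping: with W = I the transformed branch equals h,
        # so this blend is a no-op here; a real run uses the fc output
        h = beta_i * h + (1 - beta_i) * h
    return h

# Two nodes joined by one edge, identity features.
adj = np.array([[0.0, 1.0], [1.0, 0.0]])
out = gcnii_dense(adj, np.eye(2))
# Row sums stay 1: each hop is a convex combination of stochastic maps.
assert np.allclose(out.sum(axis=1), 1.0)
```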