PaddlePaddle / PGL

Commit 60db1b4b
Authored Jun 23, 2020 by fengshikun01
Parent: fb2940a6

fix deeper_gcn

3 changed files with 18 additions and 9 deletions:

- examples/deeper_gcn/README.md (+8, -0)
- examples/deeper_gcn/train.py (+3, -3)
- pgl/message_passing.py (+7, -6)
examples/deeper_gcn/README.md

```diff
@@ -12,6 +12,14 @@ The datasets contain three citation networks: CORA, PUBMED, CITESEER. The detail
 - paddlepaddle>=1.6
 - pgl
 
+### Performance
+
+We train our models for 200 epochs and report the accuracy on the test dataset.
+
+| Dataset | Accuracy |
+| --- | --- |
+| Cora | ~77% |
+
 ### How to run
 For examples, use gpu to train gat on cora dataset.
```
examples/deeper_gcn/train.py

```diff
@@ -44,7 +44,7 @@ def main(args):
     startup_program = fluid.Program()
     test_program = fluid.Program()
     hidden_size = 64
-    num_layers = 50
+    num_layers = 7
     with fluid.program_guard(train_program, startup_program):
         gw = pgl.graph_wrapper.GraphWrapper(
@@ -103,7 +103,7 @@ def main(args):
     # get beta param
     beta_param_list = []
-    for param in train_program.global_block().all_parameters():
+    for param in fluid.io.get_program_parameter(train_program):
         if param.name.endswith("_beta"):
             beta_param_list.append(param)
@@ -119,7 +119,7 @@ def main(args):
                 return_numpy=True)
         for param in beta_param_list:
             beta = np.array(fluid.global_scope().find_var(param.name).get_tensor())
-            writer.add_scalar(param.name, beta, epoch)
+            writer.add_scalar("beta/" + param.name, beta, epoch)
         if epoch >= 3:
             time_per_epoch = 1.0 * (time.time() - t0)
```
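The train.py hunks above do two things: they collect DeeperGCN's learnable `_beta` parameters via `fluid.io.get_program_parameter(train_program)` instead of walking `train_program.global_block().all_parameters()`, and they namespace the logged scalar tags under `beta/`. A framework-free sketch of that filtering-and-tagging pattern, where `Param` is a hypothetical stand-in for a fluid parameter (only a `.name` attribute is needed):

```python
# Hypothetical stand-in for a fluid parameter; the real objects come from
# fluid.io.get_program_parameter(train_program).
class Param:
    def __init__(self, name):
        self.name = name

def collect_beta_params(params):
    # DeeperGCN's learnable softmax-temperature parameters are recognized
    # purely by the "_beta" suffix on the parameter name.
    return [p for p in params if p.name.endswith("_beta")]

params = [Param("layer_0_beta"), Param("layer_0_weight"), Param("layer_1_beta")]
beta_names = [p.name for p in collect_beta_params(params)]
print(beta_names)  # ['layer_0_beta', 'layer_1_beta']

# Prefixing the tag ("beta/" + name) groups every beta curve under one
# section in the visualization UI, as the last hunk does with add_scalar.
tags = ["beta/" + n for n in beta_names]
print(tags[0])  # beta/layer_0_beta
```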
pgl/message_passing.py

```diff
@@ -50,13 +50,14 @@ def max_recv(feat):
     return fluid.layers.sequence_pool(feat, pool_type="max")
 
 
-def lstm_recv(feat):
+def lstm_recv(hidden_dim):
     """doc"""
-    hidden_dim = 128
-    forward, _ = fluid.layers.dynamic_lstm(
-        input=feat, size=hidden_dim * 4, use_peepholes=False)
-    output = fluid.layers.sequence_last_step(forward)
-    return output
+    def lstm_recv_inside(feat):
+        forward, _ = fluid.layers.dynamic_lstm(
+            input=feat, size=hidden_dim * 4, use_peepholes=False)
+        output = fluid.layers.sequence_last_step(forward)
+        return output
+    return lstm_recv_inside
 
 
 def graphsage_sum(gw, feature, hidden_size, act, initializer, learning_rate, name):
```
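The message_passing.py change converts `lstm_recv` from a receiver that hard-codes `hidden_dim = 128` into a factory: it takes `hidden_dim` and returns the actual receive function as a closure, so callers can configure the size once and reuse the function. A minimal framework-free sketch of that pattern, where the returned dict is a stand-in for the real `dynamic_lstm` + `sequence_last_step` output:

```python
# Framework-free sketch of the closure-factory refactor; the dict return
# value is a stand-in for the real fluid LSTM output.
def lstm_recv(hidden_dim):
    def lstm_recv_inside(feat):
        # dynamic_lstm packs the four LSTM gates into one weight, so its
        # `size` argument is 4 * hidden_dim, as in the patched code.
        return {"input": feat, "lstm_size": hidden_dim * 4}
    return lstm_recv_inside

recv_fn = lstm_recv(hidden_dim=64)  # configure the receiver once...
out = recv_fn("node_feat")          # ...then call it like the old 1-arg version
print(out["lstm_size"])  # 256
```

The closure captures `hidden_dim` instead of baking in 128, which is what lets the deeper_gcn example pass its own hidden size through the message-passing API.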