BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)

Commit defbaff2
Authored May 24, 2018 by Kexin Zhao; committed by GitHub on May 24, 2018.

Merge pull request #10886 from kexinzhao/label_semantic_roles_lod

Simplify label_semantic_roles book example using new LoDTensor API

Parents: b3f650d1, 6133728a
Showing 2 changed files with 58 additions and 58 deletions:

  python/paddle/fluid/tests/book/high-level-api/label_semantic_roles/test_label_semantic_roles_newapi.py  (+29, -18)
  python/paddle/fluid/tests/book/test_label_semantic_roles.py  (+29, -40)
python/paddle/fluid/tests/book/high-level-api/label_semantic_roles/test_label_semantic_roles_newapi.py

```diff
@@ -202,24 +202,35 @@ def infer(use_cuda, inference_program, save_path):
     inferencer = fluid.Inferencer(
         inference_program, param_path=save_path, place=place)
 
-    def create_random_lodtensor(lod, place, low, high):
-        data = np.random.random_integers(
-            low, high, [lod[-1], 1]).astype("int64")
-        res = fluid.LoDTensor()
-        res.set(data, place)
-        res.set_lod([lod])
-        return res
-
-    # Create an input example
-    lod = [0, 4, 10]
-    word = create_random_lodtensor(lod, place, low=0, high=WORD_DICT_LEN - 1)
-    pred = create_random_lodtensor(lod, place, low=0, high=PRED_DICT_LEN - 1)
-    ctx_n2 = create_random_lodtensor(lod, place, low=0, high=WORD_DICT_LEN - 1)
-    ctx_n1 = create_random_lodtensor(lod, place, low=0, high=WORD_DICT_LEN - 1)
-    ctx_0 = create_random_lodtensor(lod, place, low=0, high=WORD_DICT_LEN - 1)
-    ctx_p1 = create_random_lodtensor(lod, place, low=0, high=WORD_DICT_LEN - 1)
-    ctx_p2 = create_random_lodtensor(lod, place, low=0, high=WORD_DICT_LEN - 1)
-    mark = create_random_lodtensor(lod, place, low=0, high=MARK_DICT_LEN - 1)
+    # Setup inputs by creating LoDTensors to represent sequences of words.
+    # Here each word is the basic element of these LoDTensors and the shape of
+    # each word (base_shape) should be [1] since it is simply an index to
+    # look up for the corresponding word vector.
+    # Suppose the length_based level of detail (lod) info is set to [[3, 4, 2]],
+    # which has only one lod level. Then the created LoDTensors will have only
+    # one higher level structure (sequence of words, or sentence) than the basic
+    # element (word). Hence the LoDTensor will hold data for three sentences of
+    # length 3, 4 and 2, respectively.
+    # Note that lod info should be a list of lists.
+    lod = [[3, 4, 2]]
+    base_shape = [1]
+    # The range of random integers is [low, high]
+    word = fluid.create_random_int_lodtensor(
+        lod, base_shape, place, low=0, high=WORD_DICT_LEN - 1)
+    pred = fluid.create_random_int_lodtensor(
+        lod, base_shape, place, low=0, high=PRED_DICT_LEN - 1)
+    ctx_n2 = fluid.create_random_int_lodtensor(
+        lod, base_shape, place, low=0, high=WORD_DICT_LEN - 1)
+    ctx_n1 = fluid.create_random_int_lodtensor(
+        lod, base_shape, place, low=0, high=WORD_DICT_LEN - 1)
+    ctx_0 = fluid.create_random_int_lodtensor(
+        lod, base_shape, place, low=0, high=WORD_DICT_LEN - 1)
+    ctx_p1 = fluid.create_random_int_lodtensor(
+        lod, base_shape, place, low=0, high=WORD_DICT_LEN - 1)
+    ctx_p2 = fluid.create_random_int_lodtensor(
+        lod, base_shape, place, low=0, high=WORD_DICT_LEN - 1)
+    mark = fluid.create_random_int_lodtensor(
+        lod, base_shape, place, low=0, high=MARK_DICT_LEN - 1)
 
     results = inferencer.infer({
```
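The key difference between the two versions is the LoD format: the removed helper took offset-based LoD, where `[0, 4, 10]` marks sequence boundaries as cumulative offsets, while `fluid.create_random_int_lodtensor` takes length-based LoD, where `[[3, 4, 2]]` lists each sequence's length, nested one list per LoD level. A minimal pure-Python sketch of the correspondence (the helper names here are illustrative, not part of the Paddle API):

```python
def lengths_to_offsets(lengths):
    """Convert length-based LoD, e.g. [3, 4, 2], to offset-based, [0, 3, 7, 9]."""
    offsets = [0]
    for n in lengths:
        offsets.append(offsets[-1] + n)
    return offsets

def offsets_to_lengths(offsets):
    """Convert offset-based LoD, e.g. [0, 4, 10], to length-based, [4, 6]."""
    return [b - a for a, b in zip(offsets, offsets[1:])]

assert lengths_to_offsets([3, 4, 2]) == [0, 3, 7, 9]
assert offsets_to_lengths([0, 4, 10]) == [4, 6]

# Either form pins down the total number of basic elements the tensor holds:
# offsets[-1] == sum(lengths). That is why the old helper sized its data as
# [lod[-1], 1], and why base_shape = [1] with lod = [[3, 4, 2]] yields a
# [9, 1] tensor in the new API.
print(lengths_to_offsets([3, 4, 2])[-1], sum([3, 4, 2]))  # 9 9
```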
python/paddle/fluid/tests/book/test_label_semantic_roles.py

```diff
@@ -116,29 +116,6 @@ def db_lstm(word, predicate, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2, mark,
     return feature_out
 
 
-def to_lodtensor(data, place):
-    seq_lens = [len(seq) for seq in data]
-    cur_len = 0
-    lod = [cur_len]
-    for l in seq_lens:
-        cur_len += l
-        lod.append(cur_len)
-    flattened_data = np.concatenate(data, axis=0).astype("int64")
-    flattened_data = flattened_data.reshape([len(flattened_data), 1])
-    res = fluid.LoDTensor()
-    res.set(flattened_data, place)
-    res.set_lod([lod])
-    return res
-
-
-def create_random_lodtensor(lod, place, low, high):
-    data = np.random.random_integers(low, high,
-                                     [lod[-1], 1]).astype("int64")
-    res = fluid.LoDTensor()
-    res.set(data, place)
-    res.set_lod([lod])
-    return res
-
-
 def train(use_cuda, save_dirname=None, is_local=True):
     # define network topology
     word = fluid.layers.data(
```
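The deleted `to_lodtensor` helper flattened a list of variable-length integer sequences into one `[N, 1]` array and recorded the sequence boundaries as offset-based LoD. A self-contained sketch of that bookkeeping, using plain NumPy in place of `fluid.LoDTensor` so it runs without Paddle installed (the function name is illustrative):

```python
import numpy as np

def flatten_with_lod(data):
    """Mimic the removed to_lodtensor bookkeeping: flatten variable-length
    sequences and return (flattened [N, 1] int64 array, offset-based lod)."""
    lod = [0]
    for seq in data:
        lod.append(lod[-1] + len(seq))
    flat = np.concatenate([np.asarray(seq) for seq in data]).astype("int64")
    return flat.reshape([len(flat), 1]), lod

flat, lod = flatten_with_lod([[1, 2, 3], [4, 5, 6, 7], [8, 9]])
print(flat.ravel().tolist())  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(lod)                    # [0, 3, 7, 9] -- three sequences of length 3, 4, 2
```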
```diff
@@ -271,23 +248,35 @@ def infer(use_cuda, save_dirname=None):
         [inference_program, feed_target_names,
          fetch_targets] = fluid.io.load_inference_model(save_dirname, exe)
 
-        lod = [0, 4, 10]
-        word = create_random_lodtensor(
-            lod, place, low=0, high=word_dict_len - 1)
-        pred = create_random_lodtensor(
-            lod, place, low=0, high=pred_dict_len - 1)
-        ctx_n2 = create_random_lodtensor(
-            lod, place, low=0, high=word_dict_len - 1)
-        ctx_n1 = create_random_lodtensor(
-            lod, place, low=0, high=word_dict_len - 1)
-        ctx_0 = create_random_lodtensor(
-            lod, place, low=0, high=word_dict_len - 1)
-        ctx_p1 = create_random_lodtensor(
-            lod, place, low=0, high=word_dict_len - 1)
-        ctx_p2 = create_random_lodtensor(
-            lod, place, low=0, high=word_dict_len - 1)
-        mark = create_random_lodtensor(
-            lod, place, low=0, high=mark_dict_len - 1)
+        # Setup inputs by creating LoDTensors to represent sequences of words.
+        # Here each word is the basic element of these LoDTensors and the shape
+        # of each word (base_shape) should be [1] since it is simply an index
+        # to look up for the corresponding word vector.
+        # Suppose the length_based level of detail (lod) info is set to
+        # [[3, 4, 2]], which has only one lod level. Then the created
+        # LoDTensors will have only one higher level structure (sequence of
+        # words, or sentence) than the basic element (word). Hence the
+        # LoDTensor will hold data for three sentences of length 3, 4 and 2,
+        # respectively.
+        # Note that lod info should be a list of lists.
+        lod = [[3, 4, 2]]
+        base_shape = [1]
+        # The range of random integers is [low, high]
+        word = fluid.create_random_int_lodtensor(
+            lod, base_shape, place, low=0, high=word_dict_len - 1)
+        pred = fluid.create_random_int_lodtensor(
+            lod, base_shape, place, low=0, high=pred_dict_len - 1)
+        ctx_n2 = fluid.create_random_int_lodtensor(
+            lod, base_shape, place, low=0, high=word_dict_len - 1)
+        ctx_n1 = fluid.create_random_int_lodtensor(
+            lod, base_shape, place, low=0, high=word_dict_len - 1)
+        ctx_0 = fluid.create_random_int_lodtensor(
+            lod, base_shape, place, low=0, high=word_dict_len - 1)
+        ctx_p1 = fluid.create_random_int_lodtensor(
+            lod, base_shape, place, low=0, high=word_dict_len - 1)
+        ctx_p2 = fluid.create_random_int_lodtensor(
+            lod, base_shape, place, low=0, high=word_dict_len - 1)
+        mark = fluid.create_random_int_lodtensor(
+            lod, base_shape, place, low=0, high=mark_dict_len - 1)
 
         # Construct feed as a dictionary of {feed_target_name: feed_target_data}
         # and results will contain a list of data corresponding to fetch_targets.
```
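Taken together, the change replaces a hand-rolled helper in each book example with the shared `fluid.create_random_int_lodtensor` API, which accepts length-based LoD directly. A minimal standalone sketch of how the new call is used, assuming a 2018-era PaddlePaddle Fluid install; the vocabulary size is a made-up placeholder:

```python
import paddle.fluid as fluid

WORD_DICT_LEN = 1000    # placeholder vocabulary size, for illustration only
place = fluid.CPUPlace()

# Length-based LoD: one level, three sentences of 3, 4 and 2 words.
lod = [[3, 4, 2]]
base_shape = [1]        # each basic element is a single int64 word index

word = fluid.create_random_int_lodtensor(
    lod, base_shape, place, low=0, high=WORD_DICT_LEN - 1)
# word now holds a [9, 1] int64 tensor (9 = 3 + 4 + 2) plus the LoD metadata,
# ready to be passed in a feed dict alongside the other inputs.
```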