PaddlePaddle / ERNIE
Commit 17c3661e: paddle 1.6 compat

Authored by chenxuyi on Oct 28, 2019; committed by Meiyim on Jan 16, 2020.
Parent commit: da04e0b4

Showing 5 changed files with 8 additions and 8 deletions (+8, -8):
ernie/batching.py                  +1 -1
ernie/ernie_encoder.py             +1 -1
ernie/finetune/sequence_label.py   +1 -1
ernie/model/ernie.py               +3 -3
ernie/model/ernie_v1.py            +2 -2
ernie/batching.py

```diff
@@ -208,7 +208,7 @@ def pad_batch_data(insts,
     if return_seq_lens:
         seq_lens = np.array([len(inst) for inst in insts])
-        return_list += [seq_lens.astype("int64").reshape([-1, 1])]
+        return_list += [seq_lens.astype("int64").reshape([-1])]

     return return_list if len(return_list) > 1 else return_list[0]
```
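Not part of the commit itself, but for orientation: together with the py_reader shape changes below, this reshape suggests Paddle 1.6 consumes sequence lengths as 1-D tensors rather than N x 1 columns. A minimal NumPy sketch of the before/after layout, using a made-up toy batch:

```python
import numpy as np

# Toy batch: two token-id sequences of different lengths (ids are arbitrary).
insts = [[101, 7, 8, 102], [101, 9, 102]]

seq_lens = np.array([len(inst) for inst in insts])

old_layout = seq_lens.astype("int64").reshape([-1, 1])  # pre-commit layout: shape (2, 1)
new_layout = seq_lens.astype("int64").reshape([-1])     # post-commit layout: shape (2,)

print(old_layout.shape, new_layout.shape)  # (2, 1) (2,)
```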
ernie/ernie_encoder.py

```diff
@@ -56,7 +56,7 @@ def create_model(args, pyreader_name, ernie_config):
         capacity=50,
         shapes=[[-1, args.max_seq_len, 1], [-1, args.max_seq_len, 1],
                 [-1, args.max_seq_len, 1], [-1, args.max_seq_len, 1],
-                [-1, args.max_seq_len, 1], [-1, 1]],
+                [-1, args.max_seq_len, 1], [-1]],
         dtypes=['int64', 'int64', 'int64', 'int64', 'float', 'int64'],
         lod_levels=[0, 0, 0, 0, 0, 0],
         name=pyreader_name,
```
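For comparison, a plain-Python sketch of the shapes argument before and after the change; max_seq_len here is a made-up stand-in for args.max_seq_len, not a value from the repo:

```python
# Hypothetical stand-in for args.max_seq_len, only to make the lists concrete.
max_seq_len = 128

# Before: the last reader slot (seq_lens) was declared as a column vector.
shapes_old = [[-1, max_seq_len, 1]] * 5 + [[-1, 1]]

# After: seq_lens is declared as a 1-D tensor, consistent with the
# reshape([-1]) change in ernie/batching.py above.
shapes_new = [[-1, max_seq_len, 1]] * 5 + [[-1]]

print(shapes_old[-1], shapes_new[-1])  # [-1, 1] [-1]
```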
ernie/finetune/sequence_label.py

```diff
@@ -40,7 +40,7 @@ def create_model(args, pyreader_name, ernie_config, is_prediction=False):
         capacity=50,
         shapes=[[-1, args.max_seq_len, 1], [-1, args.max_seq_len, 1],
                 [-1, args.max_seq_len, 1], [-1, args.max_seq_len, 1],
-                [-1, args.max_seq_len, 1], [-1, args.max_seq_len, 1], [-1, 1]],
+                [-1, args.max_seq_len, 1], [-1, args.max_seq_len, 1], [-1]],
         dtypes=['int64', 'int64', 'int64', 'int64', 'float32', 'int64', 'int64'],
```
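A rough NumPy illustration of how a 1-D seq_lens tensor is consumed downstream when unpadding sequence-labeling outputs; the values are invented, and the real fine-tuning code uses Paddle ops rather than this helper:

```python
import numpy as np

# Padded label batch (batch_size=2, max_seq_len=5) plus the 1-D lengths
# that the reader above now declares with shape [-1]. Values are made up.
padded_labels = np.array([[3, 1, 4, 0, 0],
                          [2, 2, 0, 0, 0]], dtype="int64")
seq_lens = np.array([3, 2], dtype="int64")

# Strip the padding positions, the way a sequence_unpad-style op would.
unpadded = [row[:n] for row, n in zip(padded_labels, seq_lens)]
print([u.tolist() for u in unpadded])  # [[3, 1, 4], [2, 2]]
```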
ernie/model/ernie.py

```diff
@@ -86,7 +86,7 @@ class ErnieModel(object):
         self._sent_emb_name = "sent_embedding"
         self._task_emb_name = "task_embedding"
         self._dtype = "float16" if use_fp16 else "float32"
-        self._emb_dtype = "float32"
+        self._emb_dtype = 'float32'

         # Initialize all weigths by truncated normal initializer, and all biases
         # will be initialized by constant zero by default.
@@ -138,7 +138,7 @@ class ErnieModel(object):
         emb_out = pre_process_layer(
             emb_out, 'nd', self._prepostprocess_dropout, name='pre_encoder')
-        if self._dtype == "float16":
+        if self._dtype == 'float16':
             emb_out = fluid.layers.cast(x=emb_out, dtype=self._dtype)
             input_mask = fluid.layers.cast(x=input_mask, dtype=self._dtype)

         self_attn_mask = fluid.layers.matmul(
@@ -167,7 +167,7 @@ class ErnieModel(object):
             postprocess_cmd="dan",
             param_initializer=self._param_initializer,
             name='encoder')

-        if self._dtype == "float16":
+        if self._dtype == 'float16':
             self._enc_out = fluid.layers.cast(
                 x=self._enc_out, dtype=self._emb_dtype)
```
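The touched lines sit in ERNIE's mixed-precision path: embeddings are kept in _emb_dtype (float32), cast to _dtype (float16) before the encoder when use_fp16 is set, and the encoder output is cast back to float32, as the surrounding context shows. A NumPy sketch of that round trip, with placeholder shapes and an identity stand-in for the encoder:

```python
import numpy as np

use_fp16 = True
_dtype = "float16" if use_fp16 else "float32"  # compute dtype inside the encoder
_emb_dtype = "float32"                         # embeddings and outputs stay float32

# Placeholder embedding output: [batch, seq_len, hidden] with made-up sizes.
emb_out = np.random.rand(2, 8, 4).astype(_emb_dtype)

if _dtype == "float16":
    emb_out = emb_out.astype(_dtype)      # mirrors fluid.layers.cast(x=emb_out, ...)

enc_out = emb_out                         # the transformer stack is omitted in this sketch

if _dtype == "float16":
    enc_out = enc_out.astype(_emb_dtype)  # cast back to float32, as in the last hunk above

print(emb_out.dtype, enc_out.dtype)       # float16 float32
```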
ernie/model/ernie_v1.py

```diff
@@ -76,7 +76,7 @@ class ErnieModel(object):
         self._word_emb_name = "word_embedding"
         self._pos_emb_name = "pos_embedding"
         self._sent_emb_name = "sent_embedding"
-        self._dtype = "float16" if use_fp16 else "float32"
+        self._dtype = 'float16' if use_fp16 else 'float32'

         # Initialize all weigths by truncated normal initializer, and all biases
         # will be initialized by constant zero by default.
@@ -114,7 +114,7 @@ class ErnieModel(object):
         emb_out = pre_process_layer(
             emb_out, 'nd', self._prepostprocess_dropout, name='pre_encoder')
-        if self._dtype == "float16":
+        if self._dtype == 'float16':
             input_mask = fluid.layers.cast(x=input_mask, dtype=self._dtype)

         self_attn_mask = fluid.layers.matmul(
             x=input_mask, y=input_mask, transpose_y=True)
```
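The context around the changed line also shows how the attention mask is formed: input_mask (1 for real tokens, 0 for padding) is multiplied by its own transpose to give a [batch, seq_len, seq_len] mask. A NumPy analogue with a toy mask, purely for orientation:

```python
import numpy as np

# Toy padding mask for one sequence of length 4 with one padded position,
# shaped [batch, seq_len, 1] like the input_mask fed to ErnieModel.
input_mask = np.array([[[1.0], [1.0], [1.0], [0.0]]], dtype="float32")

# NumPy analogue of fluid.layers.matmul(x=input_mask, y=input_mask, transpose_y=True).
self_attn_mask = np.matmul(input_mask, input_mask.transpose(0, 2, 1))

print(self_attn_mask.shape)  # (1, 4, 4)
print(self_attn_mask[0, 0])  # [1. 1. 1. 0.] -> pairs involving the padded token get 0
```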