PaddlePaddle / ERNIE
Commit 3aeb517a
Fixes for Python 3
Authored on Mar 19, 2019 by cclauss
Parent: b255e121
7 changed files with 17 additions and 9 deletions (+17 −9):

- BERT/convert_params.py (+1 −1)
- BERT/model/bert.py (+3 −3)
- BERT/run_squad.py (+4 −2)
- ERNIE/batching.py (+2 −0)
- ERNIE/finetune/classifier.py (+2 −0)
- ERNIE/finetune/sequence_label.py (+2 −0)
- ERNIE/model/ernie.py (+3 −3)
BERT/convert_params.py

@@ -137,7 +137,7 @@ def parse(init_checkpoint):
         else:
             print("ignored param: %s" % var_name)
-        if fluid_param_name is not '':
+        if fluid_param_name != '':
             tf_fluid_param_name_map[var_name] = fluid_param_name
             tf_param_name_shape_map[var_name] = var_shape
             fluid_param_name = ''
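The corrected line swaps identity (`is not`) for equality (`!=`). `x is not ''` compares object identity and only appears to work because CPython happens to cache the empty string; CPython 3.8+ emits a SyntaxWarning for `is` with a literal. A minimal sketch of the corrected check (the `fluid_param_name` name comes from the diff; the wrapper function is hypothetical):

```python
def keep_param(fluid_param_name):
    # Value comparison (the fix in this commit): true for any non-empty name,
    # regardless of how the string object was constructed.
    return fluid_param_name != ''

print(keep_param("bert/embeddings/word_embeddings"))  # True
print(keep_param(""))                                 # False
print(keep_param("".join([])))                        # False: runtime-built empty string
```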
BERT/model/bert.py

@@ -73,7 +73,7 @@ class BertModel(object):
         self._sent_emb_name = "sent_embedding"
         self._dtype = "float16" if use_fp16 else "float32"

-        # Initialize all weigths by truncated normal initializer, and all biases
+        # Initialize all weigths by truncated normal initializer, and all biases
         # will be initialized by constant zero by default.
         self._param_initializer = fluid.initializer.TruncatedNormal(
             scale=config['initializer_range'])

@@ -109,7 +109,7 @@ class BertModel(object):
         emb_out = pre_process_layer(
             emb_out, 'nd', self._prepostprocess_dropout, name='pre_encoder')

-        if self._dtype is "float16":
+        if self._dtype == "float16":
             self_attn_mask = fluid.layers.cast(
                 x=self_attn_mask, dtype=self._dtype)

@@ -175,7 +175,7 @@ class BertModel(object):
                     name='mask_lm_trans_fc.w_0',
                     initializer=self._param_initializer),
                 bias_attr=fluid.ParamAttr(name='mask_lm_trans_fc.b_0'))
-        # transform: layer norm
+        # transform: layer norm
         mask_trans_feat = pre_process_layer(
             mask_trans_feat, 'n', name='mask_lm_trans')
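The dtype hunk replaces `is` with `==` for the same reason as the convert_params.py fix: string identity is an interning accident, not a value check. A string that equals `"float16"` but was built at runtime need not be the same object as the literal, so the old test could silently take the wrong branch. An illustrative sketch (`dtype` here is a plain variable, not the model attribute):

```python
dtype = "float" + str(16)    # value "float16", constructed at runtime
literal = "float16"

# Equality compares characters -> reliable; this is what the code uses now.
print(dtype == literal)      # True

# Identity compares object addresses. A runtime-built string is usually a
# distinct object from the interned literal, so this is typically False in
# CPython -- which is exactly how the old `is "float16"` check could fail.
print(dtype is literal)
```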
BERT/run_squad.py

@@ -17,11 +17,13 @@ from __future__ import absolute_import
 from __future__ import division
 from __future__ import print_function

+import argparse
+import collections
+import multiprocessing
 import os
 import time
-import argparse
 import numpy as np
-import collections
 import paddle
 import paddle.fluid as fluid
ERNIE/batching.py

@@ -19,6 +19,8 @@ from __future__ import print_function
 import numpy as np

+from six.moves import xrange
+

 def mask(batch_tokens,
          seg_labels,
ERNIE/finetune/classifier.py

@@ -22,6 +22,8 @@ import numpy as np
 import paddle.fluid as fluid

+from six.moves import xrange
+
 from model.ernie import ErnieModel
ERNIE/finetune/sequence_label.py

@@ -25,6 +25,8 @@ import multiprocessing
 import paddle
 import paddle.fluid as fluid

+from six.moves import xrange
+
 from model.ernie import ErnieModel


 def create_model(args,
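The three `from six.moves import xrange` additions make the name work on both interpreters: it resolves to the lazy built-in `xrange` on Python 2 and to `range` on Python 3, so loops keep constant-memory behavior on both. A dependency-free sketch of the same shim, in case `six` is unavailable (this is an alternative pattern, not what the commit uses):

```python
try:
    xrange                    # defined on Python 2
except NameError:
    xrange = range            # Python 3: range is already lazy

# Lazy iteration: no full list is materialized, even for large ranges.
total = sum(i for i in xrange(5))
print(total)  # 10
```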
ERNIE/model/ernie.py

@@ -73,7 +73,7 @@ class ErnieModel(object):
         self._sent_emb_name = "sent_embedding"
         self._dtype = "float16" if use_fp16 else "float32"

-        # Initialize all weigths by truncated normal initializer, and all biases
+        # Initialize all weigths by truncated normal initializer, and all biases
         # will be initialized by constant zero by default.
         self._param_initializer = fluid.initializer.TruncatedNormal(
             scale=config['initializer_range'])

@@ -109,7 +109,7 @@ class ErnieModel(object):
         emb_out = pre_process_layer(
             emb_out, 'nd', self._prepostprocess_dropout, name='pre_encoder')

-        if self._dtype is "float16":
+        if self._dtype == "float16":
             self_attn_mask = fluid.layers.cast(
                 x=self_attn_mask, dtype=self._dtype)

@@ -175,7 +175,7 @@ class ErnieModel(object):
                     name='mask_lm_trans_fc.w_0',
                     initializer=self._param_initializer),
                 bias_attr=fluid.ParamAttr(name='mask_lm_trans_fc.b_0'))
-        # transform: layer norm
+        # transform: layer norm
         mask_trans_feat = pre_process_layer(
             mask_trans_feat, 'n', name='mask_lm_trans')