PaddlePaddle / PaddleRec

Commit 641e55e8 (unverified)
Authored Sep 04, 2020 by tangwei12; committed via GitHub on Sep 04, 2020
Parents: 1726a417, 770693ab

Merge branch 'master' into fix_collective_files_partition
Showing 4 changed files with 34 additions and 16 deletions (+34 -16):

  core/trainers/framework/runner.py                       +22  -3
  models/contentunderstanding/classification/readme.md     +6  -6
  models/multitask/mmoe/config.yaml                         +4  -5
  models/rank/fibinet/config.yaml                           +2  -2
core/trainers/framework/runner.py

@@ -18,11 +18,15 @@ import os
 import time
 import warnings
 import numpy as np
+import logging
 import paddle.fluid as fluid
 
 from paddlerec.core.utils import envs
 from paddlerec.core.metric import Metric
 
+logging.basicConfig(
+    format='%(asctime)s - %(levelname)s: %(message)s', level=logging.INFO)
+
 __all__ = [
     "RunnerBase", "SingleRunner", "PSRunner", "CollectiveRunner", "PslibRunner"
 ]
@@ -140,8 +144,16 @@ class RunnerBase(object):
         metrics_varnames = []
         metrics_format = []
+
+        if context["is_infer"]:
+            metrics_format.append("\t[Infer]\t{}: {{}}".format("batch"))
+        else:
+            metrics_format.append("\t[Train]\t{}: {{}}".format("batch"))
+
+        metrics_format.append("{}: {{:.2f}}s".format("time_each_interval"))
+
         metrics_names = ["total_batch"]
-        metrics_format.append("{}: {{}}".format("batch"))
         for name, var in metrics.items():
             metrics_names.append(name)
             metrics_varnames.append(var.name)
@@ -151,6 +163,7 @@ class RunnerBase(object):
         reader = context["model"][model_dict["name"]]["model"]._data_loader
         reader.start()
         batch_id = 0
+        begin_time = time.time()
         scope = context["model"][model_name]["scope"]
         result = None
         with fluid.scope_guard(scope):
@@ -160,8 +173,8 @@ class RunnerBase(object):
                         program=program,
                         fetch_list=metrics_varnames,
                         return_numpy=False)
                     metrics = [batch_id]
                     metrics_rets = [
                         as_numpy(metrics_tensor)
                         for metrics_tensor in metrics_tensors
@@ -169,7 +182,13 @@ class RunnerBase(object):
                     metrics.extend(metrics_rets)
 
                     if batch_id % fetch_period == 0 and batch_id != 0:
-                        print(metrics_format.format(*metrics))
+                        end_time = time.time()
+                        seconds = end_time - begin_time
+                        metrics_logging = metrics[:]
+                        metrics_logging = metrics.insert(1, seconds)
+                        begin_time = end_time
+                        logging.info(metrics_format.format(*metrics))
                     batch_id += 1
             except fluid.core.EOFException:
                 reader.reset()
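The interval timing this merge brings into RunnerBase can be exercised on its own. Below is a minimal, self-contained sketch in plain Python (no PaddlePaddle needed): fetch_period stands in for the runner's print_interval, and the "auc" metric and its value are invented placeholders, not values from the repository. It mirrors how the new code inserts the elapsed seconds at position 1 of the fetched metrics and emits the line through logging instead of print.

import logging
import time

# Same log format as the one added to runner.py by this merge.
logging.basicConfig(
    format='%(asctime)s - %(levelname)s: %(message)s', level=logging.INFO)

fetch_period = 2        # stand-in for runner.print_interval
metrics_format = ", ".join([
    "\t[Train]\t{}: {{}}".format("batch"),
    "{}: {{:.2f}}s".format("time_each_interval"),
    "{}: {{}}".format("auc"),           # "auc" is a made-up metric name
])

begin_time = time.time()
for batch_id in range(1, 7):
    time.sleep(0.01)                    # stand-in for one training step
    metrics = [batch_id, 0.5]           # [total_batch, auc] fetched this step
    if batch_id % fetch_period == 0 and batch_id != 0:
        end_time = time.time()
        seconds = end_time - begin_time
        # insert() mutates the list in place, so the formatted message sees
        # [total_batch, seconds, auc], the same effect as in the diff above.
        metrics.insert(1, seconds)
        begin_time = end_time
        logging.info(metrics_format.format(*metrics))

Running this prints one timestamped log line every fetch_period batches, each carrying the batch counter, the seconds elapsed since the previous report, and the metric values.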
models/contentunderstanding/classification/readme.md

@@ -44,7 +44,7 @@ Yoon Kim在论文[EMNLP 2014][Convolutional neural networks for sentence classific
 | 模型 | dev | test |
 | :------| :------ | :------ |
-| TextCNN | 90.75% | 92.19% |
+| TextCNN | 90.75% | 91.27% |
 
 您可以直接执行以下命令下载我们分词完毕后的数据集,文件解压之后,senta_data目录下会存在训练数据(train.tsv)、开发集数据(dev.tsv)、测试集数据(test.tsv)以及对应的词典(word_dict.txt):
models/multitask/mmoe/config.yaml

@@ -17,12 +17,12 @@ workspace: "models/multitask/mmoe"
 dataset:
 - name: dataset_train
   batch_size: 5
-  type: QueueDataset
+  type: DataLoader # or QueueDataset
   data_path: "{workspace}/data/train"
   data_converter: "{workspace}/census_reader.py"
 - name: dataset_infer
   batch_size: 5
-  type: QueueDataset
+  type: DataLoader # or QueueDataset
   data_path: "{workspace}/data/train"
   data_converter: "{workspace}/census_reader.py"
@@ -37,7 +37,6 @@ hyper_parameters:
     learning_rate: 0.001
     strategy: async
 
-#use infer_runner mode and modify 'phase' below if infer
 mode: [train_runner, infer_runner]
 
 runner:
@@ -49,10 +48,10 @@ runner:
   save_inference_interval: 4
   save_checkpoint_path: "increment"
   save_inference_path: "inference"
-  print_interval: 10
+  print_interval: 1
 - name: infer_runner
   class: infer
-  init_model_path: "increment/0"
+  init_model_path: "increment/1"
   device: cpu
 
 phase:
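To confirm locally that both MMoE datasets now run through DataLoader, a small sketch follows. It assumes PyYAML is installed and that the script is run from a checkout of the repository; it is not part of PaddleRec's own tooling.

# Hypothetical check: print the dataset types from the updated MMoE config.
import yaml

with open("models/multitask/mmoe/config.yaml") as f:
    conf = yaml.safe_load(f)

for ds in conf["dataset"]:
    # After this merge, both entries should report DataLoader.
    print(ds["name"], "->", ds["type"])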
models/rank/fibinet/config.yaml

@@ -102,9 +102,9 @@ phase:
 - name: phase1
   model: "{workspace}/model.py" # user-defined model
   dataset_name: dataloader_train # select dataset by name
-  thread_num: 8
+  thread_num: 1
 - name: phase2
   model: "{workspace}/model.py" # user-defined model
   dataset_name: dataset_infer # select dataset by name
-  thread_num: 8
+  thread_num: 1