PaddlePaddle / PaddleHub
Commit b29a0937
Authored Sep 19, 2019 by zhangxuefei
Parent: c40b9011

    fix bug that use gpu in turn

Showing 2 changed files with 8 additions and 13 deletions (+8, -13):

- demo/text-classification/predict.py (+1, -7)
- paddlehub/autofinetune/autoft.py (+7, -6)
demo/text-classification/predict.py

```diff
@@ -148,13 +148,7 @@ if __name__ == '__main__':
     ]
     if args.use_taskid:
-        feed_list = [
-            inputs["input_ids"].name,
-            inputs["position_ids"].name,
-            inputs["segment_ids"].name,
-            inputs["input_mask"].name,
-            inputs["task_ids"].name,
-        ]
+        feed_list.append(inputs["task_ids"].name)

    # Setup runing config for PaddleHub Finetune API
    config = hub.RunConfig(
```
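The predict.py change is the smaller of the two fixes: instead of rebuilding the whole feed list when `--use_taskid` is set, the script now appends the `task_ids` feed to the list it already built, so the base feeds cannot be dropped or duplicated. A minimal sketch of the pattern; the `_Var` class and `build_feed_list` helper are illustrative stand-ins, not names from the script:

```python
# Stand-in for a program variable exposing a .name attribute,
# as the PaddleHub input variables do.
class _Var:
    def __init__(self, name):
        self.name = name

inputs = {k: _Var(k) for k in
          ["input_ids", "position_ids", "segment_ids", "input_mask", "task_ids"]}

def build_feed_list(inputs, use_taskid):
    # The base feeds are assembled once, unconditionally.
    feed_list = [
        inputs["input_ids"].name,
        inputs["position_ids"].name,
        inputs["segment_ids"].name,
        inputs["input_mask"].name,
    ]
    # The fix: append the optional feed rather than rebuilding the
    # whole list in the branch.
    if use_taskid:
        feed_list.append(inputs["task_ids"].name)
    return feed_list
```

Without the flag the list holds the four base feeds; with it, `task_ids` becomes the fifth entry.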
paddlehub/autofinetune/autoft.py

```diff
@@ -166,6 +166,7 @@ class BaseTuningStrategy(object):
         cnt = 0
         solutions_ckptdirs = {}
         mkdir(output_dir)
         for idx, solution in enumerate(solutions):
             cuda = self.is_cuda_free["free"][0]
             ckptdir = output_dir + "/ckpt-" + str(idx)
@@ -174,8 +175,8 @@ class BaseTuningStrategy(object):
             solutions_ckptdirs[tuple(solution)] = ckptdir
             self.is_cuda_free["free"].remove(cuda)
             self.is_cuda_free["busy"].append(cuda)
-            if len(params_cudas_dirs) == self.thread or cnt == int(
-                    self.popsize / self.thread):
+            if len(params_cudas_dirs
+                   ) == self.thread or idx == len(solutions) - 1:
                 tp = ThreadPool(len(params_cudas_dirs))
                 solution_results += tp.map(self.evaluator.run,
                                            params_cudas_dirs)
@@ -245,11 +246,11 @@ class HAZero(BaseTuningStrategy):
         best_hparams = self.evaluator.convert_params(self.best_hparams_all_pop)
         for index, name in enumerate(self.hparams_name_list):
             self.writer.add_scalar(
-                tag="hyperparameter tuning/" + name,
+                tag="hyperparameter_tuning/" + name,
                 scalar_value=best_hparams[index],
                 global_step=self.round)
         self.writer.add_scalar(
-            tag="hyperparameter tuning/best_eval_value",
+            tag="hyperparameter_tuning/best_eval_value",
             scalar_value=self.get_best_eval_value(),
             global_step=self.round)
@@ -368,11 +369,11 @@ class PSHE2(BaseTuningStrategy):
         best_hparams = self.evaluator.convert_params(self.best_hparams_all_pop)
         for index, name in enumerate(self.hparams_name_list):
             self.writer.add_scalar(
-                tag="hyperparameter tuning/" + name,
+                tag="hyperparameter_tuning/" + name,
                 scalar_value=best_hparams[index],
                 global_step=self.round)
         self.writer.add_scalar(
-            tag="hyperparameter tuning/best_eval_value",
+            tag="hyperparameter_tuning/best_eval_value",
             scalar_value=self.get_best_eval_value(),
             global_step=self.round)
```
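The condition change in BaseTuningStrategy is the actual "use gpu in turn" fix. The old trigger, `cnt == int(self.popsize / self.thread)`, fires after a fixed count of flushes, so it can fire one batch early and then never again, leaving the trailing partial batch of solutions undispatched (and the GPUs they reserved stuck in the busy list). Flushing on `idx == len(solutions) - 1` guarantees the final batch always reaches the thread pool. A toy simulation of the two flush rules; `dispatched` is a hypothetical helper with no threads or GPUs involved, and it uses the solution count where the real code uses `self.popsize`:

```python
def dispatched(n_solutions, n_threads, last_idx_flush):
    """Count how many solutions reach the worker pool under each flush rule.

    `done` stands in for the inputs handed to ThreadPool.map; no real
    threads are created here.
    """
    done, batch, cnt = [], [], 0
    for idx in range(n_solutions):
        batch.append(idx)
        full = len(batch) == n_threads
        if last_idx_flush:
            # new rule: also flush when we reach the last solution
            flush = full or idx == n_solutions - 1
        else:
            # old rule: also flush when cnt hits n_solutions // n_threads
            flush = full or cnt == n_solutions // n_threads
        if flush:
            done.extend(batch)
            batch = []
            cnt += 1
    return len(done)
```

With 6 solutions on 4 threads, the old rule flushes its "extra" batch too early and strands the last solution, dispatching only 5 of 6; the new rule dispatches all of them.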