PaddlePaddle / PaddleClas

Commit 9e8dd9f6 ("fix bugs")
Authored on July 20, 2021 by dongshuilong
Parent: 7138b30f

4 changed files, with 14 additions and 12 deletions:

deploy/python/predict_cls.py  (+10, -7)
deploy/test.sh  (+0, -1)
deploy/utils/predictor.py  (+2, -2)
test/test.sh  (+2, -2)
deploy/python/predict_cls.py

```diff
@@ -48,18 +48,21 @@ class ClsPredictor(Predictor):
             import os
             pid = os.getpid()
             self.auto_log = auto_log.AutoLogger(
-                model_name='cls',
+                model_name=config["Global"].get("model_name", "cls"),
                 model_precision='fp16' if config["Global"]["use_fp16"] else 'fp32',
-                batch_size=1,
+                batch_size=config["Global"].get("batch_size", 1),
                 data_shape=[3, 224, 224],
-                save_path="../output/auto_log.lpg",
-                inference_config=None,
+                save_path=config["Global"].get("save_log_path",
+                                               "./auto_log.log"),
+                inference_config=self.config,
                 pids=pid,
                 process_name=None,
                 gpu_ids=None,
-                time_keys=['preprocess_time', 'inference_time'],
-                warmup=10)
+                time_keys=[
+                    'preprocess_time', 'inference_time', 'postprocess_time'
+                ],
+                warmup=2)

     def predict(self, images):
         input_names = self.paddle_predictor.get_input_names()
@@ -99,8 +102,8 @@ def main(config):
         output = cls_predictor.postprocess(output, [image_file])
         if cls_predictor.benchmark:
             cls_predictor.auto_log.times.end(stamp=True)
-            cls_predictor.auto_log.report()
         print(output)
+    cls_predictor.auto_log.report()
     return
```
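The predict_cls.py change replaces hard-coded AutoLogger arguments with values read from the `Global` section of the config, using `dict.get()` defaults so that older configs without the new keys keep working. A minimal standalone sketch of that pattern (the config dict here is illustrative, not a real PaddleClas config):

```python
# Illustrative config fragment; a real PaddleClas Global section has more keys.
config = {"Global": {"use_fp16": False, "batch_size": 4}}

# dict.get() falls back to the old hard-coded values when a key is absent,
# so existing configs keep working while new ones can override.
model_name = config["Global"].get("model_name", "cls")  # absent -> "cls"
batch_size = config["Global"].get("batch_size", 1)      # present -> 4
save_path = config["Global"].get("save_log_path", "./auto_log.log")
precision = "fp16" if config["Global"]["use_fp16"] else "fp32"  # -> "fp32"

print(model_name, batch_size, save_path, precision)
```

The same lookup-with-fallback is used for every argument the commit makes configurable.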
deploy/test.sh (deleted, mode 100644 → 0)

```diff
-python3.7 python/predict_cls.py -c configs/inference_cls.yaml -o Global.use_gpu=True -o Global.use_tensorrt=False -o Global.use_fp16=False -o Global.inference_model_dir=.././test/output/ResNet50_vd_gpus_0,1/inference -o Global.batch_size=1 -o Global.infer_imgs=.././dataset/chain_dataset/val -o Global.save_log_path=.././test/output/ResNet50_vd_infer_gpu_usetrt_True_precision_False_batchsize_1.log -o benchmark=True
```
deploy/utils/predictor.py

```diff
@@ -28,7 +28,7 @@ class Predictor(object):
         if args.use_fp16 is True:
             assert args.use_tensorrt is True
         self.args = args
-        self.paddle_predictor = self.create_paddle_predictor(
+        self.paddle_predictor, self.config = self.create_paddle_predictor(
             args, inference_model_dir)

     def predict(self, image):
@@ -66,4 +66,4 @@ class Predictor(object):
         config.switch_use_feed_fetch_ops(False)
         predictor = create_predictor(config)
-        return predictor
+        return predictor, config
```
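The predictor.py hunks change `create_paddle_predictor` to return the inference config alongside the predictor, so the caller can keep it (as `self.config`) and later hand it to AutoLogger via `inference_config`. A toy sketch of the new return contract, using stand-in objects rather than the real Paddle inference API:

```python
def create_paddle_predictor(args, inference_model_dir):
    # Stand-ins for paddle.inference Config / create_predictor.
    config = {"model_dir": inference_model_dir, "feed_fetch_ops": False}
    predictor = {"from_config": config}
    # Returning the config alongside the predictor is the whole change:
    # callers now unpack a 2-tuple instead of a single object.
    return predictor, config

predictor, config = create_paddle_predictor(None, "./inference")
print(config["model_dir"])  # ./inference
```

Any call site that still expects a single return value would break after this change, which is why the constructor's assignment is updated in the same commit.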
test/test.sh

```diff
@@ -92,7 +92,7 @@ function func_inference(){
             for threads in ${cpu_threads_list[*]}; do
                 for batch_size in ${batch_size_list[*]}; do
                     _save_log_path="${_log_path}/${_model_name}_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_batchsize_${batch_size}.log"
-                    command="${_python} ${_script} -o ${use_gpu_key}=${use_gpu} -o ${use_mkldnn_key}=${use_mkldnn} -o ${cpu_threads_key}=${threads} -o ${infer_model_key}=${_model_dir} -o ${batch_size_key}=${batch_size} -o ${image_dir_key}=${_img_dir} -o ${save_log_key}=${_save_log_path} -o benchmark=True"
+                    command="${_python} ${_script} -o ${use_gpu_key}=${use_gpu} -o ${use_mkldnn_key}=${use_mkldnn} -o ${cpu_threads_key}=${threads} -o ${infer_model_key}=${_model_dir} -o ${batch_size_key}=${batch_size} -o ${image_dir_key}=${_img_dir} -o ${save_log_key}=${_save_log_path} -o benchmark=True -o Global.model_name=${_model_name}"
                     eval $command
                     status_check $? "${command}" "${status_log}"
                 done
@@ -106,7 +106,7 @@ function func_inference(){
                 fi
                 for batch_size in ${batch_size_list[*]}; do
                     _save_log_path="${_log_path}/${_model_name}_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_${batch_size}.log"
-                    command="${_python} ${_script} -o ${use_gpu_key}=${use_gpu} -o ${use_trt_key}=${use_trt} -o ${precision_key}=${precision} -o ${infer_model_key}=${_model_dir} -o ${batch_size_key}=${batch_size} -o ${image_dir_key}=${_img_dir} -o ${save_log_key}=${_save_log_path} -o benchmark=True"
+                    command="${_python} ${_script} -o ${use_gpu_key}=${use_gpu} -o ${use_trt_key}=${use_trt} -o ${precision_key}=${precision} -o ${infer_model_key}=${_model_dir} -o ${batch_size_key}=${batch_size} -o ${image_dir_key}=${_img_dir} -o ${save_log_key}=${_save_log_path} -o benchmark=True -o Global.model_name=${_model_name}"
                     eval $command
                     status_check $? "${command}" "${status_log}"
                 done
```
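Both test.sh hunks make the same fix: the model name already embedded in `_save_log_path` is now also forwarded to the script as `-o Global.model_name=${_model_name}`, so the name AutoLogger reports cannot drift from the log filename. A small Python sketch of how deriving both from one variable keeps them in sync (the path and values are illustrative):

```python
model_name = "ResNet50_vd"  # illustrative; plays the role of ${_model_name}

# The log filename embeds the model name...
save_log_path = (
    f"./output/{model_name}_infer_cpu_usemkldnn_True_threads_1_batchsize_1.log"
)

# ...and the appended option passes the same variable to the script,
# so the reported name always matches the log file it lands in.
command = (
    "python3.7 python/predict_cls.py"
    f" -o Global.save_log_path={save_log_path}"
    " -o benchmark=True"
    f" -o Global.model_name={model_name}"
)
print(command)
```

This pairs with the predict_cls.py change above: `Global.model_name` is exactly the key the predictor now reads via `config["Global"].get("model_name", "cls")`.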