PaddlePaddle / PaddleClas — commit 2f189285

kl_quant for whole_chain and add readme

Author: dongshuilong
Date: Sep 29, 2021
Parent: 890f43f0
5 changed files, +89 −13
Changed files:
- tests/README.md (+70, −0)
- tests/config/MobileNetV3_large_x1_0.txt (+1, −1)
- tests/config/ResNet50_vd.txt (+1, −1)
- tests/prepare.sh (+1, −0)
- tests/test.sh (+16, −11)
tests/README.md (new file, mode 100644)

# Introduction to the train-to-inference-deployment tool-chain test

test.sh is used together with the txt files under the config folder to test a Clas model's full pipeline from training to prediction.

# Install dependencies

- Install PaddlePaddle >= 2.0
- Install the PaddleClas dependencies
```
pip3 install -r ../requirements.txt
```
- Install autolog
```
git clone https://github.com/LDOUBLEV/AutoLog
cd AutoLog
pip3 install -r requirements.txt
python3 setup.py bdist_wheel
pip3 install ./dist/auto_log-1.0.0-py3-none-any.whl
cd ../
```

# Directory layout

```bash
tests/
├── config        # parameter configuration files for the models under test
│   └── *.txt
├── prepare.sh    # downloads the data and models that test.sh needs
└── test.sh       # main test driver
```

# Usage

test.sh supports five run modes. Each mode runs on a different amount of data and serves a different purpose in testing speed and accuracy:

- Mode 1: lite_train_infer — trains on a small amount of data; a quick end-to-end check that the train-to-predict pipeline runs, without verifying accuracy or speed;
```shell
bash tests/prepare.sh ./tests/config/ResNet50_vd.txt 'lite_train_infer'
bash tests/test.sh ./tests/config/ResNet50_vd.txt 'lite_train_infer'
```
- Mode 2: whole_infer — trains on a small amount of data and predicts on a moderate amount; verifies that the trained model can run inference and that its prediction speed is reasonable;
```shell
bash tests/prepare.sh ./tests/config/ResNet50_vd.txt 'whole_infer'
bash tests/test.sh ./tests/config/ResNet50_vd.txt 'whole_infer'
```
- Mode 3: infer — no training; predicts on the full dataset; walks through evaluating a released model and dynamic-to-static export, and checks the inference model's prediction time and accuracy;
```shell
bash tests/prepare.sh ./tests/config/ResNet50_vd.txt 'infer'
# Usage 1:
bash tests/test.sh ./tests/config/ResNet50_vd.txt 'infer'
```
Note that offline (post-training) quantization of a model must be tested with the `infer` mode.
- Mode 4: whole_train_infer (CE) — trains and predicts on the full dataset; verifies training accuracy, prediction accuracy, and prediction speed;
```shell
bash tests/prepare.sh ./tests/config/ResNet50_vd.txt 'whole_train_infer'
bash tests/test.sh ./tests/config/ResNet50_vd.txt 'whole_train_infer'
```
- Mode 5: cpp_infer (CE) — verifies that C++ inference with the inference model runs end to end;
```shell
bash tests/prepare.sh ./tests/config/ResNet50_vd.txt 'cpp_infer'
bash tests/test.sh ./tests/config/ResNet50_vd.txt 'cpp_infer'
```

# Log output

Log files with a `.log` suffix are written under the `tests/output` directory.
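Once a run finishes, a quick way to skim the results is to print the tail of each log. This helper is not part of the repo; the function name `summarize_logs` and the default directory are illustrative, following the log location stated above.

```shell
#!/usr/bin/env bash
# Hypothetical helper (not shipped with PaddleClas): print the last
# line of every .log file left under a directory by a test run.
summarize_logs() {
    local dir=${1:-tests/output}
    local f
    for f in "$dir"/*.log; do
        [ -e "$f" ] || continue   # skip the unexpanded glob when empty
        printf '%s: %s\n' "$f" "$(tail -n 1 "$f")"
    done
}
```

Call it as `summarize_logs tests/output` after `test.sh` completes.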
tests/config/MobileNetV3_large_x1_0.txt

```diff
@@ -35,7 +35,7 @@ export1:null
 export2:null
 inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/whole_chain/MobileNetV3_large_x1_0_inference.tar
 infer_model:../inference/
-infer_export:null
+kl_quant:deploy/slim/quant_post_static.py -c ppcls/configs/ImageNet/MobileNetV3/MobileNetV3_large_x1_0.yaml -o Global.save_inference_dir=./inference
 infer_quant:Fasle
 inference:python/predict_cls.py -c configs/inference_cls.yaml
 -o Global.use_gpu:True|False
```
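These config txts are flat `key:value` lines that test.sh reads field by field. As a rough sketch of how such a line can be looked up — a generic one-liner for illustration, not the repo's actual parser — note that values may themselves contain colons (the URLs above), so only the first colon separates key from value:

```shell
#!/usr/bin/env bash
# Generic sketch (not test.sh's real parsing code): fetch the value of
# a "key:value" line from a config txt. `cut -f 2-` keeps everything
# after the first colon, so URLs in values survive intact.
get_value() {
    local file=$1 key=$2
    grep -m1 "^${key}:" "$file" | cut -d ':' -f 2-
}
```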
tests/config/ResNet50_vd.txt

```diff
@@ -35,7 +35,7 @@ export1:null
 export2:null
 infer_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/whole_chain/ResNet50_vd_inference.tar
 infer_model:../inference/
-infer_export:null
+kl_quant:deploy/slim/quant_post_static.py -c ppcls/configs/ImageNet/ResNet/ResNet50_vd.yaml -o Global.save_inference_dir=./inference
 infer_quant:Fasle
 inference:python/predict_cls.py -c configs/inference_cls.yaml
 -o Global.use_gpu:True|False
```
tests/prepare.sh

```diff
@@ -42,6 +42,7 @@ elif [ ${MODE} = "infer" ] || [ ${MODE} = "cpp_infer" ];then
     ln -s whole_chain_infer ILSVRC2012
     cd ILSVRC2012
     mv val.txt val_list.txt
+    ln -s val_list.txt train_list.txt
     cd ../../
     # download inference model
     eval "wget -nc $inference_model_url"
```
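The one added line reuses the validation list as the training list, so the `infer`/`cpp_infer` data preparation also yields something the training step can consume. The trick in isolation, with illustrative paths and a made-up label file:

```shell
#!/usr/bin/env bash
# Illustration of the added prepare.sh line: after renaming val.txt to
# val_list.txt, a symlink lets the same file also serve as
# train_list.txt. Paths and the sample entry are invented for the demo.
set -e
mkdir -p /tmp/gr_ilsvrc_demo && cd /tmp/gr_ilsvrc_demo
printf 'n01440764/ILSVRC2012_val_00000293.JPEG 0\n' > val_list.txt
ln -sf val_list.txt train_list.txt
# Both names now resolve to identical content.
cmp -s val_list.txt train_list.txt && echo "lists match"
```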
tests/test.sh

```diff
@@ -299,17 +299,6 @@ if [ ${MODE} = "infer" ]; then
     infer_quant_flag=(${infer_is_quant})
     cd deploy
     for infer_model in ${infer_model_dir_list[*]}; do
-        # run export
-        if [ ${infer_run_exports[Count]} != "null" ];then
-            set_export_weight=$(func_set_params "${export_weight}" "${infer_model}")
-            set_save_infer_key=$(func_set_params "${save_infer_key}" "${infer_model}")
-            export_cmd="${python} ${norm_export} ${set_export_weight} ${set_save_infer_key}"
-            eval $export_cmd
-            status_export=$?
-            if [ ${status_export} = 0 ]; then
-                status_check $status_export "${export_cmd}" "../${status_log}"
-            fi
-        fi
         #run inference
         is_quant=${infer_quant_flag[Count]}
         echo "is_quant: ${is_quant}"
@@ -317,6 +306,22 @@ if [ ${MODE} = "infer" ]; then
         Count=$(($Count + 1))
     done
     cd ..
+
+    # for kl_quant
+    echo "kl_quant"
+    if [ ${infer_run_exports} ]; then
+        command="${python} ${infer_run_exports}"
+        eval $command
+        last_status=${PIPESTATUS[0]}
+        status_check $last_status "${command}" "${status_log}"
+        cd inference/quant_post_static_model
+        ln -s __model__ inference.pdmodel
+        ln -s __params__ inference.pdiparams
+        cd ../../deploy
+        is_quant=True
+        func_inference "${python}" "${inference_py}" "${infer_model}/quant_post_static_model" "../${LOG_PATH}" "${infer_img_dir}" ${is_quant}
+        cd ..
+    fi
 elif [ ${MODE} = "cpp_infer" ]; then
     cd deploy
     func_cpp_inference "./cpp/build/clas_system" "../${LOG_PATH}" "${infer_img_dir}"
```
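The kl_quant branch records `${PIPESTATUS[0]}` rather than `$?`. The distinction matters whenever the evaluated command is a pipeline, since `$?` only reflects the final stage. A minimal bash demonstration:

```shell
#!/usr/bin/env bash
# In a pipeline, $? reports only the last stage's exit code; the
# PIPESTATUS array holds one code per stage, so index 0 belongs to the
# command whose failure we actually care about.
false | true
codes=("${PIPESTATUS[@]}")   # capture immediately; any command resets it
echo "first=${codes[0]} last=${codes[1]}"   # → first=1 last=0
```

Note that PIPESTATUS is a bash-ism; the scripts here run under bash, where it is available.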