PaddlePaddle / PaddleClas · Commit 5734d849
Commit 5734d849
Authored on Jun 16, 2022 by Walter; committed by GitHub on Jun 16, 2022

Merge pull request #2072 from HydrogenSulfate/add_kl_quant

Add kl quant chain

Parents: 4ba0e4be, 1143a4f2
Showing 11 changed files with 127 additions and 21 deletions (+127 −21):

deploy/slim/quant_post_static.py (+2 −0)
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0-KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt (+18 −0)
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0-KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt (+14 −0)
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0-KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+14 −0)
test_tipc/config/ResNet/ResNet50_vd-KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt (+18 −0)
test_tipc/config/ResNet/ResNet50_vd-KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt (+14 −0)
test_tipc/config/ResNet/ResNet50_vd-KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+14 −0)
test_tipc/docs/test_inference_cpp.md (+23 −21)
test_tipc/docs/test_serving_infer_cpp.md (+2 −0)
test_tipc/docs/test_serving_infer_python.md (+2 −0)
test_tipc/prepare.sh (+6 −0)
deploy/slim/quant_post_static.py

@@ -43,6 +43,7 @@ def main():
             'inference.pdiparams'))
     config["DataLoader"]["Eval"]["sampler"]["batch_size"] = 1
+    config["DataLoader"]["Eval"]["loader"]["num_workers"] = 0
     init_logger()
     device = paddle.set_device("cpu")
     train_dataloader = build_dataloader(config["DataLoader"], "Eval", device,
...
@@ -67,6 +68,7 @@ def main():
quantize_model_path
=
os
.
path
.
join
(
config
[
"Global"
][
"save_inference_dir"
],
"quant_post_static_model"
),
sample_generator
=
sample_generator
(
train_dataloader
),
batch_size
=
config
[
"DataLoader"
][
"Eval"
][
"sampler"
][
"batch_size"
],
batch_nums
=
10
)
...
...
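Both added lines tweak the Eval DataLoader section of the YAML-derived config dict before the calibration pass. A minimal sketch of that pattern (the dict below is a stand-in with hypothetical original values, not the project's full config):

```python
# Stand-in for the YAML-derived config; only the keys the diff touches.
config = {
    "DataLoader": {
        "Eval": {
            "sampler": {"batch_size": 32},   # hypothetical original value
            "loader": {"num_workers": 4},    # hypothetical original value
        }
    }
}

# Post-training quantization calibrates sample by sample, so the eval
# sampler is pinned to batch_size 1 ...
config["DataLoader"]["Eval"]["sampler"]["batch_size"] = 1
# ... and the loader is forced to single-process reading, avoiding
# multiprocessing workers during the short calibration pass.
config["DataLoader"]["Eval"]["loader"]["num_workers"] = 0
```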
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0-KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt (new file, mode 100644)
===========================cpp_infer_params===========================
model_name:MobileNetV3_large_x1_0_KL
cpp_infer_type:cls
cls_inference_model_dir:./MobileNetV3_large_x1_0_kl_quant_infer/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/MobileNetV3_large_x1_0_kl_quant_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
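Each line in these TIPC config files is a `key:value` pair, and a `|` in the value separates alternative settings to sweep (e.g. `use_gpu:True|False`). A small parser sketch of that format (my illustration; the actual harness parses these in shell via `func_parser_value`):

```python
def parse_tipc_line(line: str):
    """Split on the FIRST colon only, so URL values such as
    'cls_inference_url:https://...' keep their scheme intact."""
    key, _, value = line.partition(":")
    # A '|' in the value lists alternative settings to iterate over.
    return key, value.split("|")

lines = [
    "model_name:MobileNetV3_large_x1_0_KL",
    "use_gpu:True|False",
    "cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/MobileNetV3_large_x1_0_kl_quant_infer.tar",
]
cfg = dict(parse_tipc_line(l) for l in lines)
```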
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0-KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt (new file, mode 100644)
===========================serving_params===========================
model_name:MobileNetV3_large_x1_0_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/MobileNetV3_large_x1_0_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/MobileNetV3_large_x1_0_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/MobileNetV3_large_x1_0_kl_quant_serving/
--serving_client:./deploy/paddleserving/MobileNetV3_large_x1_0_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:null
--use_gpu:0|null
pipline:test_cpp_serving_client.py
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0-KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (new file, mode 100644)
===========================serving_params===========================
model_name:MobileNetV3_large_x1_0_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/MobileNetV3_large_x1_0_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/MobileNetV3_large_x1_0_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/MobileNetV3_large_x1_0_kl_quant_serving/
--serving_client:./deploy/paddleserving/MobileNetV3_large_x1_0_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:classification_web_service.py
--use_gpu:0|null
pipline:pipeline_http_client.py
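The `trans_model` block above describes the model-conversion step the harness runs before serving: one `paddle_serving_client.convert` invocation assembled from the `--`-prefixed fields. A sketch of that expansion (the assembly code is mine, not the project's actual scripts):

```python
# Flag/value pairs taken from the config file above.
params = {
    "--dirname": "./deploy/paddleserving/MobileNetV3_large_x1_0_kl_quant_infer/",
    "--model_filename": "inference.pdmodel",
    "--params_filename": "inference.pdiparams",
    "--serving_server": "./deploy/paddleserving/MobileNetV3_large_x1_0_kl_quant_serving/",
    "--serving_client": "./deploy/paddleserving/MobileNetV3_large_x1_0_kl_quant_client/",
}
python_bin = "python3.7"  # the 'python:' field above

# 'trans_model:-m paddle_serving_client.convert' plus the flags.
cmd = [python_bin, "-m", "paddle_serving_client.convert"]
for flag, value in params.items():
    cmd += [flag, value]
command_line = " ".join(cmd)
```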
test_tipc/config/ResNet/ResNet50_vd-KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt (new file, mode 100644)
===========================cpp_infer_params===========================
model_name:ResNet50_vd_KL
cpp_infer_type:cls
cls_inference_model_dir:./ResNet50_vd_kl_quant_infer/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/ResNet50_vd_kl_quant_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/ResNet/ResNet50_vd-KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt (new file, mode 100644)
===========================serving_params===========================
model_name:ResNet50_vd_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/ResNet50_vd_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/ResNet50_vd_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/ResNet50_vd_kl_quant_serving/
--serving_client:./deploy/paddleserving/ResNet50_vd_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:null
--use_gpu:0|null
pipline:test_cpp_serving_client.py
test_tipc/config/ResNet/ResNet50_vd-KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (new file, mode 100644)
===========================serving_params===========================
model_name:ResNet50_vd_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/ResNet50_vd_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/ResNet50_vd_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/ResNet50_vd_kl_quant_serving/
--serving_client:./deploy/paddleserving/ResNet50_vd_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:classification_web_service.py
--use_gpu:0|null
pipline:pipeline_http_client.py
test_tipc/docs/test_inference_cpp.md
@@ -7,8 +7,9 @@ The main program for Linux GPU/CPU C++ inference functional testing is `test_inference_cpp.sh`, which …

- Inference-related:

 | Algorithm | Model | device_CPU | device_GPU |
-| :----: | :----: | :----: | :----: |
+| :-------------: | :---------------------------------------: | :--------: | :--------: |
 | MobileNetV3 | MobileNetV3_large_x1_0 | Supported | Supported |
+| MobileNetV3 | MobileNetV3_large_x1_0_KL | Supported | Supported |
 | PP-ShiTu | PPShiTu_general_rec, PPShiTu_mainbody_det | Supported | Supported |
 | PP-ShiTu | PPShiTu_mainbody_det | Supported | Supported |
 | PPHGNet | PPHGNet_small | Supported | Supported |
@@ -24,6 +25,7 @@ The main program for Linux GPU/CPU C++ inference functional testing is `test_inference_cpp.sh`, which …

 | PPLCNetV2 | PPLCNetV2_base | Supported | Supported |
 | ResNet | ResNet50 | Supported | Supported |
 | ResNet | ResNet50_vd | Supported | Supported |
+| ResNet | ResNet50_vd_KL | Supported | Supported |
 | SwinTransformer | SwinTransformer_tiny_patch4_window7_224 | Supported | Supported |

 ## 2. Test procedure (using **ResNet50** as an example)
@@ -167,11 +169,11 @@ build/paddle_inference_install_dir/

 * The [Paddle Inference library download page](https://paddleinference.paddlepaddle.org.cn/user_guides/download_lib.html) provides prebuilt Linux inference libraries for different CUDA versions; check the page and choose the version that fits your environment.

-Using the `manylinux_cuda11.1_cudnn8.1_avx_mkl_trt7_gcc8.2` build as an example, download and extract it with:
+Using the `manylinux_cuda10.1_cudnn7.6_avx_mkl_trt6_gcc8.2` build as an example, download and extract it with:

 ```shell
-wget https://paddle-inference-lib.bj.bcebos.com/2.2.2/cxx_c/Linux/GPU/x86-64_gcc8.2_avx_mkl_cuda11.1_cudnn8.1.1_trt7.2.3.4/paddle_inference.tgz
+wget https://paddle-inference-lib.bj.bcebos.com/2.2.2/cxx_c/Linux/GPU/x86-64_gcc8.2_avx_mkl_cuda10.1_cudnn7.6.5_trt6.0.1.5/paddle_inference.tgz
 tar -xvf paddle_inference.tgz
 ```
test_tipc/docs/test_serving_infer_cpp.md
@@ -10,6 +10,7 @@ The main program for Linux GPU/CPU C++ serving deployment testing is `test_serving_infer_cpp.sh`, which …

 | Algorithm | Model | device_CPU | device_GPU |
 | :-------------: | :---------------------------------------: | :--------: | :--------: |
 | MobileNetV3 | MobileNetV3_large_x1_0 | Supported | Supported |
+| MobileNetV3 | MobileNetV3_large_x1_0_KL | Supported | Supported |
 | PP-ShiTu | PPShiTu_general_rec, PPShiTu_mainbody_det | Supported | Supported |
 | PPHGNet | PPHGNet_small | Supported | Supported |
 | PPHGNet | PPHGNet_tiny | Supported | Supported |
@@ -24,6 +25,7 @@ The main program for Linux GPU/CPU C++ serving deployment testing is `test_serving_infer_cpp.sh`, which …

 | PPLCNetV2 | PPLCNetV2_base | Supported | Supported |
 | ResNet | ResNet50 | Supported | Supported |
 | ResNet | ResNet50_vd | Supported | Supported |
+| ResNet | ResNet50_vd_KL | Supported | Supported |
 | SwinTransformer | SwinTransformer_tiny_patch4_window7_224 | Supported | Supported |
test_tipc/docs/test_serving_infer_python.md
@@ -10,6 +10,7 @@ The main program for Linux GPU/CPU PYTHON serving deployment testing is `test_serving_infer_pyt…

 | Algorithm | Model | device_CPU | device_GPU |
 | :-------------: | :---------------------------------------: | :--------: | :--------: |
 | MobileNetV3 | MobileNetV3_large_x1_0 | Supported | Supported |
+| MobileNetV3 | MobileNetV3_large_x1_0_KL | Supported | Supported |
 | PP-ShiTu | PPShiTu_general_rec, PPShiTu_mainbody_det | Supported | Supported |
 | PPHGNet | PPHGNet_small | Supported | Supported |
 | PPHGNet | PPHGNet_tiny | Supported | Supported |
@@ -24,6 +25,7 @@ The main program for Linux GPU/CPU PYTHON serving deployment testing is `test_serving_infer_pyt…

 | PPLCNetV2 | PPLCNetV2_base | Supported | Supported |
 | ResNet | ResNet50 | Supported | Supported |
 | ResNet | ResNet50_vd | Supported | Supported |
+| ResNet | ResNet50_vd_KL | Supported | Supported |
 | SwinTransformer | SwinTransformer_tiny_patch4_window7_224 | Supported | Supported |
...
test_tipc/prepare.sh
@@ -83,6 +83,12 @@ if [[ ${MODE} = "cpp_infer" ]]; then
         popd
         echo "################### build opencv finished ###################"
     fi
+    if [[ ! -d "./deploy/cpp/paddle_inference/" ]]; then
+        pushd ./deploy/cpp/
+        wget -nc https://paddle-inference-lib.bj.bcebos.com/2.2.2/cxx_c/Linux/GPU/x86-64_gcc8.2_avx_mkl_cuda10.1_cudnn7.6.5_trt6.0.1.5/paddle_inference.tgz
+        tar xf paddle_inference.tgz
+        popd
+    fi
     if [[ $FILENAME == *infer_cpp_linux_gpu_cpu.txt ]]; then
         cpp_type=$(func_parser_value "${lines[2]}")
         cls_inference_model_dir=$(func_parser_value "${lines[3]}")
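The six added lines follow the same guard style as the OpenCV block above them: skip the fetch when the target directory already exists, and `wget -nc` additionally refuses to re-download an existing tarball, so reruns of `prepare.sh` are cheap. The pattern in isolation (the directory name and the logged marker below are placeholders standing in for `./deploy/cpp/paddle_inference/` and the real wget/tar steps):

```shell
#!/usr/bin/env bash
set -e
workdir=$(mktemp -d)
cd "$workdir"

fetch_once() {
    # Only fetch/unpack when the marker directory is absent.
    local marker_dir="$1"
    if [[ ! -d "$marker_dir" ]]; then
        mkdir -p "$marker_dir"           # stands in for wget -nc + tar xf
        echo "fetched" >> fetch.log
    fi
}

fetch_once ./paddle_inference
fetch_once ./paddle_inference   # second run: directory exists, no re-fetch
```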