PaddlePaddle / PaddleClas
Commit 1d9b6fe1
Unverified commit 1d9b6fe1
Authored Jun 22, 2022 by Walter; committed by GitHub on Jun 22, 2022
Merge pull request #2091 from HydrogenSulfate/add_more_KL
add 5 KL model
Parents: 300e765c, f5517239
Showing 28 changed files with 317 additions and 71 deletions (+317 −71)
deploy/slim/quant_post_static.py  +2 −0
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt  +0 −0
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt  +0 −0
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +0 −0
test_tipc/config/PP-ShiTu/PPShiTu_general_rec_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt  +18 −0
test_tipc/config/PP-ShiTu/PPShiTu_general_rec_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt  +14 −0
test_tipc/config/PP-ShiTu/PPShiTu_general_rec_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +14 −0
test_tipc/config/PPHGNet/PPHGNet_small_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt  +18 −0
test_tipc/config/PPHGNet/PPHGNet_small_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt  +14 −0
test_tipc/config/PPHGNet/PPHGNet_small_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +14 −0
test_tipc/config/PPLCNet/PPLCNet_x1_0_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt  +18 −0
test_tipc/config/PPLCNet/PPLCNet_x1_0_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt  +14 −0
test_tipc/config/PPLCNet/PPLCNet_x1_0_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +14 −0
test_tipc/config/PPLCNetV2/PPLCNetV2_base_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt  +18 −0
test_tipc/config/PPLCNetV2/PPLCNetV2_base_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt  +14 −0
test_tipc/config/PPLCNetV2/PPLCNetV2_base_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +14 −0
test_tipc/config/ResNet/ResNet50_vd_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt  +0 −0
test_tipc/config/ResNet/ResNet50_vd_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt  +0 −0
test_tipc/config/ResNet/ResNet50_vd_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +0 −0
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt  +18 −0
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt  +14 −0
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt  +14 −0
test_tipc/docs/test_inference_cpp.md  +25 −21
test_tipc/docs/test_serving_infer_cpp.md  +25 −20
test_tipc/docs/test_serving_infer_python.md  +25 −20
test_tipc/prepare.sh  +1 −1
test_tipc/test_inference_cpp.sh  +8 −8
test_tipc/test_serving_infer_python.sh  +1 −1
deploy/slim/quant_post_static.py
...
...
@@ -41,6 +41,8 @@ def main():
             'inference.pdmodel')) and os.path.exists(
         os.path.join(config["Global"]["save_inference_dir"],
                      'inference.pdiparams'))
+    if "Query" in config["DataLoader"]["Eval"]:
+        config["DataLoader"]["Eval"] = config["DataLoader"]["Eval"]["Query"]
     config["DataLoader"]["Eval"]["sampler"]["batch_size"] = 1
     config["DataLoader"]["Eval"]["loader"]["num_workers"] = 0
...
...
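The two added lines handle retrieval-style (PP-ShiTu) configs, whose `DataLoader.Eval` section nests the evaluation set under a `Query` key; collapsing `Eval` to that sub-section lets the same sampling code serve both classification and recognition configs. A minimal standalone sketch of that logic, with an illustrative config shape (the dict below is not copied from a real YAML, and `normalize_eval_config` is our name, not one from PaddleClas):

```python
# Mirrors the config normalization added to quant_post_static.py:
# collapse DataLoader.Eval to its "Query" sub-section when present, then
# force single-sample, single-process loading for post-training quantization.

def normalize_eval_config(config):
    eval_cfg = config["DataLoader"]["Eval"]
    if "Query" in eval_cfg:
        # Retrieval configs nest the eval set under "Query" (beside "Gallery").
        config["DataLoader"]["Eval"] = eval_cfg = eval_cfg["Query"]
    eval_cfg["sampler"]["batch_size"] = 1
    eval_cfg["loader"]["num_workers"] = 0
    return config

# Illustrative retrieval-style config.
rec_cfg = {"DataLoader": {"Eval": {"Query": {
    "sampler": {"batch_size": 64}, "loader": {"num_workers": 4}}}}}
normalize_eval_config(rec_cfg)
print(rec_cfg["DataLoader"]["Eval"]["sampler"]["batch_size"])  # 1
```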
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0-KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt → test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt (file moved)
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0-KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt → test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt (file moved)
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0-KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt → test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (file moved)
test_tipc/config/PP-ShiTu/PPShiTu_general_rec_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
0 → 100644
===========================cpp_infer_params===========================
model_name:GeneralRecognition_PPLCNet_x2_5_KL
cpp_infer_type:cls
cls_inference_model_dir:./general_PPLCNet_x2_5_lite_v1.0_kl_quant_infer/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/general_PPLCNet_x2_5_lite_v1.0_kl_quant_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
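The `cpp_infer_params` file above is a flat list of `key:value` pairs with `===` banner lines as section headers; values such as `cls_inference_url` contain colons themselves, so a parser must split on the first colon only. A hypothetical parser for this format (the helper name is ours, not TIPC's):

```python
# Parse a TIPC-style config: one "key:value" per line, "===...===" banners
# separate sections, and values may contain ":" (e.g. URLs), so we split
# on the first colon only.

def parse_tipc_config(text):
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("="):
            continue  # skip blank lines and section banners
        key, _, value = line.partition(":")
        params[key] = value
    return params

sample = """===========================cpp_infer_params===========================
model_name:GeneralRecognition_PPLCNet_x2_5_KL
use_gpu:True|False
cls_inference_url:https://example.com/model.tar"""
cfg = parse_tipc_config(sample)
print(cfg["use_gpu"])  # True|False
```

Note that multi-valued entries like `use_gpu:True|False` stay as `|`-separated strings; the TIPC test scripts expand them themselves.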
test_tipc/config/PP-ShiTu/PPShiTu_general_rec_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:GeneralRecognition_PPLCNet_x2_5_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/general_PPLCNet_x2_5_lite_v1.0_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/general_PPLCNet_x2_5_lite_v1.0_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/GeneralRecognition_PPLCNet_x2_5_kl_quant_serving/
--serving_client:./deploy/paddleserving/GeneralRecognition_PPLCNet_x2_5_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:null
--use_gpu:0|null
pipline:test_cpp_serving_client.py
test_tipc/config/PP-ShiTu/PPShiTu_general_rec_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:GeneralRecognition_PPLCNet_x2_5_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/general_PPLCNet_x2_5_lite_v1.0_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/general_PPLCNet_x2_5_lite_v1.0_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/GeneralRecognition_PPLCNet_x2_5_kl_quant_serving/
--serving_client:./deploy/paddleserving/GeneralRecognition_PPLCNet_x2_5_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:classification_web_service.py
--use_gpu:0|null
pipline:pipeline_http_client.py
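In the `serving_params` files above, `trans_model` names the conversion entry point (`-m paddle_serving_client.convert`) and each `--flag:value` line supplies one of its arguments. A sketch of how such keys could be assembled into the conversion command line; the command is only built here, never executed, and the assembly helper is our own illustration rather than TIPC's actual code:

```python
# Build the model-conversion command from serving_params-style keys:
# "python" + the "trans_model" entry point, then every "--flag" key
# becomes "--flag value" in order.

def build_trans_model_cmd(params):
    cmd = [params["python"]] + params["trans_model"].split()
    for key, value in params.items():
        if key.startswith("--"):
            cmd += [key, value]
    return " ".join(cmd)

params = {
    "python": "python3.7",
    "trans_model": "-m paddle_serving_client.convert",
    "--dirname": "./deploy/paddleserving/general_PPLCNet_x2_5_lite_v1.0_kl_quant_infer/",
    "--model_filename": "inference.pdmodel",
    "--params_filename": "inference.pdiparams",
}
print(build_trans_model_cmd(params))
```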
test_tipc/config/PPHGNet/PPHGNet_small_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
0 → 100644
===========================cpp_infer_params===========================
model_name:PPHGNet_small_KL
cpp_infer_type:cls
cls_inference_model_dir:./PPHGNet_small_kl_quant_infer/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/PPHGNet_small_kl_quant_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPHGNet/PPHGNet_small_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:PPHGNet_small_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/PPHGNet_small_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/PPHGNet_small_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/PPHGNet_small_kl_quant_serving/
--serving_client:./deploy/paddleserving/PPHGNet_small_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:null
--use_gpu:0|null
pipline:test_cpp_serving_client.py
test_tipc/config/PPHGNet/PPHGNet_small_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:PPHGNet_small_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/PPHGNet_small_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/PPHGNet_small_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/PPHGNet_small_kl_quant_serving/
--serving_client:./deploy/paddleserving/PPHGNet_small_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:classification_web_service.py
--use_gpu:0|null
pipline:pipeline_http_client.py
test_tipc/config/PPLCNet/PPLCNet_x1_0_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
0 → 100644
===========================cpp_infer_params===========================
model_name:PPLCNet_x1_0_KL
cpp_infer_type:cls
cls_inference_model_dir:./PPLCNet_x1_0_kl_quant_infer/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/PPLCNet_x1_0_kl_quant_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNet/PPLCNet_x1_0_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:PPLCNet_x1_0_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/PPLCNet_x1_0_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/PPLCNet_x1_0_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/PPLCNet_x1_0_kl_quant_serving/
--serving_client:./deploy/paddleserving/PPLCNet_x1_0_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:null
--use_gpu:0|null
pipline:test_cpp_serving_client.py
test_tipc/config/PPLCNet/PPLCNet_x1_0_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:PPLCNet_x1_0_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/PPLCNet_x1_0_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/PPLCNet_x1_0_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/PPLCNet_x1_0_kl_quant_serving/
--serving_client:./deploy/paddleserving/PPLCNet_x1_0_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:classification_web_service.py
--use_gpu:0|null
pipline:pipeline_http_client.py
test_tipc/config/PPLCNetV2/PPLCNetV2_base_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
0 → 100644
===========================cpp_infer_params===========================
model_name:PPLCNetV2_base_KL
cpp_infer_type:cls
cls_inference_model_dir:./PPLCNetV2_base_kl_quant_infer/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/PPLCNetV2_base_kl_quant_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/PPLCNetV2/PPLCNetV2_base_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:PPLCNetV2_base_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/PPLCNetV2_base_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/PPLCNetV2_base_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/PPLCNetV2_base_kl_quant_serving/
--serving_client:./deploy/paddleserving/PPLCNetV2_base_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:null
--use_gpu:0|null
pipline:test_cpp_serving_client.py
test_tipc/config/PPLCNetV2/PPLCNetV2_base_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:PPLCNetV2_base_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/PPLCNetV2_base_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/PPLCNetV2_base_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/PPLCNetV2_base_kl_quant_serving/
--serving_client:./deploy/paddleserving/PPLCNetV2_base_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:classification_web_service.py
--use_gpu:0|null
pipline:pipeline_http_client.py
test_tipc/config/ResNet/ResNet50_vd-KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt → test_tipc/config/ResNet/ResNet50_vd_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt (file moved)
test_tipc/config/ResNet/ResNet50_vd-KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt → test_tipc/config/ResNet/ResNet50_vd_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt (file moved)
test_tipc/config/ResNet/ResNet50_vd-KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt → test_tipc/config/ResNet/ResNet50_vd_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (file moved)
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_KL_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
0 → 100644
===========================cpp_infer_params===========================
model_name:SwinTransformer_tiny_patch4_window7_224_KL
cpp_infer_type:cls
cls_inference_model_dir:./SwinTransformer_tiny_patch4_window7_224_kl_quant_infer/
det_inference_model_dir:
cls_inference_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/SwinTransformer_tiny_patch4_window7_224_kl_quant_infer.tar
det_inference_url:
infer_quant:False
inference_cmd:./deploy/cpp/build/clas_system -c inference_cls.yaml
use_gpu:True|False
enable_mkldnn:False
cpu_threads:1
batch_size:1
use_tensorrt:False
precision:fp32
image_dir:./dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
benchmark:False
generate_yaml_cmd:python3.7 test_tipc/generate_cpp_yaml.py
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_KL_linux_gpu_normal_normal_serving_cpp_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:SwinTransformer_tiny_patch4_window7_224_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/SwinTransformer_tiny_patch4_window7_224_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/SwinTransformer_tiny_patch4_window7_224_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/SwinTransformer_tiny_patch4_window7_224_kl_quant_serving/
--serving_client:./deploy/paddleserving/SwinTransformer_tiny_patch4_window7_224_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:null
--use_gpu:0|null
pipline:test_cpp_serving_client.py
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_KL_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:SwinTransformer_tiny_patch4_window7_224_KL
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/slim_model/SwinTransformer_tiny_patch4_window7_224_kl_quant_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/SwinTransformer_tiny_patch4_window7_224_kl_quant_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/SwinTransformer_tiny_patch4_window7_224_kl_quant_serving/
--serving_client:./deploy/paddleserving/SwinTransformer_tiny_patch4_window7_224_kl_quant_client/
serving_dir:./deploy/paddleserving
web_service:classification_web_service.py
--use_gpu:0|null
pipline:pipeline_http_client.py
test_tipc/docs/test_inference_cpp.md
...
...
@@ -6,27 +6,31 @@ The main program for Linux GPU/CPU C++ inference testing is `test_inference_cpp.sh`, which
- Inference-related:

Old table (removed):

| Algorithm | Model | device_CPU | device_GPU |
| :-: | :-: | :-: | :-: |
| MobileNetV3 | MobileNetV3_large_x1_0 | Supported | Supported |
| MobileNetV3 | MobileNetV3_large_x1_0_KL | Supported | Supported |
| PP-ShiTu | PPShiTu_general_rec, PPShiTu_mainbody_det | Supported | Supported |
| PP-ShiTu | PPShiTu_mainbody_det | Supported | Supported |
| PPHGNet | PPHGNet_small | Supported | Supported |
| PPHGNet | PPHGNet_tiny | Supported | Supported |
| PPLCNet | PPLCNet_x0_25 | Supported | Supported |
| PPLCNet | PPLCNet_x0_35 | Supported | Supported |
| PPLCNet | PPLCNet_x0_5 | Supported | Supported |
| PPLCNet | PPLCNet_x0_75 | Supported | Supported |
| PPLCNet | PPLCNet_x1_0 | Supported | Supported |
| PPLCNet | PPLCNet_x1_5 | Supported | Supported |
| PPLCNet | PPLCNet_x2_0 | Supported | Supported |
| PPLCNet | PPLCNet_x2_5 | Supported | Supported |
| PPLCNetV2 | PPLCNetV2_base | Supported | Supported |
| ResNet | ResNet50 | Supported | Supported |
| ResNet | ResNet50_vd | Supported | Supported |
| ResNet | ResNet50_vd_KL | Supported | Supported |
| SwinTransformer | SwinTransformer_tiny_patch4_window7_224 | Supported | Supported |

New table (added):

| Algorithm | Model | device_CPU | device_GPU |
| :-: | :-: | :-: | :-: |
| MobileNetV3 | MobileNetV3_large_x1_0 | Supported | Supported |
| MobileNetV3 | MobileNetV3_large_x1_0_KL | Supported | Supported |
| PP-ShiTu | PPShiTu_general_rec, PPShiTu_mainbody_det | Supported | Supported |
| PP-ShiTu | GeneralRecognition_PPLCNet_x2_5_KL | Supported | Supported |
| PPHGNet | PPHGNet_small | Supported | Supported |
| PPHGNet | PPHGNet_small_KL | Supported | Supported |
| PPHGNet | PPHGNet_tiny | Supported | Supported |
| PPLCNet | PPLCNet_x0_25 | Supported | Supported |
| PPLCNet | PPLCNet_x0_35 | Supported | Supported |
| PPLCNet | PPLCNet_x0_5 | Supported | Supported |
| PPLCNet | PPLCNet_x0_75 | Supported | Supported |
| PPLCNet | PPLCNet_x1_0 | Supported | Supported |
| PPLCNet | PPLCNet_x1_0_KL | Supported | Supported |
| PPLCNet | PPLCNet_x1_5 | Supported | Supported |
| PPLCNet | PPLCNet_x2_0 | Supported | Supported |
| PPLCNet | PPLCNet_x2_5 | Supported | Supported |
| PPLCNetV2 | PPLCNetV2_base | Supported | Supported |
| PPLCNetV2 | PPLCNetV2_base_KL | Supported | Supported |
| ResNet | ResNet50 | Supported | Supported |
| ResNet | ResNet50_vd | Supported | Supported |
| ResNet | ResNet50_vd_KL | Supported | Supported |
| SwinTransformer | SwinTransformer_tiny_patch4_window7_224 | Supported | Supported |
| SwinTransformer | SwinTransformer_tiny_patch4_window7_224_KL | Supported | Supported |

## 2. Test procedure (using **ResNet50** as an example)
...
...
test_tipc/docs/test_serving_infer_cpp.md
...
...
@@ -7,26 +7,31 @@ The main program for Linux GPU/CPU C++ serving deployment testing is `test_serving_infer_cpp.sh`
- Inference-related:

Old table (removed):

| Algorithm | Model | device_CPU | device_GPU |
| :-: | :-: | :-: | :-: |
| MobileNetV3 | MobileNetV3_large_x1_0 | Supported | Supported |
| MobileNetV3 | MobileNetV3_large_x1_0_KL | Supported | Supported |
| PP-ShiTu | PPShiTu_general_rec, PPShiTu_mainbody_det | Supported | Supported |
| PPHGNet | PPHGNet_small | Supported | Supported |
| PPHGNet | PPHGNet_tiny | Supported | Supported |
| PPLCNet | PPLCNet_x0_25 | Supported | Supported |
| PPLCNet | PPLCNet_x0_35 | Supported | Supported |
| PPLCNet | PPLCNet_x0_5 | Supported | Supported |
| PPLCNet | PPLCNet_x0_75 | Supported | Supported |
| PPLCNet | PPLCNet_x1_0 | Supported | Supported |
| PPLCNet | PPLCNet_x1_5 | Supported | Supported |
| PPLCNet | PPLCNet_x2_0 | Supported | Supported |
| PPLCNet | PPLCNet_x2_5 | Supported | Supported |
| PPLCNetV2 | PPLCNetV2_base | Supported | Supported |
| ResNet | ResNet50 | Supported | Supported |
| ResNet | ResNet50_vd | Supported | Supported |
| ResNet | ResNet50_vd_KL | Supported | Supported |
| SwinTransformer | SwinTransformer_tiny_patch4_window7_224 | Supported | Supported |

New table (added):

| Algorithm | Model | device_CPU | device_GPU |
| :-: | :-: | :-: | :-: |
| MobileNetV3 | MobileNetV3_large_x1_0 | Supported | Supported |
| MobileNetV3 | MobileNetV3_large_x1_0_KL | Supported | Supported |
| PP-ShiTu | PPShiTu_general_rec, PPShiTu_mainbody_det | Supported | Supported |
| PP-ShiTu | GeneralRecognition_PPLCNet_x2_5_KL | Supported | Supported |
| PPHGNet | PPHGNet_small | Supported | Supported |
| PPHGNet | PPHGNet_small_KL | Supported | Supported |
| PPHGNet | PPHGNet_tiny | Supported | Supported |
| PPLCNet | PPLCNet_x0_25 | Supported | Supported |
| PPLCNet | PPLCNet_x0_35 | Supported | Supported |
| PPLCNet | PPLCNet_x0_5 | Supported | Supported |
| PPLCNet | PPLCNet_x0_75 | Supported | Supported |
| PPLCNet | PPLCNet_x1_0 | Supported | Supported |
| PPLCNet | PPLCNet_x1_0_KL | Supported | Supported |
| PPLCNet | PPLCNet_x1_5 | Supported | Supported |
| PPLCNet | PPLCNet_x2_0 | Supported | Supported |
| PPLCNet | PPLCNet_x2_5 | Supported | Supported |
| PPLCNetV2 | PPLCNetV2_base | Supported | Supported |
| PPLCNetV2 | PPLCNetV2_base_KL | Supported | Supported |
| ResNet | ResNet50 | Supported | Supported |
| ResNet | ResNet50_vd | Supported | Supported |
| ResNet | ResNet50_vd_KL | Supported | Supported |
| SwinTransformer | SwinTransformer_tiny_patch4_window7_224 | Supported | Supported |
| SwinTransformer | SwinTransformer_tiny_patch4_window7_224_KL | Supported | Supported |

## 2. Test procedure
...
...
test_tipc/docs/test_serving_infer_python.md
...
...
@@ -7,26 +7,31 @@ The main program for Linux GPU/CPU PYTHON serving deployment testing is `test_serving_infer_pyt
- Inference-related:

Old table (removed):

| Algorithm | Model | device_CPU | device_GPU |
| :-: | :-: | :-: | :-: |
| MobileNetV3 | MobileNetV3_large_x1_0 | Supported | Supported |
| MobileNetV3 | MobileNetV3_large_x1_0_KL | Supported | Supported |
| PP-ShiTu | PPShiTu_general_rec, PPShiTu_mainbody_det | Supported | Supported |
| PPHGNet | PPHGNet_small | Supported | Supported |
| PPHGNet | PPHGNet_tiny | Supported | Supported |
| PPLCNet | PPLCNet_x0_25 | Supported | Supported |
| PPLCNet | PPLCNet_x0_35 | Supported | Supported |
| PPLCNet | PPLCNet_x0_5 | Supported | Supported |
| PPLCNet | PPLCNet_x0_75 | Supported | Supported |
| PPLCNet | PPLCNet_x1_0 | Supported | Supported |
| PPLCNet | PPLCNet_x1_5 | Supported | Supported |
| PPLCNet | PPLCNet_x2_0 | Supported | Supported |
| PPLCNet | PPLCNet_x2_5 | Supported | Supported |
| PPLCNetV2 | PPLCNetV2_base | Supported | Supported |
| ResNet | ResNet50 | Supported | Supported |
| ResNet | ResNet50_vd | Supported | Supported |
| ResNet | ResNet50_vd_KL | Supported | Supported |
| SwinTransformer | SwinTransformer_tiny_patch4_window7_224 | Supported | Supported |

New table (added):

| Algorithm | Model | device_CPU | device_GPU |
| :-: | :-: | :-: | :-: |
| MobileNetV3 | MobileNetV3_large_x1_0 | Supported | Supported |
| MobileNetV3 | MobileNetV3_large_x1_0_KL | Supported | Supported |
| PP-ShiTu | PPShiTu_general_rec, PPShiTu_mainbody_det | Supported | Supported |
| PP-ShiTu | GeneralRecognition_PPLCNet_x2_5_KL | Supported | Supported |
| PPHGNet | PPHGNet_small | Supported | Supported |
| PPHGNet | PPHGNet_small_KL | Supported | Supported |
| PPHGNet | PPHGNet_tiny | Supported | Supported |
| PPLCNet | PPLCNet_x0_25 | Supported | Supported |
| PPLCNet | PPLCNet_x0_35 | Supported | Supported |
| PPLCNet | PPLCNet_x0_5 | Supported | Supported |
| PPLCNet | PPLCNet_x0_75 | Supported | Supported |
| PPLCNet | PPLCNet_x1_0 | Supported | Supported |
| PPLCNet | PPLCNet_x1_0_KL | Supported | Supported |
| PPLCNet | PPLCNet_x1_5 | Supported | Supported |
| PPLCNet | PPLCNet_x2_0 | Supported | Supported |
| PPLCNet | PPLCNet_x2_5 | Supported | Supported |
| PPLCNetV2 | PPLCNetV2_base | Supported | Supported |
| PPLCNetV2 | PPLCNetV2_base_KL | Supported | Supported |
| ResNet | ResNet50 | Supported | Supported |
| ResNet | ResNet50_vd | Supported | Supported |
| ResNet | ResNet50_vd_KL | Supported | Supported |
| SwinTransformer | SwinTransformer_tiny_patch4_window7_224 | Supported | Supported |
| SwinTransformer | SwinTransformer_tiny_patch4_window7_224_KL | Supported | Supported |

## 2. Test procedure
...
...
test_tipc/prepare.sh
...
...
@@ -208,7 +208,7 @@ fi
 if [[ ${MODE} = "serving_infer" ]]; then
     # prepare serving env
     python_name=$(func_parser_value "${lines[2]}")
-    if [[ ${model_name} =~ "ShiTu" ]]; then
+    if [[ ${model_name} = "PPShiTu" ]]; then
         cls_inference_model_url=$(func_parser_value "${lines[3]}")
         cls_tar_name=$(func_get_url_file_name "${cls_inference_model_url}")
         det_inference_model_url=$(func_parser_value "${lines[4]}")
...
...
test_tipc/test_inference_cpp.sh
...
...
@@ -237,14 +237,14 @@ echo "################### build PaddleClas demo finished ###################"
 # set cuda device
-# GPUID=$2
-# if [ ${#GPUID} -le 0 ];then
-#     env="export CUDA_VISIBLE_DEVICES=0"
-# else
-#     env="export CUDA_VISIBLE_DEVICES=${GPUID}"
-# fi
-# set CUDA_VISIBLE_DEVICES
-# eval $env
+GPUID=$3
+if [ ${#GPUID} -le 0 ];then
+    env="export CUDA_VISIBLE_DEVICES=0"
+else
+    env="export CUDA_VISIBLE_DEVICES=${GPUID}"
+fi
+set CUDA_VISIBLE_DEVICES
+eval $env
 echo "################### run test ###################"
...
...
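The shell fragment above defaults `CUDA_VISIBLE_DEVICES` to GPU 0 when no device id is passed as the script's third argument, and exports the given id otherwise. The same guard expressed in Python, for illustration only (`select_gpu` is our name, not a TIPC function):

```python
# Replicates the test_inference_cpp.sh device selection:
# empty GPUID -> CUDA_VISIBLE_DEVICES=0, otherwise use the given id.
import os

def select_gpu(gpuid=""):
    os.environ["CUDA_VISIBLE_DEVICES"] = gpuid if len(gpuid) > 0 else "0"
    return os.environ["CUDA_VISIBLE_DEVICES"]

print(select_gpu(""))   # 0
print(select_gpu("3"))  # 3
```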
test_tipc/test_serving_infer_python.sh
...
...
@@ -310,7 +310,7 @@ echo "################### run test ###################"
 export Count=0
 IFS="|"
-if [[ ${model_name} =~ "ShiTu" ]]; then
+if [[ ${model_name} = "PPShiTu" ]]; then
     func_serving_rec
 else
     func_serving_cls
...
...
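The one-line change above tightens the dispatch between the recognition and classification serving tests: a `=~ "ShiTu"` substring match becomes an exact `= "PPShiTu"` comparison, so model names that merely contain "ShiTu" no longer select the recognition path. A sketch of both checks side by side; the function and return strings are illustrative, not TIPC code:

```python
# Dispatch sketch: exact comparison (new behavior) vs. substring match
# (old behavior) on the model name.

def pick_serving_func(model_name, exact=True):
    is_shitu = (model_name == "PPShiTu") if exact else ("ShiTu" in model_name)
    return "func_serving_rec" if is_shitu else "func_serving_cls"

print(pick_serving_func("PPShiTu"))                          # func_serving_rec
print(pick_serving_func("GeneralRecognition_PPLCNet_x2_5_KL"))  # func_serving_cls
```

Under either check, the new `GeneralRecognition_PPLCNet_x2_5_KL` model falls through to the classification branch, matching its `classification_web_service.py` config above.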