PaddlePaddle / PaddleClas
Commit
1c54b6e4
Authored on Jun 02, 2022 by HydrogenSulfate

debug

Parent: 25acd2ea
Showing 18 changed files with 87 additions and 40 deletions (+87 / -40)
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/PPHGNet/PPHGNet_small_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+14 / -0)
test_tipc/config/PPHGNet/PPHGNet_tiny_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+14 / -0)
test_tipc/config/PPLCNet/PPLCNet_x0_25_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/PPLCNet/PPLCNet_x0_35_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/PPLCNet/PPLCNet_x0_5_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/PPLCNet/PPLCNet_x0_75_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/PPLCNet/PPLCNet_x1_0_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/PPLCNet/PPLCNet_x1_5_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/PPLCNet/PPLCNet_x2_0_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/PPLCNet/PPLCNet_x2_5_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/PPLCNetV2/PPLCNetV2_base_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/ResNet/ResNet50_vd_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt (+1 / -1)
test_tipc/docs/test_serving_infer_python.md (+10 / -10)
test_tipc/prepare.sh (+13 / -11)
test_tipc/test_serving_infer.sh (+23 / -6)
test_tipc/config/MobileNetV3/MobileNetV3_large_x1_0_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/MobileNetV3_large_x1_0_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/MobileNetV3_large_x1_0_server/
+ --serving_server:./deploy/paddleserving/MobileNetV3_large_x1_0_serving/
  --serving_client:./deploy/paddleserving/MobileNetV3_large_x1_0_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
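Every classification config touched below gets the same one-line change: the `--serving_server` output directory is renamed from `*_server/` to `*_serving/`. That directory has to match what `test_tipc/test_serving_infer.sh` later edits, since the script rewrites `serving_server_conf.prototxt` inside it. A quick post-conversion sanity check might look like the following sketch (the exact directory contents depend on the PaddleServing version):

```bash
# Hypothetical check: after paddle_serving_client.convert has run with the
# config above, the --serving_server directory should contain the generated
# serving_server_conf.prototxt that test_serving_infer.sh later rewrites.
serving_dir=./deploy/paddleserving/MobileNetV3_large_x1_0_serving
if [ -f "${serving_dir}/serving_server_conf.prototxt" ]; then
    echo "serving_server directory looks converted: ${serving_dir}"
else
    echo "missing ${serving_dir}/serving_server_conf.prototxt (run the conversion first)"
fi
```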
test_tipc/config/PPHGNet/PPHGNet_small_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:PPHGNet_small
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPHGNet_small_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/PPHGNet_small_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/PPHGNet_small_serving/
--serving_client:./deploy/paddleserving/PPHGNet_small_client/
serving_dir:./deploy/paddleserving
web_service:classification_web_service.py
--use_gpu:0|null
pipline:pipeline_http_client.py
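Read top to bottom, the `trans_model` line and the `--*` entries of this new config describe a model-conversion call; they roughly expand to the command below (a sketch assembled from the fields above — the actual command string is built by `test_tipc/test_serving_infer.sh` at run time):

```bash
# Convert the downloaded PPHGNet_small inference model into PaddleServing
# server/client bundles, using the paths listed in the config.
python3.7 -m paddle_serving_client.convert \
    --dirname ./deploy/paddleserving/PPHGNet_small_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./deploy/paddleserving/PPHGNet_small_serving/ \
    --serving_client ./deploy/paddleserving/PPHGNet_small_client/
```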
test_tipc/config/PPHGNet/PPHGNet_tiny_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt
0 → 100644
===========================serving_params===========================
model_name:PPHGNet_tiny
python:python3.7
inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPHGNet_tiny_infer.tar
trans_model:-m paddle_serving_client.convert
--dirname:./deploy/paddleserving/PPHGNet_tiny_infer/
--model_filename:inference.pdmodel
--params_filename:inference.pdiparams
--serving_server:./deploy/paddleserving/PPHGNet_tiny_serving/
--serving_client:./deploy/paddleserving/PPHGNet_tiny_client/
serving_dir:./deploy/paddleserving
web_service:classification_web_service.py
--use_gpu:0|null
pipline:pipeline_http_client.py
test_tipc/config/PPLCNet/PPLCNet_x0_25_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/PPLCNet_x0_25_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/PPLCNet_x0_25_server/
+ --serving_server:./deploy/paddleserving/PPLCNet_x0_25_serving/
  --serving_client:./deploy/paddleserving/PPLCNet_x0_25_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/PPLCNet/PPLCNet_x0_35_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/PPLCNet_x0_35_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/PPLCNet_x0_35_server/
+ --serving_server:./deploy/paddleserving/PPLCNet_x0_35_serving/
  --serving_client:./deploy/paddleserving/PPLCNet_x0_35_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/PPLCNet/PPLCNet_x0_5_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/PPLCNet_x0_5_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/PPLCNet_x0_5_server/
+ --serving_server:./deploy/paddleserving/PPLCNet_x0_5_serving/
  --serving_client:./deploy/paddleserving/PPLCNet_x0_5_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/PPLCNet/PPLCNet_x0_75_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/PPLCNet_x0_75_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/PPLCNet_x0_75_server/
+ --serving_server:./deploy/paddleserving/PPLCNet_x0_75_serving/
  --serving_client:./deploy/paddleserving/PPLCNet_x0_75_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/PPLCNet/PPLCNet_x1_0_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/PPLCNet_x1_0_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/PPLCNet_x1_0_server/
+ --serving_server:./deploy/paddleserving/PPLCNet_x1_0_serving/
  --serving_client:./deploy/paddleserving/PPLCNet_x1_0_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/PPLCNet/PPLCNet_x1_5_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/PPLCNet_x1_5_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/PPLCNet_x1_5_server/
+ --serving_server:./deploy/paddleserving/PPLCNet_x1_5_serving/
  --serving_client:./deploy/paddleserving/PPLCNet_x1_5_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/PPLCNet/PPLCNet_x2_0_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/PPLCNet_x2_0_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/PPLCNet_x2_0_server/
+ --serving_server:./deploy/paddleserving/PPLCNet_x2_0_serving/
  --serving_client:./deploy/paddleserving/PPLCNet_x2_0_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/PPLCNet/PPLCNet_x2_5_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/PPLCNet_x2_5_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/PPLCNet_x2_5_server/
+ --serving_server:./deploy/paddleserving/PPLCNet_x2_5_serving/
  --serving_client:./deploy/paddleserving/PPLCNet_x2_5_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/PPLCNetV2/PPLCNetV2_base_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/PPLCNetV2_base_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/PPLCNetV2_base_server/
+ --serving_server:./deploy/paddleserving/PPLCNetV2_base_serving/
  --serving_client:./deploy/paddleserving/PPLCNetV2_base_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/ResNet/ResNet50_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/ResNet50_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/ResNet50_server/
+ --serving_server:./deploy/paddleserving/ResNet50_serving/
  --serving_client:./deploy/paddleserving/ResNet50_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/ResNet/ResNet50_vd_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/ResNet50_vd_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/ResNet50_vd_server/
+ --serving_server:./deploy/paddleserving/ResNet50_vd_serving/
  --serving_client:./deploy/paddleserving/ResNet50_vd_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/config/SwinTransformer/SwinTransformer_tiny_patch4_window7_224_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt

@@ -6,7 +6,7 @@ trans_model:-m paddle_serving_client.convert
  --dirname:./deploy/paddleserving/SwinTransformer_tiny_patch4_window7_224_infer/
  --model_filename:inference.pdmodel
  --params_filename:inference.pdiparams
- --serving_server:./deploy/paddleserving/SwinTransformer_tiny_patch4_window7_224_server/
+ --serving_server:./deploy/paddleserving/SwinTransformer_tiny_patch4_window7_224_serving/
  --serving_client:./deploy/paddleserving/SwinTransformer_tiny_patch4_window7_224_client/
  serving_dir:./deploy/paddleserving
  web_service:classification_web_service.py
test_tipc/docs/test_serving_infer_python.md

@@ -13,15 +13,15 @@ The main program of the Linux GPU/CPU PYTHON serving deployment test is `test_serving_infer.sh`
  | PP-ShiTu | PPShiTu_general_rec、PPShiTu_mainbody_det | Supported | Supported |
  | PPHGNet | PPHGNet_small | Supported | Supported |
  | PPHGNet | PPHGNet_tiny | Supported | Supported |
- | PPLCNet | PPLCNet_x0_25 | Not supported | Not supported |
- | PPLCNet | PPLCNet_x0_35 | Not supported | Not supported |
- | PPLCNet | PPLCNet_x0_5 | Not supported | Not supported |
- | PPLCNet | PPLCNet_x0_75 | Not supported | Not supported |
- | PPLCNet | PPLCNet_x1_0 | Not supported | Not supported |
- | PPLCNet | PPLCNet_x1_5 | Not supported | Not supported |
- | PPLCNet | PPLCNet_x2_0 | Not supported | Not supported |
- | PPLCNet | PPLCNet_x2_5 | Not supported | Not supported |
- | PPLCNetV2 | PPLCNetV2_base | Supported | Not supported |
+ | PPLCNet | PPLCNet_x0_25 | Supported | Supported |
+ | PPLCNet | PPLCNet_x0_35 | Supported | Supported |
+ | PPLCNet | PPLCNet_x0_5 | Supported | Supported |
+ | PPLCNet | PPLCNet_x0_75 | Supported | Supported |
+ | PPLCNet | PPLCNet_x1_0 | Supported | Supported |
+ | PPLCNet | PPLCNet_x1_5 | Supported | Supported |
+ | PPLCNet | PPLCNet_x2_0 | Supported | Supported |
+ | PPLCNet | PPLCNet_x2_5 | Supported | Supported |
+ | PPLCNetV2 | PPLCNetV2_base | Supported | Supported |
  | ResNet | ResNet50 | Supported | Supported |
  | ResNet | ResNet50_vd | Supported | Supported |
  | SwinTransformer | SwinTransformer_tiny_patch4_window7_224 | Supported | Supported |

@@ -50,7 +50,7 @@ The main program of the Linux GPU/CPU PYTHON serving deployment test is `test_serving_infer.sh`
  ```shell
  python3.7 -m pip install -r requirements.txt
  ```
- - Install the PaddleServing-related components, including serving-server, serving_client and serving-app
+ - Install the PaddleServing-related components, including serving-server, serving_client and serving-app, and automatically download and extract the inference model
  ```bash
  bash test_tipc/prepare.sh test_tipc/configs/ResNet50/ResNet50_linux_gpu_normal_normal_serving_python_linux_gpu_cpu.txt serving_infer
  ```
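For reference, the "PaddleServing components" step that the document delegates to `prepare.sh` boils down to a few pinned pip installs; the versions below are the ones visible in this commit's `test_tipc/prepare.sh` (GPU build shown — a CPU-only setup would presumably install `paddle-serving-server` instead, which is an assumption here):

```bash
# Serving components as pinned by test_tipc/prepare.sh in this commit.
python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post102
python3.7 -m pip install paddle_serving_client==0.7.0
python3.7 -m pip install paddle-serving-app==0.7.0
```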
test_tipc/prepare.sh

@@ -165,24 +165,26 @@ if [ ${MODE} = "serving_infer" ];then
      ${python_name} -m pip install install paddle-serving-server-gpu==0.7.0.post102
      ${python_name} -m pip install paddle_serving_client==0.7.0
      ${python_name} -m pip install paddle-serving-app==0.7.0
-     cls_inference_model_url=$(func_parser_value "${lines[3]}")
-     cls_tar_name=$(func_get_url_file_name "${cls_inference_model_url}")
-     det_inference_model_url=$(func_parser_value "${lines[4]}")
-     det_tar_name=$(func_get_url_file_name "${det_inference_model_url}")
-     unset http_proxy
-     unset https_proxy
-     if [[ ${det_inference_model_url} -eq null ]]; then
-         cd ./deploy/paddleserving
-         wget -nc ${cls_inference_model_url} && tar xf ${cls_tar_name}
-         cd ../../
-     else
+     if [[ ${model_name} =~ "ShiTu" ]]; then
+         cls_inference_model_url=$(func_parser_value "${lines[3]}")
+         cls_tar_name=$(func_get_url_file_name "${cls_inference_model_url}")
+         det_inference_model_url=$(func_parser_value "${lines[4]}")
+         det_tar_name=$(func_get_url_file_name "${det_inference_model_url}")
          cd ./deploy
          mkdir models
          cd models
          wget -nc ${cls_inference_model_url} && tar xf ${cls_tar_name}
          wget -nc ${det_inference_model_url} && tar xf ${det_tar_name}
          cd ..
+     else
+         cls_inference_model_url=$(func_parser_value "${lines[3]}")
+         cls_tar_name=$(func_get_url_file_name "${cls_inference_model_url}")
+         cd ./deploy/paddleserving
+         wget -nc ${cls_inference_model_url} && tar xf ${cls_tar_name}
+         cd ../../
      fi
+     unset http_proxy
+     unset https_proxy
  fi
  if [ ${MODE} = "paddle2onnx_infer" ]; then
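`func_parser_value` and `func_get_url_file_name` are helpers defined in `test_tipc/common_func.sh`. For readers of the hunk above, a rough stand-in for what they do with these `key:value` config lines is sketched below (hypothetical `*_demo` functions, not the repository's exact implementations):

```bash
#!/bin/bash
# Rough stand-ins for the TIPC helpers used above.

func_parser_value_demo() {        # return the value part of a "key:value" line
    local line="$1"
    echo "${line#*:}"             # drop everything up to the first colon
}

func_get_url_file_name_demo() {   # return the file name at the end of a URL
    local url="$1"
    echo "${url##*/}"             # drop everything up to the last slash
}

line='inference_model_url:https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPHGNet_small_infer.tar'
url=$(func_parser_value_demo "$line")
func_get_url_file_name_demo "$url"    # prints: PPHGNet_small_infer.tar
```

The resulting tar name is what the `wget -nc ... && tar xf ...` lines extract into `./deploy/paddleserving` (or `./deploy/models` for the PP-ShiTu detector/recognizer pair).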
test_tipc/test_serving_infer.sh

@@ -54,15 +54,25 @@ function func_serving_cls(){
    eval $trans_model_cmd
    # modify the alias_name of fetch_var to "outputs"
    server_fetch_var_line_cmd="sed -i '/fetch_var/,/is_lod_tensor/s/alias_name: .*/alias_name: \"prediction\"/' $serving_server_value/serving_server_conf.prototxt"
    server_fetch_var_line_cmd="sed -i '/fetch_var/,/is_lod_tensor/s/alias_name: .*/alias_name: \"prediction\"/' ${serving_server_value}/serving_server_conf.prototxt"
    eval ${server_fetch_var_line_cmd}
    client_fetch_var_line_cmd="sed -i '/fetch_var/,/is_lod_tensor/s/alias_name: .*/alias_name: \"prediction\"/' $serving_client_value/serving_client_conf.prototxt"
    client_fetch_var_line_cmd="sed -i '/fetch_var/,/is_lod_tensor/s/alias_name: .*/alias_name: \"prediction\"/' ${serving_client_value}/serving_client_conf.prototxt"
    eval ${client_fetch_var_line_cmd}
    prototxt_dataline=$(awk 'NR==1, NR==3{print}' ${serving_server_value}/serving_server_conf.prototxt)
    IFS=$'\n'
    prototxt_lines=(${prototxt_dataline})
    feed_var_name=$(func_parser_value "${prototxt_lines[2]}")
    IFS='|'
    cd ${serving_dir_value}
    unset https_proxy
    unset http_proxy
    # modify the input_name in "classification_web_service.py" to be consistent with feed_var.name in prototxt
    set_web_service_feet_var_cmd="sed -i '/preprocess/,/input_imgs}/s/{.*: input_imgs}/{${feed_var_name}: input_imgs}/' ${web_service_py}"
    eval ${set_web_service_feet_var_cmd}
    model_config=21
    serving_server_dir_name=$(func_get_url_file_name "$serving_server_value")
    set_model_config_cmd="sed -i '${model_config}s/model_config: .*/model_config: ${serving_server_dir_name}/' config.yml"
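The `sed -i '/fetch_var/,/is_lod_tensor/s/alias_name: .*/.../'` commands above are range edits: only `alias_name` lines between a `fetch_var` marker and the next `is_lod_tensor` line are rewritten, so the `feed_var` block is left untouched. A self-contained toy run (the prototxt below is a simplified stand-in, not a real converted model's file):

```bash
# Rewrite the fetch_var alias the same way func_serving_cls does.
cat > /tmp/demo_serving_server_conf.prototxt <<'EOF'
feed_var {
  name: "inputs"
  alias_name: "inputs"
  is_lod_tensor: false
}
fetch_var {
  name: "softmax_0.tmp_0"
  alias_name: "softmax_0.tmp_0"
  is_lod_tensor: false
}
EOF
sed -i '/fetch_var/,/is_lod_tensor/s/alias_name: .*/alias_name: "prediction"/' \
    /tmp/demo_serving_server_conf.prototxt
grep -A 3 fetch_var /tmp/demo_serving_server_conf.prototxt   # alias_name is now "prediction"
```

In the script the same one-liner also runs against `serving_client_conf.prototxt`, and `func_serving_rec` uses the alias `"features"` instead of `"prediction"`.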
@@ -215,9 +225,20 @@ function func_serving_rec(){
    client_fetch_var_line_cmd="sed -i '/fetch_var/,/is_lod_tensor/s/alias_name: .*/alias_name: \"features\"/' $cls_serving_client_value/serving_client_conf.prototxt"
    eval ${client_fetch_var_line_cmd}
    prototxt_dataline=$(awk 'NR==1, NR==3{print}' ${cls_serving_server_value}/serving_server_conf.prototxt)
    IFS=$'\n'
    prototxt_lines=(${prototxt_dataline})
    feed_var_name=$(func_parser_value "${prototxt_lines[2]}")
    IFS='|'
    cd ${serving_dir_value}
    unset https_proxy
    unset http_proxy
    # modify the input_name in "recognition_web_service.py" to be consistent with feed_var.name in prototxt
    set_web_service_feet_var_cmd="sed -i '/preprocess/,/input_imgs}/s/{.*: input_imgs}/{${feed_var_name}: input_imgs}/' ${web_service_py}"
    eval ${set_web_service_feet_var_cmd}
    for python in ${python[*]}; do
        if [[ ${python} = "cpp" ]]; then
            for use_gpu in ${web_use_gpu_list[*]}; do
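The added `prototxt_dataline` / `feed_var_name` lines take the first three lines of the generated `serving_server_conf.prototxt`, keep the third one (a `feed_var` field), and extract whatever follows its colon, so the `{...: input_imgs}` dictionary key in the web-service script can be patched to match the model's feed name. A rough equivalent with plain `cut`/`tr` standing in for `func_parser_value` (same simplified prototxt layout as the previous sketch):

```bash
# Mirror awk 'NR==1, NR==3{print}' + func_parser_value "${prototxt_lines[2]}".
cat > /tmp/demo_conf.prototxt <<'EOF'
feed_var {
  name: "inputs"
  alias_name: "inputs"
}
EOF
third_line=$(awk 'NR==1, NR==3{print}' /tmp/demo_conf.prototxt | sed -n '3p')
feed_var_name=$(echo "${third_line}" | cut -d':' -f2 | tr -d ' ')
echo "${feed_var_name}"    # prints: "inputs" (quotes included)
```

If the extracted value keeps its quotes as above, the later `sed` substitution turns the preprocess line into `{"inputs": input_imgs}`, a dict keyed by the feed name; exactly how the real `func_parser_value` trims the value is an implementation detail of `common_func.sh`.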
@@ -257,13 +278,11 @@ function func_serving_rec(){
            eval $set_devices_cmd
            web_service_cmd="${python} ${web_service_py} &"
            echo $PWD - $web_service_cmd
            eval $web_service_cmd
            sleep 5s
            for pipeline in ${pipeline_py[*]}; do
                _save_log_path="${LOG_PATH}/server_infer_cpu_${pipeline%_client*}_batchsize_1.log"
                pipeline_cmd="${python} ${pipeline} > ${_save_log_path} 2>&1 "
                echo $PWD - $pipeline_cmd
                eval $pipeline_cmd
                last_status=${PIPESTATUS[0]}
                eval "cat ${_save_log_path}"
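`last_status=${PIPESTATUS[0]}` records the exit status of the first command of the most recent pipeline; taking it immediately matters because the following `eval "cat ${_save_log_path}"` would overwrite `$?`. A minimal illustration:

```bash
# Capture the client's exit code before any follow-up command resets $?.
false > /tmp/demo.log 2>&1          # stand-in for "eval $pipeline_cmd"
last_status=${PIPESTATUS[0]}
cat /tmp/demo.log                   # succeeds, so $? becomes 0 here
echo "pipeline exit code: ${last_status}"   # prints: pipeline exit code: 1
```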
@@ -291,13 +310,11 @@ function func_serving_rec(){
            eval $set_devices_cmd
            web_service_cmd="${python} ${web_service_py} & "
            echo $PWD - $web_service_cmd
            eval $web_service_cmd
            sleep 10s
            for pipeline in ${pipeline_py[*]}; do
                _save_log_path="${LOG_PATH}/server_infer_gpu_${pipeline%_client*}_batchsize_1.log"
                pipeline_cmd="${python} ${pipeline} > ${_save_log_path} 2>&1"
                echo $PWD - $pipeline_cmd
                eval $pipeline_cmd
                last_status=${PIPESTATUS[0]}
                eval "cat ${_save_log_path}"
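The log names rely on `${pipeline%_client*}`, bash's shortest-suffix removal, to turn the client script name into a compact tag. With the `pipeline_http_client.py` value used by the configs in this commit (the `LOG_PATH` value below is only a placeholder):

```bash
pipeline="pipeline_http_client.py"
LOG_PATH="./test_tipc/output"                 # placeholder path for illustration
echo "${pipeline%_client*}"                   # prints: pipeline_http
echo "${LOG_PATH}/server_infer_gpu_${pipeline%_client*}_batchsize_1.log"
# prints: ./test_tipc/output/server_infer_gpu_pipeline_http_batchsize_1.log
```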