PaddlePaddle / Serving · Commit e1c47de4
Commit e1c47de4
Authored Mar 11, 2021 by zhangjun
Parent: 98ab4b0d

rename paddle_serving_server_gpu

Showing 60 changed files with 873 additions and 786 deletions (+873 −786)

python/examples/bert/README.md  (+1 −1)
python/examples/bert/README_CN.md  (+1 −1)
python/examples/bert/benchmark.sh  (+1 −1)
python/examples/bert/benchmark_with_profile.sh  (+1 −1)
python/examples/bert/bert_gpu_server.py  (+3 −3)
python/examples/bert/bert_web_service_gpu.py  (+1 −1)
python/examples/cascade_rcnn/README.md  (+1 −1)
python/examples/cascade_rcnn/README_CN.md  (+1 −1)
python/examples/criteo_ctr/README.md  (+1 −1)
python/examples/criteo_ctr/README_CN.md  (+1 −1)
python/examples/deeplabv3/README.md  (+1 −1)
python/examples/deeplabv3/README_CN.md  (+1 −1)
python/examples/detection/faster_rcnn_r50_fpn_1x_coco/README.md  (+1 −1)
python/examples/detection/faster_rcnn_r50_fpn_1x_coco/README_CN.md  (+1 −1)
python/examples/detection/ppyolo_r50vd_dcn_1x_coco/README.md  (+1 −1)
python/examples/detection/ppyolo_r50vd_dcn_1x_coco/README_CN.md  (+1 −1)
python/examples/detection/ttfnet_darknet53_1x_coco/README.md  (+1 −1)
python/examples/detection/ttfnet_darknet53_1x_coco/README_CN.md  (+1 −1)
python/examples/detection/yolov3_darknet53_270e_coco/README.md  (+1 −1)
python/examples/detection/yolov3_darknet53_270e_coco/README_CN.md  (+1 −1)
python/examples/encryption/README.md  (+1 −1)
python/examples/encryption/README_CN.md  (+1 −1)
python/examples/grpc_impl_example/fit_a_line/test_server_gpu.py  (+3 −3)
python/examples/grpc_impl_example/yolov4/README.md  (+1 −1)
python/examples/grpc_impl_example/yolov4/README_CN.md  (+1 −1)
python/examples/imagenet/README.md  (+1 −1)
python/examples/imagenet/README_CN.md  (+1 −1)
python/examples/imagenet/benchmark.sh  (+1 −1)
python/examples/imagenet/resnet50_web_service.py  (+1 −1)
python/examples/mobilenet/README.md  (+1 −1)
python/examples/mobilenet/README_CN.md  (+1 −1)
python/examples/ocr/README.md  (+1 −1)
python/examples/ocr/README_CN.md  (+1 −1)
python/examples/ocr/det_debugger_server.py  (+1 −1)
python/examples/ocr/det_web_server.py  (+1 −1)
python/examples/ocr/ocr_debugger_server.py  (+1 −1)
python/examples/ocr/ocr_web_server.py  (+1 −1)
python/examples/ocr/rec_debugger_server.py  (+1 −1)
python/examples/ocr/rec_web_server.py  (+1 −1)
python/examples/pipeline/imagenet/pipeline_rpc_client.py  (+1 −1)
python/examples/pipeline/imagenet/resnet50_web_service.py  (+1 −1)
python/examples/pipeline/imdb_model_ensemble/test_pipeline_server.py  (+1 −1)
python/examples/pipeline/ocr/pipeline_rpc_client.py  (+1 −1)
python/examples/pipeline/ocr/web_service.py  (+1 −1)
python/examples/pipeline/simple_web_service/web_service.py  (+1 −1)
python/examples/resnet_v2_50/README.md  (+1 −1)
python/examples/resnet_v2_50/README_CN.md  (+1 −1)
python/examples/unet_for_image_seg/README.md  (+1 −1)
python/examples/unet_for_image_seg/README_CN.md  (+1 −1)
python/examples/xpu/fit_a_line_xpu/README.md  (+1 −1)
python/examples/xpu/fit_a_line_xpu/test_server.py  (+1 −1)
python/examples/xpu/resnet_v2_50_xpu/README.md  (+1 −1)
python/examples/xpu/resnet_v2_50_xpu/README_CN.md  (+1 −1)
python/examples/yolov4/README.md  (+1 −1)
python/examples/yolov4/README_CN.md  (+1 −1)
python/paddle_serving_client/__init__.py  (+22 −700)
python/paddle_serving_client/client.py  (+715 −0)
python/paddle_serving_server/serve.py  (+53 −3)
python/pipeline/local_service_handler.py  (+3 −3)
python/setup.py.server_gpu.in  (+21 −21)

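The change itself is mechanical: every `paddle_serving_server_gpu` reference becomes `paddle_serving_server`, so CPU and GPU serving share one package name. As a quick orientation before the per-file diffs, here is a sketch of a post-rename server bootstrap; it is assembled only from calls that appear verbatim in the serve.py diff below (OpMaker, OpSeqMaker, Server, set_op_sequence, load_model_config, prepare_server, run_server), with the model directory borrowed from the bert example and the workdir value a placeholder:

```python
# Minimal GPU server bootstrap after the rename: the old
# paddle_serving_server_gpu package is gone; everything now
# imports from paddle_serving_server.
from paddle_serving_server import OpMaker, OpSeqMaker, Server

op_maker = OpMaker()
read_op = op_maker.create('general_reader')        # reads client tensors
infer_op = op_maker.create('general_infer')        # runs the model
response_op = op_maker.create('general_response')  # packs the reply

op_seq_maker = OpSeqMaker()
op_seq_maker.add_op(read_op)
op_seq_maker.add_op(infer_op)
op_seq_maker.add_op(response_op)

server = Server()
server.set_op_sequence(op_seq_maker.get_op_sequence())
server.load_model_config('bert_seq128_model')  # model dir from the bert example
server.prepare_server(workdir='workdir', port=9292, device='gpu')
server.run_server()
```
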
python/examples/bert/README.md
@@ -48,7 +48,7 @@ python -m paddle_serving_server.serve --model bert_seq128_model/ --port 9292 #c
 ```
 Or,start gpu inference service,Run
 ```
-python -m paddle_serving_server_gpu.serve --model bert_seq128_model/ --port 9292 --gpu_ids 0 #launch gpu inference service at GPU 0
+python -m paddle_serving_server.serve --model bert_seq128_model/ --port 9292 --gpu_ids 0 #launch gpu inference service at GPU 0
 ```
 ### RPC Inference

python/examples/bert/README_CN.md
@@ -45,7 +45,7 @@ python -m paddle_serving_server.serve --model bert_seq128_model/ --port 9292 #
 ```
 或者,启动gpu预测服务,执行
 ```
-python -m paddle_serving_server_gpu.serve --model bert_seq128_model/ --port 9292 --gpu_ids 0 #在gpu 0上启动gpu预测服务
+python -m paddle_serving_server.serve --model bert_seq128_model/ --port 9292 --gpu_ids 0 #在gpu 0上启动gpu预测服务
 ```

python/examples/bert/benchmark.sh
@@ -12,7 +12,7 @@ else
     mkdir utilization
 fi
 #start server
-$PYTHONROOT/bin/python3 -m paddle_serving_server_gpu.serve --model $1 --port 9292 --thread 4 --gpu_ids 0,1,2,3 --mem_optim --ir_optim > elog 2>&1 &
+$PYTHONROOT/bin/python3 -m paddle_serving_server.serve --model $1 --port 9292 --thread 4 --gpu_ids 0,1,2,3 --mem_optim --ir_optim > elog 2>&1 &
 sleep 5
 #warm up

python/examples/bert/benchmark_with_profile.sh
 export CUDA_VISIBLE_DEVICES=0,1,2,3
-python -m paddle_serving_server_gpu.serve --model bert_seq20_model/ --port 9295 --thread 4 --gpu_ids 0,1,2,3 2> elog > stdlog &
+python -m paddle_serving_server.serve --model bert_seq20_model/ --port 9295 --thread 4 --gpu_ids 0,1,2,3 2> elog > stdlog &
 export FLAGS_profile_client=1
 export FLAGS_profile_server=1
 sleep 5

python/examples/bert/bert_gpu_server.py
@@ -14,9 +14,9 @@
 import os
 import sys
-from paddle_serving_server_gpu import OpMaker
-from paddle_serving_server_gpu import OpSeqMaker
-from paddle_serving_server_gpu import Server
+from paddle_serving_server import OpMaker
+from paddle_serving_server import OpSeqMaker
+from paddle_serving_server import Server
 op_maker = OpMaker()
 read_op = op_maker.create('general_reader')

python/examples/bert/bert_web_service_gpu.py
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # pylint: disable=doc-string-missing
-from paddle_serving_server_gpu.web_service import WebService
+from paddle_serving_server.web_service import WebService
 from paddle_serving_app.reader import ChineseBertReader
 import sys
 import os

python/examples/cascade_rcnn/README.md
@@ -10,7 +10,7 @@ If you want to have more detection models, please refer to [Paddle Detection Mod
 ### Start the service
 ```
-python -m paddle_serving_server_gpu.serve --model serving_server --port 9292 --gpu_id 0
+python -m paddle_serving_server.serve --model serving_server --port 9292 --gpu_id 0
 ```
 ### Perform prediction

python/examples/cascade_rcnn/README_CN.md
@@ -10,7 +10,7 @@ sh get_data.sh
 ### 启动服务
 ```
-python -m paddle_serving_server_gpu.serve --model serving_server --port 9292 --gpu_id 0
+python -m paddle_serving_server.serve --model serving_server --port 9292 --gpu_id 0
 ```
 ### 执行预测

python/examples/criteo_ctr/README.md
@@ -20,7 +20,7 @@ the directories like `ctr_serving_model` and `ctr_client_conf` will appear.
 ```
 python -m paddle_serving_server.serve --model ctr_serving_model/ --port 9292 #CPU RPC Service
-python -m paddle_serving_server_gpu.serve --model ctr_serving_model/ --port 9292 --gpu_ids 0 #RPC Service on GPU 0
+python -m paddle_serving_server.serve --model ctr_serving_model/ --port 9292 --gpu_ids 0 #RPC Service on GPU 0
 ```
 ### RPC Infer

python/examples/criteo_ctr/README_CN.md
@@ -20,7 +20,7 @@ mv models/ctr_serving_model .
 ```
 python -m paddle_serving_server.serve --model ctr_serving_model/ --port 9292 #启动CPU预测服务
-python -m paddle_serving_server_gpu.serve --model ctr_serving_model/ --port 9292 --gpu_ids 0 #在GPU 0上启动预测服务
+python -m paddle_serving_server.serve --model ctr_serving_model/ --port 9292 --gpu_ids 0 #在GPU 0上启动预测服务
 ```
 ### 执行预测

python/examples/deeplabv3/README.md
@@ -12,7 +12,7 @@ tar -xzvf deeplabv3.tar.gz
 ### Start Service
 ```
-python -m paddle_serving_server_gpu.serve --model deeplabv3_server --gpu_ids 0 --port 9494
+python -m paddle_serving_server.serve --model deeplabv3_server --gpu_ids 0 --port 9494
 ```
 ### Client Prediction

python/examples/deeplabv3/README_CN.md
@@ -12,7 +12,7 @@ tar -xzvf deeplabv3.tar.gz
 ### 启动服务端
 ```
-python -m paddle_serving_server_gpu.serve --model deeplabv3_server --gpu_ids 0 --port 9494
+python -m paddle_serving_server.serve --model deeplabv3_server --gpu_ids 0 --port 9494
 ```
 ### 客户端预测

python/examples/detection/faster_rcnn_r50_fpn_1x_coco/README.md
@@ -10,7 +10,7 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/pddet_demo/2.0/
 ### Start the service
 ```
 tar xf faster_rcnn_r50_fpn_1x_coco.tar
-python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
+python -m paddle_serving_server.serve --model serving_server --port 9494 --gpu_ids 0
 ```
 This model support TensorRT, if you want a faster inference, please use `--use_trt`.

python/examples/detection/faster_rcnn_r50_fpn_1x_coco/README_CN.md
@@ -11,7 +11,7 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/pddet_demo/2.0/
 ### 启动服务
 ```
 tar xf faster_rcnn_r50_fpn_1x_coco.tar
-python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
+python -m paddle_serving_server.serve --model serving_server --port 9494 --gpu_ids 0
 ```
 该模型支持TensorRT,如果想要更快的预测速度,可以开启 `--use_trt` 选项。

python/examples/detection/ppyolo_r50vd_dcn_1x_coco/README.md
@@ -10,7 +10,7 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/pddet_demo/2.0/
 ### Start the service
 ```
 tar xf ppyolo_r50vd_dcn_1x_coco.tar
-python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
+python -m paddle_serving_server.serve --model serving_server --port 9494 --gpu_ids 0
 ```
 This model support TensorRT, if you want a faster inference, please use `--use_trt`.

python/examples/detection/ppyolo_r50vd_dcn_1x_coco/README_CN.md
@@ -11,7 +11,7 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/pddet_demo/2.0/
 ### 启动服务
 ```
 tar xf ppyolo_r50vd_dcn_1x_coco.tar
-python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
+python -m paddle_serving_server.serve --model serving_server --port 9494 --gpu_ids 0
 ```
 该模型支持TensorRT,如果想要更快的预测速度,可以开启 `--use_trt` 选项。

python/examples/detection/ttfnet_darknet53_1x_coco/README.md
@@ -10,7 +10,7 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/pddet_demo/2.0/
 ### Start the service
 ```
 tar xf ttfnet_darknet53_1x_coco.tar
-python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
+python -m paddle_serving_server.serve --model serving_server --port 9494 --gpu_ids 0
 ```
 This model support TensorRT, if you want a faster inference, please use `--use_trt`.

python/examples/detection/ttfnet_darknet53_1x_coco/README_CN.md
@@ -11,7 +11,7 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/pddet_demo/2.0/
 ### 启动服务
 ```
 tar xf ttfnet_darknet53_1x_coco.tar
-python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
+python -m paddle_serving_server.serve --model serving_server --port 9494 --gpu_ids 0
 ```
 该模型支持TensorRT,如果想要更快的预测速度,可以开启 `--use_trt` 选项。

python/examples/detection/yolov3_darknet53_270e_coco/README.md
@@ -10,7 +10,7 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/pddet_demo/2.0/
 ### Start the service
 ```
 tar xf yolov3_darknet53_270e_coco.tar
-python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
+python -m paddle_serving_server.serve --model serving_server --port 9494 --gpu_ids 0
 ```
 This model support TensorRT, if you want a faster inference, please use `--use_trt`.

python/examples/detection/yolov3_darknet53_270e_coco/README_CN.md
@@ -11,7 +11,7 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/pddet_demo/2.0/
 ### 启动服务
 ```
 tar xf yolov3_darknet53_270e_coco.tar
-python -m paddle_serving_server_gpu.serve --model serving_server --port 9494 --gpu_ids 0
+python -m paddle_serving_server.serve --model serving_server --port 9494 --gpu_ids 0
 ```
 该模型支持TensorRT,如果想要更快的预测速度,可以开启 `--use_trt` 选项。

python/examples/encryption/README.md
@@ -26,7 +26,7 @@ python -m paddle_serving_server.serve --model encrypt_server/ --port 9300 --use_
 ```
 GPU Service
 ```
-python -m paddle_serving_server_gpu.serve --model encrypt_server/ --port 9300 --use_encryption_model --gpu_ids 0
+python -m paddle_serving_server.serve --model encrypt_server/ --port 9300 --use_encryption_model --gpu_ids 0
 ```
 ## Prediction

python/examples/encryption/README_CN.md
@@ -24,7 +24,7 @@ python -m paddle_serving_server.serve --model encrypt_server/ --port 9300 --use_
 ```
 GPU预测服务
 ```
-python -m paddle_serving_server_gpu.serve --model encrypt_server/ --port 9300 --use_encryption_model --gpu_ids 0
+python -m paddle_serving_server.serve --model encrypt_server/ --port 9300 --use_encryption_model --gpu_ids 0
 ```
 ## 预测

python/examples/grpc_impl_example/fit_a_line/test_server_gpu.py
@@ -15,9 +15,9 @@
 import os
 import sys
-from paddle_serving_server_gpu import OpMaker
-from paddle_serving_server_gpu import OpSeqMaker
-from paddle_serving_server_gpu import MultiLangServer as Server
+from paddle_serving_server import OpMaker
+from paddle_serving_server import OpSeqMaker
+from paddle_serving_server import MultiLangServer as Server
 op_maker = OpMaker()
 read_op = op_maker.create('general_reader')

python/examples/grpc_impl_example/yolov4/README.md
@@ -12,7 +12,7 @@ tar -xzvf yolov4.tar.gz
 ## Start RPC Service
 ```
-python -m paddle_serving_server_gpu.serve --model yolov4_model --port 9393 --gpu_ids 0 --use_multilang
+python -m paddle_serving_server.serve --model yolov4_model --port 9393 --gpu_ids 0 --use_multilang
 ```
 ## Prediction

python/examples/grpc_impl_example/yolov4/README_CN.md
@@ -12,7 +12,7 @@ tar -xzvf yolov4.tar.gz
 ## 启动RPC服务
 ```
-python -m paddle_serving_server_gpu.serve --model yolov4_model --port 9393 --gpu_ids 0 --use_multilang
+python -m paddle_serving_server.serve --model yolov4_model --port 9393 --gpu_ids 0 --use_multilang
 ```
 ## 预测

python/examples/imagenet/README.md
@@ -39,7 +39,7 @@ python -m paddle_serving_server.serve --model ResNet50_vd_model --port 9696 #cpu
 ```
 ```
-python -m paddle_serving_server_gpu.serve --model ResNet50_vd_model --port 9696 --gpu_ids 0 #gpu inference service
+python -m paddle_serving_server.serve --model ResNet50_vd_model --port 9696 --gpu_ids 0 #gpu inference service
 ```
 client send inference request

python/examples/imagenet/README_CN.md
@@ -39,7 +39,7 @@ python -m paddle_serving_server.serve --model ResNet50_vd_model --port 9696 #cpu
 ```
 ```
-python -m paddle_serving_server_gpu.serve --model ResNet50_vd_model --port 9696 --gpu_ids 0 #gpu预测服务
+python -m paddle_serving_server.serve --model ResNet50_vd_model --port 9696 --gpu_ids 0 #gpu预测服务
 ```
 client端进行预测

python/examples/imagenet/benchmark.sh
@@ -2,7 +2,7 @@ rm profile_log*
 export CUDA_VISIBLE_DEVICES=0,1,2,3
 export FLAGS_profile_server=1
 export FLAGS_profile_client=1
-python -m paddle_serving_server_gpu.serve --model $1 --port 9292 --thread 4 --gpu_ids 0,1,2,3 --mem_optim --ir_optim 2> elog > stdlog &
+python -m paddle_serving_server.serve --model $1 --port 9292 --thread 4 --gpu_ids 0,1,2,3 --mem_optim --ir_optim 2> elog > stdlog &
 sleep 5
 gpu_id=0

python/examples/imagenet/resnet50_web_service.py
@@ -25,7 +25,7 @@ device = sys.argv[2]
 if device == "cpu":
     from paddle_serving_server.web_service import WebService
 else:
-    from paddle_serving_server_gpu.web_service import WebService
+    from paddle_serving_server.web_service import WebService
 class ImageService(WebService):

python/examples/mobilenet/README.md
@@ -12,7 +12,7 @@ tar -xzvf mobilenet_v2_imagenet.tar.gz
 ### Start Service
 ```
-python -m paddle_serving_server_gpu.serve --model mobilenet_v2_imagenet_model --gpu_ids 0 --port 9393
+python -m paddle_serving_server.serve --model mobilenet_v2_imagenet_model --gpu_ids 0 --port 9393
 ```
 ### Client Prediction

python/examples/mobilenet/README_CN.md
@@ -12,7 +12,7 @@ tar -xzvf mobilenet_v2_imagenet.tar.gz
 ### 启动服务端
 ```
-python -m paddle_serving_server_gpu.serve --model mobilenet_v2_imagenet_model --gpu_ids 0 --port 9393
+python -m paddle_serving_server.serve --model mobilenet_v2_imagenet_model --gpu_ids 0 --port 9393
 ```
 ### 客户端预测

python/examples/ocr/README.md
@@ -26,7 +26,7 @@ tar xf test_imgs.tar
 python -m paddle_serving_server.serve --model ocr_det_model --port 9293
 python ocr_web_server.py cpu
 #for gpu user
-python -m paddle_serving_server_gpu.serve --model ocr_det_model --port 9293 --gpu_id 0
+python -m paddle_serving_server.serve --model ocr_det_model --port 9293 --gpu_id 0
 python ocr_web_server.py gpu
 ```

python/examples/ocr/README_CN.md
@@ -25,7 +25,7 @@ tar xf test_imgs.tar
 python -m paddle_serving_server.serve --model ocr_det_model --port 9293
 python ocr_web_server.py cpu
 #for gpu user
-python -m paddle_serving_server_gpu.serve --model ocr_det_model --port 9293 --gpu_id 0
+python -m paddle_serving_server.serve --model ocr_det_model --port 9293 --gpu_id 0
 python ocr_web_server.py gpu
 ```

python/examples/ocr/det_debugger_server.py
@@ -22,7 +22,7 @@ from paddle_serving_app.reader import Sequential, ResizeByFactor
 from paddle_serving_app.reader import Div, Normalize, Transpose
 from paddle_serving_app.reader import DBPostProcess, FilterBoxes
 if sys.argv[1] == 'gpu':
-    from paddle_serving_server_gpu.web_service import WebService
+    from paddle_serving_server.web_service import WebService
 elif sys.argv[1] == 'cpu':
     from paddle_serving_server.web_service import WebService
 import time

python/examples/ocr/det_web_server.py
@@ -22,7 +22,7 @@ from paddle_serving_app.reader import Sequential, ResizeByFactor
 from paddle_serving_app.reader import Div, Normalize, Transpose
 from paddle_serving_app.reader import DBPostProcess, FilterBoxes
 if sys.argv[1] == 'gpu':
-    from paddle_serving_server_gpu.web_service import WebService
+    from paddle_serving_server.web_service import WebService
 elif sys.argv[1] == 'cpu':
     from paddle_serving_server.web_service import WebService
 import time

python/examples/ocr/ocr_debugger_server.py
@@ -23,7 +23,7 @@ from paddle_serving_app.reader import Sequential, URL2Image, ResizeByFactor
 from paddle_serving_app.reader import Div, Normalize, Transpose
 from paddle_serving_app.reader import DBPostProcess, FilterBoxes, GetRotateCropImage, SortedBoxes
 if sys.argv[1] == 'gpu':
-    from paddle_serving_server_gpu.web_service import WebService
+    from paddle_serving_server.web_service import WebService
 elif sys.argv[1] == 'cpu':
     from paddle_serving_server.web_service import WebService
 from paddle_serving_app.local_predict import LocalPredictor

python/examples/ocr/ocr_web_server.py
@@ -23,7 +23,7 @@ from paddle_serving_app.reader import Sequential, URL2Image, ResizeByFactor
 from paddle_serving_app.reader import Div, Normalize, Transpose
 from paddle_serving_app.reader import DBPostProcess, FilterBoxes, GetRotateCropImage, SortedBoxes
 if sys.argv[1] == 'gpu':
-    from paddle_serving_server_gpu.web_service import WebService
+    from paddle_serving_server.web_service import WebService
 elif sys.argv[1] == 'cpu':
     from paddle_serving_server.web_service import WebService
 import time

python/examples/ocr/rec_debugger_server.py
@@ -23,7 +23,7 @@ from paddle_serving_app.reader import Sequential, URL2Image, ResizeByFactor
 from paddle_serving_app.reader import Div, Normalize, Transpose
 from paddle_serving_app.reader import DBPostProcess, FilterBoxes, GetRotateCropImage, SortedBoxes
 if sys.argv[1] == 'gpu':
-    from paddle_serving_server_gpu.web_service import WebService
+    from paddle_serving_server.web_service import WebService
 elif sys.argv[1] == 'cpu':
     from paddle_serving_server.web_service import WebService
 import time

python/examples/ocr/rec_web_server.py
@@ -23,7 +23,7 @@ from paddle_serving_app.reader import Sequential, URL2Image, ResizeByFactor
 from paddle_serving_app.reader import Div, Normalize, Transpose
 from paddle_serving_app.reader import DBPostProcess, FilterBoxes, GetRotateCropImage, SortedBoxes
 if sys.argv[1] == 'gpu':
-    from paddle_serving_server_gpu.web_service import WebService
+    from paddle_serving_server.web_service import WebService
 elif sys.argv[1] == 'cpu':
     from paddle_serving_server.web_service import WebService
 import time

python/examples/pipeline/imagenet/pipeline_rpc_client.py
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 try:
-    from paddle_serving_server_gpu.pipeline import PipelineClient
+    from paddle_serving_server.pipeline import PipelineClient
 except ImportError:
     from paddle_serving_server.pipeline import PipelineClient
 import numpy as np

python/examples/pipeline/imagenet/resnet50_web_service.py
@@ -14,7 +14,7 @@
 import sys
 from paddle_serving_app.reader import Sequential, URL2Image, Resize, CenterCrop, RGB2BGR, Transpose, Div, Normalize, Base64ToImage
 try:
-    from paddle_serving_server_gpu.web_service import WebService, Op
+    from paddle_serving_server.web_service import WebService, Op
 except ImportError:
     from paddle_serving_server.web_service import WebService, Op
 import logging

python/examples/pipeline/imdb_model_ensemble/test_pipeline_server.py
@@ -22,7 +22,7 @@ import logging
 try:
     from paddle_serving_server.web_service import WebService
 except ImportError:
-    from paddle_serving_server_gpu.web_service import WebService
+    from paddle_serving_server.web_service import WebService
 _LOGGER = logging.getLogger()
 user_handler = logging.StreamHandler()

python/examples/pipeline/ocr/pipeline_rpc_client.py
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 try:
-    from paddle_serving_server_gpu.pipeline import PipelineClient
+    from paddle_serving_server.pipeline import PipelineClient
 except ImportError:
     from paddle_serving_server.pipeline import PipelineClient
 import numpy as np

python/examples/pipeline/ocr/web_service.py
@@ -14,7 +14,7 @@
 try:
     from paddle_serving_server.web_service import WebService, Op
 except ImportError:
-    from paddle_serving_server_gpu.web_service import WebService, Op
+    from paddle_serving_server.web_service import WebService, Op
 import logging
 import numpy as np
 import cv2

python/examples/pipeline/simple_web_service/web_service.py
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 try:
-    from paddle_serving_server_gpu.web_service import WebService, Op
+    from paddle_serving_server.web_service import WebService, Op
 except ImportError:
     from paddle_serving_server.web_service import WebService, Op
 import logging

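The pipeline examples above (imagenet, ocr, imdb_model_ensemble, simple_web_service) all keep their try/except ImportError guard rather than dropping it. After the rename the two branches are textually identical, which is exactly the pattern these files now contain:

```python
# Guard left over from the two-package era: the losing branch used to
# import from paddle_serving_server_gpu. After this commit both branches
# import from the same module, so the fallback is effectively a no-op.
try:
    from paddle_serving_server.web_service import WebService, Op
except ImportError:
    from paddle_serving_server.web_service import WebService, Op
```
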
python/examples/resnet_v2_50/README.md
@@ -12,7 +12,7 @@ tar -xzvf resnet_v2_50_imagenet.tar.gz
 ### Start Service
 ```
-python -m paddle_serving_server_gpu.serve --model resnet_v2_50_imagenet_model --gpu_ids 0 --port 9393
+python -m paddle_serving_server.serve --model resnet_v2_50_imagenet_model --gpu_ids 0 --port 9393
 ```
 ### Client Prediction

python/examples/resnet_v2_50/README_CN.md
@@ -12,7 +12,7 @@ tar -xzvf resnet_v2_50_imagenet.tar.gz
 ### 启动服务端
 ```
-python -m paddle_serving_server_gpu.serve --model resnet_v2_50_imagenet_model --gpu_ids 0 --port 9393
+python -m paddle_serving_server.serve --model resnet_v2_50_imagenet_model --gpu_ids 0 --port 9393
 ```
 ### 客户端预测

python/examples/unet_for_image_seg/README.md
@@ -12,7 +12,7 @@ tar -xzvf unet.tar.gz
 ### Start Service
 ```
-python -m paddle_serving_server_gpu.serve --model unet_model --gpu_ids 0 --port 9494
+python -m paddle_serving_server.serve --model unet_model --gpu_ids 0 --port 9494
 ```
 ### Client Prediction

python/examples/unet_for_image_seg/README_CN.md
@@ -12,7 +12,7 @@ tar -xzvf unet.tar.gz
 ### 启动服务端
 ```
-python -m paddle_serving_server_gpu.serve --model unet_model --gpu_ids 0 --port 9494
+python -m paddle_serving_server.serve --model unet_model --gpu_ids 0 --port 9494
 ```
 ### 客户端预测

python/examples/xpu/fit_a_line_xpu/README.md
@@ -15,7 +15,7 @@ sh get_data.sh
 ### Start server
 ```shell
-python -m paddle_serving_server_gpu.serve --model uci_housing_model --thread 10 --port 9393 --use_lite --use_xpu --ir_optim
+python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9393 --use_lite --use_xpu --ir_optim
 ```
 ### Client prediction

python/examples/xpu/fit_a_line_xpu/test_server.py
@@ -13,7 +13,7 @@
 # limitations under the License.
 # pylint: disable=doc-string-missing
-from paddle_serving_server_gpu.web_service import WebService
+from paddle_serving_server.web_service import WebService
 import numpy as np

python/examples/xpu/resnet_v2_50_xpu/README.md
@@ -12,7 +12,7 @@ tar -xzvf resnet_v2_50_imagenet.tar.gz
 ### Start Service
 ```
-python -m paddle_serving_server_gpu.serve --model resnet_v2_50_imagenet_model --port 9393 --use_lite --use_xpu --ir_optim
+python -m paddle_serving_server.serve --model resnet_v2_50_imagenet_model --port 9393 --use_lite --use_xpu --ir_optim
 ```
 ### Client Prediction

python/examples/xpu/resnet_v2_50_xpu/README_CN.md
@@ -12,7 +12,7 @@ tar -xzvf resnet_v2_50_imagenet.tar.gz
 ### 启动服务端
 ```
-python -m paddle_serving_server_gpu.serve --model resnet_v2_50_imagenet_model --port 9393 --use_lite --use_xpu --ir_optim
+python -m paddle_serving_server.serve --model resnet_v2_50_imagenet_model --port 9393 --use_lite --use_xpu --ir_optim
 ```
 ### 客户端预测

python/examples/yolov4/README.md
@@ -12,7 +12,7 @@ tar -xzvf yolov4.tar.gz
 ## Start RPC Service
 ```
-python -m paddle_serving_server_gpu.serve --model yolov4_model --port 9393 --gpu_ids 0
+python -m paddle_serving_server.serve --model yolov4_model --port 9393 --gpu_ids 0
 ```
 ## Prediction

python/examples/yolov4/README_CN.md
@@ -12,7 +12,7 @@ tar -xzvf yolov4.tar.gz
 ## 启动RPC服务
 ```
-python -m paddle_serving_server_gpu.serve --model yolov4_model --port 9393 --gpu_ids 0
+python -m paddle_serving_server.serve --model yolov4_model --port 9393 --gpu_ids 0
 ```
 ## 预测

python/paddle_serving_client/__init__.py
(diff collapsed)

python/paddle_serving_client/client.py (new file, mode 100644)
(diff collapsed)

python/paddle_serving_server/serve.py
@@ -23,7 +23,6 @@ import json
 import base64
 import time
 from multiprocessing import Pool, Process
 from paddle_serving_server import serve_args
 from flask import Flask, request
 import sys
 if sys.version_info.major == 2:
@@ -91,7 +90,58 @@ def serve_args():
         help="container_id for authentication")
     return parser.parse_args()
 
-def start_gpu_card_model(port, args, index = 0, gpuid):  # pylint: disable=doc-string-missing
+def start_standard_model(serving_port):  # pylint: disable=doc-string-missing
+    args = parse_args()
+    thread_num = args.thread
+    model = args.model
+    port = serving_port
+    workdir = args.workdir
+    device = args.device
+    mem_optim = args.mem_optim_off is False
+    ir_optim = args.ir_optim
+    max_body_size = args.max_body_size
+    use_mkl = args.use_mkl
+    use_encryption_model = args.use_encryption_model
+    use_multilang = args.use_multilang
+
+    if model == "":
+        print("You must specify your serving model")
+        exit(-1)
+
+    import paddle_serving_server as serving
+    op_maker = serving.OpMaker()
+    read_op = op_maker.create('general_reader')
+    general_infer_op = op_maker.create('general_infer')
+    general_response_op = op_maker.create('general_response')
+
+    op_seq_maker = serving.OpSeqMaker()
+    op_seq_maker.add_op(read_op)
+    op_seq_maker.add_op(general_infer_op)
+    op_seq_maker.add_op(general_response_op)
+
+    server = None
+    if use_multilang:
+        server = serving.MultiLangServer()
+    else:
+        server = serving.Server()
+    server.set_op_sequence(op_seq_maker.get_op_sequence())
+    server.set_num_threads(thread_num)
+    server.set_memory_optimize(mem_optim)
+    server.set_ir_optimize(ir_optim)
+    server.use_mkl(use_mkl)
+    server.set_max_body_size(max_body_size)
+    server.set_port(port)
+    server.use_encryption_model(use_encryption_model)
+    if args.product_name != None:
+        server.set_product_name(args.product_name)
+    if args.container_id != None:
+        server.set_container_id(args.container_id)
+
+    server.load_model_config(model)
+    server.prepare_server(workdir=workdir, port=port, device=device)
+    server.run_server()
+
+def start_gpu_card_model(index, gpuid, port, args):  # pylint: disable=doc-string-missing
     workdir = args.workdir
     gpuid = int(gpuid)
     device = "gpu"
@@ -113,7 +163,7 @@ def start_gpu_card_model(port, args, index = 0, gpuid): # pylint: disable=doc-s
         print("You must specify your serving model")
         exit(-1)
-    import paddle_serving_server_gpu as serving
+    import paddle_serving_server as serving
     op_maker = serving.OpMaker()
     read_op = op_maker.create('general_reader')
     general_infer_op = op_maker.create('general_infer')

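Two details stand out in the serve.py hunks above: the new `start_standard_model(serving_port)` builds the complete CPU op sequence on its own, and `start_gpu_card_model` changes its signature from `(port, args, index = 0, gpuid)` to `(index, gpuid, port, args)`. The diff does not show the caller, so the following driver loop is a hedged sketch only; the function name and signature come from the diff, while the helper name, the per-card port offset, and the loop itself are assumptions:

```python
# Hypothetical driver: one serving process per GPU card.
# start_gpu_card_model(index, gpuid, port, args) is the post-rename
# signature from serve.py; Process comes from multiprocessing, which
# serve.py already imports.
from multiprocessing import Process

def launch_per_card(args, gpu_ids, base_port):  # hypothetical helper
    workers = []
    for index, gpuid in enumerate(gpu_ids):  # e.g. gpu_ids = ["0", "1"]
        p = Process(
            target=start_gpu_card_model,
            args=(index, gpuid, base_port + index, args))
        p.start()
        workers.append(p)
    for p in workers:
        p.join()
```
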
python/pipeline/local_service_handler.py
@@ -15,8 +15,8 @@
 import os
 import logging
 import multiprocessing
-#from paddle_serving_server_gpu import OpMaker, OpSeqMaker
-#from paddle_serving_server_gpu import Server as GpuServer
+#from paddle_serving_server import OpMaker, OpSeqMaker
+#from paddle_serving_server import Server as GpuServer
 #from paddle_serving_server import Server as CpuServer
 from . import util
 #from paddle_serving_app.local_predict import LocalPredictor
@@ -235,7 +235,7 @@ class LocalServiceHandler(object):
             server = Server()
         else:  #gpu or arm
-            from paddle_serving_server_gpu import OpMaker, OpSeqMaker, Server
+            from paddle_serving_server import OpMaker, OpSeqMaker, Server
             op_maker = OpMaker()
             read_op = op_maker.create('general_reader')
             general_infer_op = op_maker.create('general_infer')

python/setup.py.server_gpu.in
@@ -19,7 +19,7 @@ from __future__ import print_function
 from setuptools import setup, Distribution, Extension
 from setuptools import find_packages
 from setuptools import setup
-from paddle_serving_server_gpu.version import serving_server_version, cuda_version
+from paddle_serving_server.version import serving_server_version, cuda_version
 import util
 if cuda_version != "trt":
@@ -27,34 +27,34 @@ if cuda_version != "trt":
 max_version, mid_version, min_version = util.python_version()
 # gen pipeline proto code
-util.gen_pipeline_code("paddle_serving_server_gpu")
+util.gen_pipeline_code("paddle_serving_server")
 REQUIRED_PACKAGES = [
     'six >= 1.10.0', 'protobuf >= 3.11.0', 'grpcio <= 1.33.2', 'grpcio-tools <= 1.33.2',
     'flask >= 1.1.1', 'func_timeout', 'pyyaml'
 ]
-packages=['paddle_serving_server_gpu',
-          'paddle_serving_server_gpu.proto',
-          'paddle_serving_server_gpu.pipeline',
-          'paddle_serving_server_gpu.pipeline.proto',
-          'paddle_serving_server_gpu.pipeline.gateway',
-          'paddle_serving_server_gpu.pipeline.gateway.proto']
+packages=['paddle_serving_server',
+          'paddle_serving_server.proto',
+          'paddle_serving_server.pipeline',
+          'paddle_serving_server.pipeline.proto',
+          'paddle_serving_server.pipeline.gateway',
+          'paddle_serving_server.pipeline.gateway.proto']
-package_dir={'paddle_serving_server_gpu':
-             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server_gpu',
-             'paddle_serving_server_gpu.proto':
-             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server_gpu/proto',
-             'paddle_serving_server_gpu.pipeline':
-             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server_gpu/pipeline',
-             'paddle_serving_server_gpu.pipeline.proto':
-             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server_gpu/pipeline/proto',
-             'paddle_serving_server_gpu.pipeline.gateway':
-             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server_gpu/pipeline/gateway',
-             'paddle_serving_server_gpu.pipeline.gateway.proto':
-             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server_gpu/pipeline/gateway/proto'}
+package_dir={'paddle_serving_server':
+             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server',
+             'paddle_serving_server.proto':
+             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server/proto',
+             'paddle_serving_server.pipeline':
+             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server/pipeline',
+             'paddle_serving_server.pipeline.proto':
+             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server/pipeline/proto',
+             'paddle_serving_server.pipeline.gateway':
+             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server/pipeline/gateway',
+             'paddle_serving_server.pipeline.gateway.proto':
+             '${PADDLE_SERVING_BINARY_DIR}/python/paddle_serving_server/pipeline/gateway/proto'}
-package_data={'paddle_serving_server_gpu': ['pipeline/gateway/libproxy_server.so'],}
+package_data={'paddle_serving_server': ['pipeline/gateway/libproxy_server.so'],}
 setup(
     name='paddle-serving-server-gpu',