Unverified commit dfd7f014, authored by T Thomas Young, committed by GitHub

Merge pull request #9 from PaddlePaddle/develop

Develop
......@@ -13,7 +13,7 @@
<a href="https://travis-ci.com/PaddlePaddle/Serving">
<img alt="Build Status" src="https://img.shields.io/travis/com/PaddlePaddle/Serving/develop">
</a>
<img alt="Release" src="https://img.shields.io/badge/Release-0.0.3-yellowgreen">
<img alt="Release" src="https://img.shields.io/badge/Release-0.6.2-yellowgreen">
<img alt="Issues" src="https://img.shields.io/github/issues/PaddlePaddle/Serving">
<img alt="License" src="https://img.shields.io/github/license/PaddlePaddle/Serving">
<img alt="Slack" src="https://img.shields.io/badge/Join-Slack-green">
......@@ -86,15 +86,15 @@ We **highly recommend** you to **run Paddle Serving in Docker**, please visit [R
```
# Run CPU Docker
docker pull registry.baidubce.com/paddlepaddle/serving:0.6.0-devel
docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.6.0-devel bash
docker pull registry.baidubce.com/paddlepaddle/serving:0.6.2-devel
docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.6.2-devel bash
docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
```
```
# Run GPU Docker
nvidia-docker pull registry.baidubce.com/paddlepaddle/serving:0.6.0-cuda10.2-cudnn8-devel
nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.6.0-cuda10.2-cudnn8-devel bash
nvidia-docker pull registry.baidubce.com/paddlepaddle/serving:0.6.2-cuda10.2-cudnn8-devel
nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.6.2-cuda10.2-cudnn8-devel bash
nvidia-docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
```
......@@ -105,13 +105,13 @@ pip3 install -r python/requirements.txt
```
```shell
pip3 install paddle-serving-client==0.6.0
pip3 install paddle-serving-server==0.6.0 # CPU
pip3 install paddle-serving-app==0.6.0
pip3 install paddle-serving-server-gpu==0.6.0.post102 #GPU with CUDA10.2 + TensorRT7
pip3 install paddle-serving-client==0.6.2
pip3 install paddle-serving-server==0.6.2 # CPU
pip3 install paddle-serving-app==0.6.2
pip3 install paddle-serving-server-gpu==0.6.2.post102 #GPU with CUDA10.2 + TensorRT7
# DO NOT RUN ALL COMMANDS! check your GPU env and select the right one
pip3 install paddle-serving-server-gpu==0.6.0.post101 # GPU with CUDA10.1 + TensorRT6
pip3 install paddle-serving-server-gpu==0.6.0.post11 # GPU with CUDA10.1 + TensorRT7
pip3 install paddle-serving-server-gpu==0.6.2.post101 # GPU with CUDA10.1 + TensorRT6
pip3 install paddle-serving-server-gpu==0.6.2.post11 # GPU with CUDA11 + TensorRT7
```
You may need to use a domestic mirror source (in China, you can use the Tsinghua mirror; add `-i https://pypi.tuna.tsinghua.edu.cn/simple` to the pip command) to speed up the download.
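The suffix after `post` in the GPU package name encodes the CUDA/TensorRT build. As a sketch, the mapping in the commands above could be expressed as follows (hypothetical helper, not part of Paddle Serving):

```python
# Hypothetical helper: maps a CUDA version to the paddle-serving-server-gpu
# pip requirement string, following the post-suffix convention shown above.
SERVING_VERSION = "0.6.2"

# post-suffix per CUDA environment, mirroring the install commands above
POST_SUFFIX = {
    "10.1": "post101",  # CUDA 10.1 + TensorRT 6
    "10.2": "post102",  # CUDA 10.2 + TensorRT 7
    "11.0": "post11",   # CUDA 11 + TensorRT 7
}

def gpu_requirement(cuda_version: str) -> str:
    """Return the pip requirement string for the GPU server wheel."""
    if cuda_version not in POST_SUFFIX:
        raise ValueError("unsupported CUDA version: " + cuda_version)
    return "paddle-serving-server-gpu==%s.%s" % (SERVING_VERSION,
                                                 POST_SUFFIX[cuda_version])

print(gpu_requirement("10.2"))  # paddle-serving-server-gpu==0.6.2.post102
```

Only one of the GPU packages should be installed in a given environment, matching the local CUDA toolkit.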
......@@ -259,7 +259,7 @@ output
### Developers
- [How to deploy Paddle Serving on K8S?(Chinese)](doc/PADDLE_SERVING_ON_KUBERNETES.md)
- [How to route Paddle Serving to secure endpoint?(Chinese)](doc/SERVIING_AUTH_DOCKER.md)
- [How to route Paddle Serving to secure endpoint?(Chinese)](doc/SERVING_AUTH_DOCKER.md)
- [How to develop a new Web Service?](doc/NEW_WEB_SERVICE.md)
- [Compile from source code](doc/COMPILE.md)
- [Develop Pipeline Serving](doc/PIPELINE_SERVING.md)
......
......@@ -87,15 +87,15 @@ Paddle Serving开发者为您提供了简单易用的[AIStudio教程-Paddle Serv
```
# Start CPU Docker
docker pull registry.baidubce.com/paddlepaddle/serving:0.6.0-devel
docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.6.0-devel bash
docker pull registry.baidubce.com/paddlepaddle/serving:0.6.2-devel
docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.6.2-devel bash
docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
```
```
# Start GPU Docker
nvidia-docker pull registry.baidubce.com/paddlepaddle/serving:0.6.0-cuda10.2-cudnn8-devel
nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.6.0-cuda10.2-cudnn8-devel bash
nvidia-docker pull registry.baidubce.com/paddlepaddle/serving:0.6.2-cuda10.2-cudnn8-devel
nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.6.2-cuda10.2-cudnn8-devel bash
nvidia-docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
```
......@@ -107,13 +107,13 @@ pip3 install -r python/requirements.txt
```
```shell
pip3 install paddle-serving-client==0.6.0
pip3 install paddle-serving-server==0.6.0 # CPU
pip3 install paddle-serving-app==0.6.0
pip3 install paddle-serving-server-gpu==0.6.0.post102 #GPU with CUDA10.2 + TensorRT7
pip3 install paddle-serving-client==0.6.2
pip3 install paddle-serving-server==0.6.2 # CPU
pip3 install paddle-serving-app==0.6.2
pip3 install paddle-serving-server-gpu==0.6.2.post102 #GPU with CUDA10.2 + TensorRT7
# For other GPU environments, check your environment before choosing which command to run
pip3 install paddle-serving-server-gpu==0.6.0.post101 # GPU with CUDA10.1 + TensorRT6
pip3 install paddle-serving-server-gpu==0.6.0.post11 # GPU with CUDA10.1 + TensorRT7
pip3 install paddle-serving-server-gpu==0.6.2.post101 # GPU with CUDA10.1 + TensorRT6
pip3 install paddle-serving-server-gpu==0.6.2.post11 # GPU with CUDA11 + TensorRT7
```
You may need to use a domestic mirror source (e.g. the Tsinghua mirror; add `-i https://pypi.tuna.tsinghua.edu.cn/simple` to the pip command) to speed up the download.
......@@ -124,7 +124,7 @@ paddle-serving-server和paddle-serving-server-gpu安装包支持Centos 6/7, Ubun
The paddle-serving-client and paddle-serving-app packages support Linux and Windows; paddle-serving-client only supports python3.6/3.7/3.8.
**The latest 0.6.0 release no longer supports Cuda 9.0 or Cuda 10.0, and Python 2.7 and 3.5 are no longer supported.**
**The latest 0.6.2 release no longer supports Cuda 9.0 or Cuda 10.0, and Python 2.7 and 3.5 are no longer supported.**
Installing paddle 2.1.0 or later is recommended.
......@@ -262,7 +262,7 @@ python3 pipeline_rpc_client.py
- [How to compile PaddleServing?](doc/COMPILE_CN.md)
- [How to develop a Pipeline?](doc/PIPELINE_SERVING_CN.md)
- [How to deploy Paddle Serving on a K8S cluster?](doc/PADDLE_SERVING_ON_KUBERNETES.md)
- [How to deploy a secure gateway for Paddle Serving?](doc/SERVIING_AUTH_DOCKER.md)
- [How to deploy a secure gateway for Paddle Serving?](doc/SERVING_AUTH_DOCKER.md)
- [How to develop a Pipeline?](doc/PIPELINE_SERVING_CN.md)
- [How to deploy a Web Service with uWSGI](doc/UWSGI_DEPLOY_CN.md)
- [How to hot-load model files](doc/HOT_LOADING_IN_SERVING_CN.md)
......
......@@ -29,10 +29,12 @@ You can get images in two ways:
Runtime images cannot be used for compilation.
If you want to customize your Serving based on source code, use the image versions with the -devel suffix.
**The cuda10.1-cudnn7-gcc54 image is not ready yet; if you need it, build it from the corresponding Dockerfile.**
| Description | OS | TAG | Dockerfile |
| :----------------------------------------------------------: | :-----: | :--------------------------: | :----------------------------------------------------------: |
| CPU development | Ubuntu16 | latest-devel | [Dockerfile.devel](../tools/Dockerfile.devel) |
| GPU (cuda10.1-cudnn7-tensorRT6-gcc54) development | Ubuntu16 | latest-cuda10.1-cudnn7-gcc54-devel | [Dockerfile.cuda10.1-cudnn7-gcc54.devel](../tools/Dockerfile.cuda10.1-cudnn7-gcc54.devel) |
| GPU (cuda10.1-cudnn7-tensorRT6-gcc54) development | Ubuntu16 | latest-cuda10.1-cudnn7-gcc54-devel(not ready) | [Dockerfile.cuda10.1-cudnn7-gcc54.devel](../tools/Dockerfile.cuda10.1-cudnn7-gcc54.devel) |
| GPU (cuda10.1-cudnn7-tensorRT6) development | Ubuntu16 | latest-cuda10.1-cudnn7-devel | [Dockerfile.cuda10.1-cudnn7.devel](../tools/Dockerfile.cuda10.1-cudnn7.devel) |
| GPU (cuda10.2-cudnn8-tensorRT7) development | Ubuntu16 | latest-cuda10.2-cudnn8-devel | [Dockerfile.cuda10.2-cudnn8.devel](../tools/Dockerfile.cuda10.2-cudnn8.devel) |
| GPU (cuda11-cudnn8-tensorRT7) development | Ubuntu18 | latest-cuda11-cudnn8-devel | [Dockerfile.cuda11-cudnn8.devel](../tools/Dockerfile.cuda11-cudnn8.devel) |
......@@ -62,18 +64,33 @@ Develop Images:
| Env | Version | Docker images tag | OS | Gcc Version |
|----------|---------|------------------------------|-----------|-------------|
| CPU | >=0.5.0 | 0.6.0-devel | Ubuntu 16 | 8.2.0 |
| CPU | >=0.5.0 | 0.6.2-devel | Ubuntu 16 | 8.2.0 |
| | <=0.4.0 | 0.4.0-devel | CentOS 7 | 4.8.5 |
| Cuda10.1 | >=0.5.0 | 0.6.0-cuda10.1-cudnn7-devel | Ubuntu 16 | 8.2.0 |
| | 0.6.0 | 0.6.0-cuda10.1-cudnn7-gcc54-devel | Ubuntu 16 | 5.4.0 |
| | <=0.4.0 | 0.6.0-cuda10.1-cudnn7-devel | CentOS 7 | 4.8.5 |
| Cuda10.2 | >=0.5.0 | 0.6.0-cuda10.2-cudnn8-devel | Ubuntu 16 | 8.2.0 |
| Cuda10.1 | >=0.5.0 | 0.6.2-cuda10.1-cudnn7-devel | Ubuntu 16 | 8.2.0 |
| | 0.6.2 | 0.6.2-cuda10.1-cudnn7-gcc54-devel(not ready) | Ubuntu 16 | 5.4.0 |
| | <=0.4.0 | 0.6.2-cuda10.1-cudnn7-devel | CentOS 7 | 4.8.5 |
| Cuda10.2 | >=0.5.0 | 0.6.2-cuda10.2-cudnn8-devel | Ubuntu 16 | 8.2.0 |
| | <=0.4.0 | Nan | Nan | Nan |
| Cuda11.0 | >=0.5.0 | 0.6.0-cuda11.0-cudnn8-devel | Ubuntu 18 | 8.2.0 |
| Cuda11.0 | >=0.5.0 | 0.6.2-cuda11.0-cudnn8-devel | Ubuntu 18 | 8.2.0 |
| | <=0.4.0 | Nan | Nan | Nan |
Running Images:
Running images are lighter than develop images, but there are many runtime image variants because of the combinations of python version and device environment. For details, please check the document [Paddle Serving on Kubernetes](PADDLE_SERVING_ON_KUBERNETES.md).
Running images are lighter than develop images: they contain the Serving whl and bin but omit development tools such as cmake to keep the image size small. For details, please check the document [Paddle Serving on Kubernetes](PADDLE_SERVING_ON_KUBERNETES.md).
| ENV | Python Version | Tag |
|------------------------------------------|----------------|-----------------------------|
| cpu | 3.6 | 0.6.2-py36-runtime |
| cpu | 3.7 | 0.6.2-py37-runtime |
| cpu | 3.8 | 0.6.2-py38-runtime |
| cuda-10.1 + cudnn-7.6.5 + tensorrt-6.0.1 | 3.6 | 0.6.2-cuda10.1-py36-runtime |
| cuda-10.1 + cudnn-7.6.5 + tensorrt-6.0.1 | 3.7 | 0.6.2-cuda10.1-py37-runtime |
| cuda-10.1 + cudnn-7.6.5 + tensorrt-6.0.1 | 3.8 | 0.6.2-cuda10.1-py38-runtime |
| cuda-10.2 + cudnn-8.2.0 + tensorrt-7.1.3 | 3.6 | 0.6.2-cuda10.2-py36-runtime |
| cuda-10.2 + cudnn-8.2.0 + tensorrt-7.1.3 | 3.7 | 0.6.2-cuda10.2-py37-runtime |
| cuda-10.2 + cudnn-8.2.0 + tensorrt-7.1.3 | 3.8 | 0.6.2-cuda10.2-py38-runtime |
| cuda-11 + cudnn-8.0.5 + tensorrt-7.1.3 | 3.6 | 0.6.2-cuda11-py36-runtime |
| cuda-11 + cudnn-8.0.5 + tensorrt-7.1.3 | 3.7 | 0.6.2-cuda11-py37-runtime |
| cuda-11 + cudnn-8.0.5 + tensorrt-7.1.3 | 3.8 | 0.6.2-cuda11-py38-runtime |
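The runtime tags above follow a regular `{version}[-cuda{X}]-py{NN}-runtime` pattern. A sketch that assembles such a tag (hypothetical helper, inferred only from this table):

```python
# Hypothetical helper that assembles a runtime image tag from the table above.
def runtime_tag(version: str, py: str, cuda: str = None) -> str:
    """Build a tag like '0.6.2-cuda10.2-py38-runtime'; omit cuda for CPU."""
    parts = [version]
    if cuda:
        parts.append("cuda" + cuda)
    parts.append("py" + py.replace(".", ""))  # '3.8' -> 'py38'
    parts.append("runtime")
    return "-".join(parts)

print(runtime_tag("0.6.2", "3.6"))          # 0.6.2-py36-runtime
print(runtime_tag("0.6.2", "3.8", "10.2"))  # 0.6.2-cuda10.2-py38-runtime
```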
**Tips:** If you want to use the CPU server and a GPU server (version >= 0.5.0) at the same time, check the gcc version: only the Cuda10.1/10.2/11 images can run alongside the CPU server, because they share the same gcc version (8.2).
......@@ -31,11 +31,12 @@
If you need to develop and recompile based on the source code, use the image versions with the -devel suffix.
**In the TAG column, latest can also be replaced with a specific version number, e.g. 0.5.0/0.4.1. Note that some develop environments were only added in a certain release, so not every environment has every version number available.**
**The cuda10.1-cudnn7-gcc54 environment has not been synced to the image registry yet; if you need it, build it from the corresponding Dockerfile.**
| Description | OS | TAG | Dockerfile |
| :----------------------------------------------------------: | :-----: | :--------------------------: | :----------------------------------------------------------: |
| CPU development | Ubuntu16 | latest-devel | [Dockerfile.devel](../tools/Dockerfile.devel) |
| GPU (cuda10.1-cudnn7-tensorRT6-gcc54) development | Ubuntu16 | latest-cuda10.1-cudnn7-gcc54-devel | [Dockerfile.cuda10.1-cudnn7-gcc54.devel](../tools/Dockerfile.cuda10.1-cudnn7-gcc54.devel) |
| GPU (cuda10.1-cudnn7-tensorRT6-gcc54) development | Ubuntu16 | latest-cuda10.1-cudnn7-gcc54-devel (not ready) | [Dockerfile.cuda10.1-cudnn7-gcc54.devel](../tools/Dockerfile.cuda10.1-cudnn7-gcc54.devel) |
| GPU (cuda10.1-cudnn7-tensorRT6) development | Ubuntu16 | latest-cuda10.1-cudnn7-devel | [Dockerfile.cuda10.1-cudnn7.devel](../tools/Dockerfile.cuda10.1-cudnn7.devel) |
| GPU (cuda10.2-cudnn8-tensorRT7) development | Ubuntu16 | latest-cuda10.2-cudnn8-devel | [Dockerfile.cuda10.2-cudnn8.devel](../tools/Dockerfile.cuda10.2-cudnn8.devel) |
| GPU (cuda11-cudnn8-tensorRT7) development | Ubuntu18 | latest-cuda11-cudnn8-devel | [Dockerfile.cuda11-cudnn8.devel](../tools/Dockerfile.cuda11-cudnn8.devel) |
......@@ -68,18 +69,32 @@ registry.baidubce.com/paddlepaddle/serving:xpu-x86 # for x86 xpu user
| Env | Version | Docker images tag | OS | Gcc Version |
|----------|---------|------------------------------|-----------|-------------|
| CPU | >=0.5.0 | 0.6.0-devel | Ubuntu 16 | 8.2.0 |
| CPU | >=0.5.0 | 0.6.2-devel | Ubuntu 16 | 8.2.0 |
| | <=0.4.0 | 0.4.0-devel | CentOS 7 | 4.8.5 |
| Cuda10.1 | >=0.5.0 | 0.6.0-cuda10.1-cudnn7-devel | Ubuntu 16 | 8.2.0 |
| | 0.6.0 | 0.6.0-cuda10.1-cudnn7-gcc54-devel | Ubuntu 16 | 5.4.0 |
| | <=0.4.0 | 0.6.0-cuda10.1-cudnn7-devel | CentOS 7 | 4.8.5 |
| Cuda10.2 | >=0.5.0 | 0.6.0-cuda10.2-cudnn8-devel | Ubuntu 16 | 8.2.0 |
| Cuda10.1 | >=0.5.0 | 0.6.2-cuda10.1-cudnn7-devel | Ubuntu 16 | 8.2.0 |
| | <=0.4.0 | 0.6.2-cuda10.1-cudnn7-devel | CentOS 7 | 4.8.5 |
| Cuda10.2 | >=0.5.0 | 0.6.2-cuda10.2-cudnn8-devel | Ubuntu 16 | 8.2.0 |
| | <=0.4.0 | Nan | Nan | Nan |
| Cuda11.0 | >=0.5.0 | 0.6.0-cuda11.0-cudnn8-devel | Ubuntu 18 | 8.2.0 |
| Cuda11.0 | >=0.5.0 | 0.6.2-cuda11.0-cudnn8-devel | Ubuntu 18 | 8.2.0 |
| | <=0.4.0 | Nan | Nan | Nan |
Running images:
Running images are lighter than develop images, but there are many runtime image variants because of the combinations of python version and runtime environment. For details, please check the document [Paddle Serving on Kubernetes](PADDLE_SERVING_ON_KUBERNETES.md).
Running images are lighter than develop images: they contain the Serving whl and bin but omit development tools such as cmake to keep the runtime image size small. For details, please check the document [Paddle Serving on Kubernetes](PADDLE_SERVING_ON_KUBERNETES.md).
| ENV | Python Version | Tag |
|------------------------------------------|----------------|-----------------------------|
| cpu | 3.6 | 0.6.2-py36-runtime |
| cpu | 3.7 | 0.6.2-py37-runtime |
| cpu | 3.8 | 0.6.2-py38-runtime |
| cuda-10.1 + cudnn-7.6.5 + tensorrt-6.0.1 | 3.6 | 0.6.2-cuda10.1-py36-runtime |
| cuda-10.1 + cudnn-7.6.5 + tensorrt-6.0.1 | 3.7 | 0.6.2-cuda10.1-py37-runtime |
| cuda-10.1 + cudnn-7.6.5 + tensorrt-6.0.1 | 3.8 | 0.6.2-cuda10.1-py38-runtime |
| cuda-10.2 + cudnn-8.2.0 + tensorrt-7.1.3 | 3.6 | 0.6.2-cuda10.2-py36-runtime |
| cuda-10.2 + cudnn-8.2.0 + tensorrt-7.1.3 | 3.7 | 0.6.2-cuda10.2-py37-runtime |
| cuda-10.2 + cudnn-8.2.0 + tensorrt-7.1.3 | 3.8 | 0.6.2-cuda10.2-py38-runtime |
| cuda-11 + cudnn-8.0.5 + tensorrt-7.1.3 | 3.6 | 0.6.2-cuda11-py36-runtime |
| cuda-11 + cudnn-8.0.5 + tensorrt-7.1.3 | 3.7 | 0.6.2-cuda11-py37-runtime |
| cuda-11 + cudnn-8.0.5 + tensorrt-7.1.3 | 3.8 | 0.6.2-cuda11-py38-runtime |
**Note:** If you need to run the CPU server and a GPU server (version 0.5.0 or later) in the same container, choose a Cuda10.1/10.2/11 image, because those share the same gcc version as the CPU environment.
......@@ -32,9 +32,7 @@ The `-p` option is to map the `9292` port of the container to the `9292` port of
### Install PaddleServing
The image comes with `paddle_serving_server`, `paddle_serving_client`, and `paddle_serving_app` matching the image tag version. If you do not need to change the version, you can use them directly, which suits environments without external network access.
If you need to change the version, please refer to the instructions on the homepage to download the pip package of the corresponding version.
Please refer to the instructions on the homepage to download the pip package of the corresponding version.
## GPU
......
......@@ -59,9 +59,7 @@ docker exec -it test bash
### Install PaddleServing
The image comes with `paddle_serving_server_gpu`, `paddle_serving_client`, and `paddle_serving_app` matching the image tag version. If you do not need to change the version, you can use them directly, which suits environments without external network access.
If you need to change the version, please follow the instructions on the homepage to download the pip package of the corresponding version. [Latest packages](LATEST_PACKAGES.md)
Please follow the instructions on the homepage to download the pip package of the corresponding version. [Latest packages](LATEST_PACKAGES.md)
## Notes
......
......@@ -83,7 +83,7 @@ def multithread_http(thread, batch_size):
print("Total cost: {}s".format(total_cost))
print("Each thread cost: {}s. ".format(avg_cost))
print("Total count: {}. ".format(total_number))
print("AVG QPS: {} samples/s".format(batch_size * total_number /
print("AVG_QPS: {} samples/s".format(batch_size * total_number /
total_cost))
show_latency(result[1])
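The printed AVG_QPS is simply total samples over wall-clock seconds. The same arithmetic as a standalone sketch:

```python
def avg_qps(batch_size: int, total_number: int, total_cost: float) -> float:
    """Average throughput in samples/s: every request carries batch_size
    samples, and total_number requests complete in total_cost seconds."""
    return batch_size * total_number / total_cost

# 16-sample batches, 500 requests, 40 s wall clock -> 200.0 samples/s
print("AVG_QPS: {} samples/s".format(avg_qps(16, 500, 40.0)))
```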
......
cuda_version: "10.1"
cudnn_version: "7.6"
trt_version: "6.0"
python_version: "3.7"
gcc_version: "8.2"
paddle_version: "2.0.1"
cpu: "Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz X12"
gpu: "T4"
xpu: "None"
api: ""
owner: "cuicheng01"
model_name: "imagenet"
model_type: "static"
model_source: "PaddleClas"
model_url: ""
batch_size: 1
num_of_samples: 1000
input_shape: "3,224,224"
runtime_device: "cpu"
ir_optim: true
enable_memory_optim: true
enable_tensorrt: false
precision: "fp32"
enable_mkldnn: false
cpu_math_library_num_threads: ""
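This benchmark config is flat key/value YAML, so it can be read without a YAML library. A minimal sketch (assumes a flat file with no nesting, which holds for the config above):

```python
def parse_flat_yaml(text: str) -> dict:
    """Parse 'key: value' lines; strips quotes, converts bools and ints.
    Only suitable for flat configs like the benchmark file above."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.strip().strip('"')
        if value in ("true", "false"):
            value = (value == "true")
        elif value.isdigit():
            value = int(value)
        cfg[key.strip()] = value
    return cfg

sample = 'cuda_version: "10.1"\nbatch_size: 1\nir_optim: true\n'
cfg = parse_flat_yaml(sample)
print(cfg)  # {'cuda_version': '10.1', 'batch_size': 1, 'ir_optim': True}
```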
......@@ -175,6 +175,6 @@ class OcrService(WebService):
return rec_op
uci_service = OcrService(name="ocr")
uci_service.prepare_pipeline_config("config.yml")
uci_service.run_service()
ocr_service = OcrService(name="ocr")
ocr_service.prepare_pipeline_config("config.yml")
ocr_service.run_service()
......@@ -35,6 +35,7 @@ import numpy as np
import grpc
import sys
import collections
import subprocess
from multiprocessing import Pool, Process
from concurrent import futures
......@@ -330,12 +331,21 @@ class Server(object):
def use_mkl(self, flag):
self.mkl_flag = flag
def check_avx(self):
p = subprocess.Popen(['cat /proc/cpuinfo | grep avx 2>/dev/null'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
out, err = p.communicate()
if err == b'' and len(out) > 0:
return True
else:
return False
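The `check_avx` above shells out through `grep`; the same probe can be done in pure Python by scanning `/proc/cpuinfo` directly. A Linux-only sketch (hypothetical alternative, not the code Serving uses):

```python
def check_avx(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if any 'flags' line in cpuinfo mentions avx.
    Linux-only; returns False if the file cannot be read."""
    try:
        with open(cpuinfo_path) as f:
            return any("avx" in line for line in f if line.startswith("flags"))
    except OSError:
        return False

print(check_avx())
```

Reading the file directly avoids spawning a shell and makes the failure mode (missing file, non-Linux host) explicit.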
def get_device_version(self):
avx_flag = False
mkl_flag = self.mkl_flag
r = os.system("cat /proc/cpuinfo | grep avx > /dev/null 2>&1")
if r == 0:
avx_support = self.check_avx()
if avx_support:
avx_flag = True
self.use_mkl(True)
mkl_flag = self.mkl_flag
if avx_flag:
if mkl_flag:
device_version = "cpu-avx-mkl"
......@@ -665,7 +675,7 @@ class MultiLangServer(object):
use_encryption_model=False,
cube_conf=None):
if not self._port_is_available(port):
raise SystemExit("Prot {} is already used".format(port))
raise SystemExit("Port {} is already used".format(port))
default_port = 12000
self.port_list_ = []
for i in range(1000):
......
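The port check that produces the "Port ... is already used" error can be sketched with a plain socket bind (hypothetical helper; the actual `_port_is_available` implementation may differ):

```python
import socket

def port_is_available(port: int, host: str = "0.0.0.0") -> bool:
    """Try to bind the port; if the bind succeeds, the port is free."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind((host, port))
            return True
        except OSError:
            return False

print(port_is_available(12000))
```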
......@@ -3,7 +3,7 @@ echo "################################################################"
echo "# #"
echo "# #"
echo "# #"
echo "# Paddle Serving begin run with python3.6.8! #"
echo "# Paddle Serving begin run with python$1! #"
echo "# #"
echo "# #"
echo "# #"
......@@ -18,6 +18,7 @@ build_path=/workspace/Serving/
error_words="Fail|DENIED|UNKNOWN|None"
log_dir=${build_path}logs/
data=/root/.cache/serving_data/
OPENCV_DIR=${data}/opencv-3.4.7/opencv3
dir=`pwd`
RED_COLOR='\E[1;31m'
GREEN_COLOR='\E[1;32m'
......@@ -40,8 +41,8 @@ build_whl_list=(build_cpu_server build_gpu_server build_client build_app)
rpc_model_list=(grpc_fit_a_line grpc_yolov4 pipeline_imagenet bert_rpc_gpu bert_rpc_cpu ResNet50_rpc \
lac_rpc cnn_rpc bow_rpc lstm_rpc fit_a_line_rpc deeplabv3_rpc mobilenet_rpc unet_rpc resnetv2_rpc \
criteo_ctr_rpc_cpu criteo_ctr_rpc_gpu ocr_rpc yolov4_rpc_gpu faster_rcnn_hrnetv2p_w18_1x_encrypt \
low_precision_resnet50_int8)
http_model_list=(fit_a_line_http lac_http cnn_http bow_http lstm_http ResNet50_http bert_http\
low_precision_resnet50_int8 ocr_c++_service)
http_model_list=(fit_a_line_http lac_http cnn_http bow_http lstm_http ResNet50_http bert_http \
pipeline_ocr_cpu_http)
function setproxy() {
......@@ -60,6 +61,46 @@ function kill_server_process() {
echo -e "${GREEN_COLOR}process killed...${RES}"
}
function set_env() {
if [ $1 == 36 ]; then
export PYTHONROOT=/usr/local/
export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.6m
export PYTHON_LIBRARIES=$PYTHONROOT/lib/libpython3.6m.so
export PYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.6
py_version="python3.6"
elif [ $1 == 37 ]; then
export PYTHONROOT=/usr/local/
export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.7m
export PYTHON_LIBRARIES=$PYTHONROOT/lib/libpython3.7m.so
export PYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.7
py_version="python3.7"
elif [ $1 == 38 ]; then
export PYTHONROOT=/usr/local/
export PYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.8
export PYTHON_LIBRARIES=$PYTHONROOT/lib/libpython3.8.so
export PYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.8
py_version="python3.8"
else
echo -e "${RED_COLOR}Error py version $1${RES}"
exit
fi
export CUDA_PATH='/usr/local/cuda'
export CUDNN_LIBRARY='/usr/local/cuda/lib64/'
export CUDA_CUDART_LIBRARY="/usr/local/cuda/lib64/"
if [ $2 == 101 ]; then
export TENSORRT_LIBRARY_PATH="/usr/local/TensorRT6-cuda10.1-cudnn7/targets/x86_64-linux-gnu/"
elif [ $2 == 102 ]; then
export TENSORRT_LIBRARY_PATH="/usr/local/TensorRT-7.1.3.4/targets/x86_64-linux-gnu/"
elif [ $2 == 110 ]; then
export TENSORRT_LIBRARY_PATH="/usr/local/TensorRT-7.1.3.4/targets/x86_64-linux-gnu/"
elif [ $2 == "cpu" ]; then
export TENSORRT_LIBRARY_PATH="/usr/local/TensorRT6-cuda9.0-cudnn7/targets/x86_64-linux-gnu"
else
echo -e "${RED_COLOR}Error cuda version $2${RES}"
exit
fi
}
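The branching in `set_env` is essentially a lookup from version codes to paths. The same selection expressed as a table, in Python for illustration (the paths are this script's assumptions, not universal locations):

```python
# Mirrors the shell set_env above; all paths are the CI script's assumptions.
PY_ENVS = {
    "36": {"include": "include/python3.6m", "lib": "lib/libpython3.6m.so"},
    "37": {"include": "include/python3.7m", "lib": "lib/libpython3.7m.so"},
    "38": {"include": "include/python3.8",  "lib": "lib/libpython3.8.so"},
}

TENSORRT_PATHS = {
    "101": "/usr/local/TensorRT6-cuda10.1-cudnn7/targets/x86_64-linux-gnu/",
    "102": "/usr/local/TensorRT-7.1.3.4/targets/x86_64-linux-gnu/",
    "110": "/usr/local/TensorRT-7.1.3.4/targets/x86_64-linux-gnu/",
}

def build_env(py: str, cuda: str, root: str = "/usr/local/") -> dict:
    """Resolve the environment variables set_env would export."""
    if py not in PY_ENVS or cuda not in TENSORRT_PATHS:
        raise ValueError("unsupported py/cuda combination: %s/%s" % (py, cuda))
    env = PY_ENVS[py]
    return {
        "PYTHON_INCLUDE_DIR": root + env["include"],
        "PYTHON_LIBRARIES": root + env["lib"],
        "TENSORRT_LIBRARY_PATH": TENSORRT_PATHS[cuda],
    }

print(build_env("38", "102")["PYTHON_LIBRARIES"])  # /usr/local/lib/libpython3.8.so
```

A table like this makes it easy to spot inconsistencies, e.g. that codes 102 and 110 currently resolve to the same TensorRT directory.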
function check() {
cd ${build_path}
if [ ! -f paddle_serving_app* ]; then
......@@ -91,6 +132,9 @@ function check_result() {
grep -E "${error_words}" ${dir}client_log.txt > /dev/null
if [ $? == 0 ]; then
echo -e "${RED_COLOR}$1 error command${RES}\n" | tee -a ${log_dir}server_total.txt ${log_dir}client_total.txt
echo -e "--------------pipeline.log:----------------\n"
cat PipelineServingLogs/pipeline.log
echo -e "-------------------------------------------\n"
error_log $2
else
echo -e "${GREEN_COLOR}$2${RES}\n" | tee -a ${log_dir}server_total.txt ${log_dir}client_total.txt
......@@ -117,7 +161,7 @@ function error_log() {
fi
echo "model: ${model// /_}" | tee -a ${log_dir}error_models.txt
echo "deployment: ${deployment// /_}" | tee -a ${log_dir}error_models.txt
echo "py_version: python3.6" | tee -a ${log_dir}error_models.txt
echo "py_version: ${py_version}" | tee -a ${log_dir}error_models.txt
echo "cuda_version: ${cuda_version}" | tee -a ${log_dir}error_models.txt
echo "status: Failed" | tee -a ${log_dir}error_models.txt
echo -e "-----------------------------\n\n" | tee -a ${log_dir}error_models.txt
......@@ -147,22 +191,27 @@ function link_data() {
function before_hook() {
setproxy
unsetproxy
cd ${build_path}/python
python3.6 -m pip install --upgrade pip
python3.6 -m pip install requests
python3.6 -m pip install -r requirements.txt -i https://mirror.baidu.com/pypi/simple
python3.6 -m pip install numpy==1.16.4
python3.6 -m pip install paddlehub -i https://mirror.baidu.com/pypi/simple
${py_version} -m pip install --upgrade pip
${py_version} -m pip install requests
${py_version} -m pip install -r requirements.txt
${py_version} -m pip install numpy==1.16.4
${py_version} -m pip install paddlehub
echo "before hook configuration is successful.... "
}
function run_env() {
setproxy
python3.6 -m pip install --upgrade nltk==3.4
python3.6 -m pip install --upgrade scipy==1.2.1
python3.6 -m pip install --upgrade setuptools==41.0.0
python3.6 -m pip install paddlehub ujson paddlepaddle==2.0.2
${py_version} -m pip install nltk==3.4 -i https://mirror.baidu.com/pypi/simple
${py_version} -m pip install setuptools==41.0.0 -i https://mirror.baidu.com/pypi/simple
${py_version} -m pip install paddlehub -i https://mirror.baidu.com/pypi/simple
if [ ${py_version} == "python3.6" ]; then
${py_version} -m pip install paddlepaddle-gpu==2.1.0
elif [ ${py_version} == "python3.8" ]; then
cd ${build_path}
wget https://paddle-wheel.bj.bcebos.com/with-trt/2.1.0-gpu-cuda11.0-cudnn8-mkl-gcc8.2/paddlepaddle_gpu-2.1.0.post110-cp38-cp38-linux_x86_64.whl
${py_version} -m pip install paddlepaddle_gpu-2.1.0.post110-cp38-cp38-linux_x86_64.whl
fi
echo "run env configuration is successful.... "
}
......@@ -185,17 +234,22 @@ function build_gpu_server() {
rm -rf build
fi
mkdir build && cd build
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.6m/ \
-DPYTHON_LIBRARIES=$PYTHONROOT/lib64/libpython3.6.so \
-DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.6 \
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
-DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-DCUDA_TOOLKIT_ROOT_DIR=${CUDA_PATH} \
-DCUDNN_LIBRARY=${CUDNN_LIBRARY} \
-DCUDA_CUDART_LIBRARY=${CUDA_CUDART_LIBRARY} \
-DTENSORRT_ROOT=${TENSORRT_LIBRARY_PATH} \
-DOPENCV_DIR=${OPENCV_DIR} \
-DWITH_OPENCV=ON \
-DSERVER=ON \
-DTENSORRT_ROOT=/usr \
-DWITH_GPU=ON ..
make -j32
make -j32
make install -j32
python3.6 -m pip uninstall paddle-serving-server-gpu -y
python3.6 -m pip install ${build_path}/build/python/dist/*
${py_version} -m pip uninstall paddle-serving-server-gpu -y
${py_version} -m pip install ${build_path}/build/python/dist/*
cp ${build_path}/build/python/dist/* ../
cp -r ${build_path}/build/ ${build_path}/build_gpu
}
......@@ -210,17 +264,18 @@ function build_cpu_server(){
rm -rf build
fi
mkdir build && cd build
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.6m/ \
-DPYTHON_LIBRARIES=$PYTHONROOT/lib64/libpython3.6.so \
-DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.6 \
-DWITH_GPU=OFF \
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
-DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-DOPENCV_DIR=${OPENCV_DIR} \
-DWITH_OPENCV=ON \
-DSERVER=ON ..
make -j32
make -j32
make install -j32
cp ${build_path}/build/python/dist/* ../
python3.6 -m pip uninstall paddle-serving-server -y
python3.6 -m pip install ${build_path}/build/python/dist/*
${py_version} -m pip uninstall paddle-serving-server -y
${py_version} -m pip install ${build_path}/build/python/dist/*
cp -r ${build_path}/build/ ${build_path}/build_cpu
}
......@@ -231,34 +286,34 @@ function build_client() {
rm -rf build
fi
mkdir build && cd build
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.6m/ \
-DPYTHON_LIBRARIES=$PYTHONROOT/lib64/libpython3.6.so \
-DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.6 \
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
-DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-DCLIENT=ON ..
make -j32
make -j32
cp ${build_path}/build/python/dist/* ../
python3.6 -m pip uninstall paddle-serving-client -y
python3.6 -m pip install ${build_path}/build/python/dist/*
${py_version} -m pip uninstall paddle-serving-client -y
${py_version} -m pip install ${build_path}/build/python/dist/*
}
function build_app() {
setproxy
python3.6 -m pip install paddlehub ujson Pillow
python3.6 -m pip install paddlepaddle==2.0.2
${py_version} -m pip install paddlehub ujson Pillow
${py_version} -m pip install paddlepaddle==2.0.2
cd ${build_path}
if [ -d build ];then
rm -rf build
fi
mkdir build && cd build
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.6m/ \
-DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython3.6.so \
-DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.6 \
-DCMAKE_INSTALL_PREFIX=./output -DAPP=ON ..
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
-DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-DAPP=ON ..
make
cp ${build_path}/build/python/dist/* ../
python3.6 -m pip uninstall paddle-serving-app -y
python3.6 -m pip install ${build_path}/build/python/dist/*
${py_version} -m pip uninstall paddle-serving-app -y
${py_version} -m pip install ${build_path}/build/python/dist/*
}
function low_precision_resnet50_int8 () {
......@@ -267,12 +322,12 @@ function low_precision_resnet50_int8 () {
check_dir ${dir}
wget https://paddle-inference-dist.bj.bcebos.com/inference_demo/python/resnet50/ResNet50_quant.tar.gz
tar zxvf ResNet50_quant.tar.gz
python3.6 -m paddle_serving_client.convert --dirname ResNet50_quant
${py_version} -m paddle_serving_client.convert --dirname ResNet50_quant
echo -e "${GREEN_COLOR}low_precision_resnet50_int8_GPU_RPC server started${RES}" | tee -a ${log_dir}server_total.txt
python3.6 -m paddle_serving_server.serve --model serving_server --port 9393 --gpu_ids 0 --use_trt --precision int8 > ${dir}server_log.txt 2>&1 &
${py_version} -m paddle_serving_server.serve --model serving_server --port 9393 --gpu_ids 0 --use_trt --precision int8 > ${dir}server_log.txt 2>&1 &
check_result server 10
echo -e "${GREEN_COLOR}low_precision_resnet50_int8_GPU_RPC client started${RES}" | tee -a ${log_dir}client_total.txt
python3.6 resnet50_client.py > ${dir}client_log.txt 2>&1
${py_version} resnet50_client.py > ${dir}client_log.txt 2>&1
check_result client "low_precision_resnet50_int8_GPU_RPC server test completed"
kill_server_process
}
......@@ -283,13 +338,13 @@ function faster_rcnn_hrnetv2p_w18_1x_encrypt() {
check_dir ${dir}
data_dir=${data}detection/faster_rcnn_hrnetv2p_w18_1x/
link_data ${data_dir}
python3.6 encrypt.py
${py_version} encrypt.py
unsetproxy
echo -e "${GREEN_COLOR}faster_rcnn_hrnetv2p_w18_1x_ENCRYPTION_GPU_RPC server started${RES}" | tee -a ${log_dir}server_total.txt
python3.6 -m paddle_serving_server.serve --model encrypt_server/ --port 9494 --use_trt --gpu_ids 0 --use_encryption_model > ${dir}server_log.txt 2>&1 &
${py_version} -m paddle_serving_server.serve --model encrypt_server/ --port 9494 --use_trt --gpu_ids 0 --use_encryption_model > ${dir}server_log.txt 2>&1 &
check_result server 3
echo -e "${GREEN_COLOR}faster_rcnn_hrnetv2p_w18_1x_ENCRYPTION_GPU_RPC client started${RES}" | tee -a ${log_dir}client_total.txt
python3.6 test_encryption.py 000000570688.jpg > ${dir}client_log.txt 2>&1
${py_version} test_encryption.py 000000570688.jpg > ${dir}client_log.txt 2>&1
check_result client "faster_rcnn_hrnetv2p_w18_1x_ENCRYPTION_GPU_RPC server test completed"
kill_server_process
}
......@@ -322,10 +377,10 @@ function bert_rpc_gpu() {
sed -i 's/9292/8860/g' bert_client.py
sed -i '$aprint(result)' bert_client.py
ls -hlst
python3.6 -m paddle_serving_server.serve --model bert_seq128_model/ --port 8860 --gpu_ids 0 > ${dir}server_log.txt 2>&1 &
${py_version} -m paddle_serving_server.serve --model bert_seq128_model/ --port 8860 --gpu_ids 0 > ${dir}server_log.txt 2>&1 &
check_result server 15
nvidia-smi
head data-c.txt | python3.6 bert_client.py --model bert_seq128_client/serving_client_conf.prototxt > ${dir}client_log.txt 2>&1
head data-c.txt | ${py_version} bert_client.py --model bert_seq128_client/serving_client_conf.prototxt > ${dir}client_log.txt 2>&1
check_result client "bert_GPU_RPC server test completed"
nvidia-smi
kill_server_process
......@@ -339,10 +394,10 @@ function bert_rpc_cpu() {
data_dir=${data}bert/
link_data ${data_dir}
sed -i 's/8860/8861/g' bert_client.py
python3.6 -m paddle_serving_server.serve --model bert_seq128_model/ --port 8861 > ${dir}server_log.txt 2>&1 &
${py_version} -m paddle_serving_server.serve --model bert_seq128_model/ --port 8861 > ${dir}server_log.txt 2>&1 &
check_result server 5
cp data-c.txt.1 data-c.txt
head data-c.txt | python3.6 bert_client.py --model bert_seq128_client/serving_client_conf.prototxt > ${dir}client_log.txt 2>&1
head data-c.txt | ${py_version} bert_client.py --model bert_seq128_client/serving_client_conf.prototxt > ${dir}client_log.txt 2>&1
check_result client "bert_CPU_RPC server test completed"
kill_server_process
}
......@@ -354,12 +409,11 @@ function pipeline_imagenet() {
cd ${build_path}/python/examples/pipeline/imagenet
data_dir=${data}imagenet/
link_data ${data_dir}
python3.6 resnet50_web_service.py > ${dir}server_log.txt 2>&1 &
${py_version} resnet50_web_service.py > ${dir}server_log.txt 2>&1 &
check_result server 8
nvidia-smi
timeout 30s python3.6 pipeline_rpc_client.py > ${dir}client_log.txt 2>&1
timeout 30s ${py_version} pipeline_rpc_client.py > ${dir}client_log.txt 2>&1
echo "pipeline_log:-----------"
cat PipelineServingLogs/pipeline.log
check_result client "pipeline_imagenet_GPU_RPC server test completed"
nvidia-smi
kill_server_process
......@@ -373,10 +427,10 @@ function ResNet50_rpc() {
data_dir=${data}imagenet/
link_data ${data_dir}
sed -i 's/9696/8863/g' resnet50_rpc_client.py
python3.6 -m paddle_serving_server.serve --model ResNet50_vd_model --port 8863 --gpu_ids 0 > ${dir}server_log.txt 2>&1 &
${py_version} -m paddle_serving_server.serve --model ResNet50_vd_model --port 8863 --gpu_ids 0 > ${dir}server_log.txt 2>&1 &
check_result server 8
nvidia-smi
python3.6 resnet50_rpc_client.py ResNet50_vd_client_config/serving_client_conf.prototxt > ${dir}client_log.txt 2>&1
${py_version} resnet50_rpc_client.py ResNet50_vd_client_config/serving_client_conf.prototxt > ${dir}client_log.txt 2>&1
check_result client "ResNet50_GPU_RPC server test completed"
nvidia-smi
kill_server_process
......@@ -390,10 +444,10 @@ function ResNet101_rpc() {
data_dir=${data}imagenet/
link_data ${data_dir}
sed -i "22cclient.connect(['127.0.0.1:8864'])" image_rpc_client.py
python3.6 -m paddle_serving_server.serve --model ResNet101_vd_model --port 8864 --gpu_ids 0 > ${dir}server_log.txt 2>&1 &
${py_version} -m paddle_serving_server.serve --model ResNet101_vd_model --port 8864 --gpu_ids 0 > ${dir}server_log.txt 2>&1 &
check_result server 8
nvidia-smi
${py_version} image_rpc_client.py ResNet101_vd_client_config/serving_client_conf.prototxt > ${dir}client_log.txt 2>&1
check_result client "ResNet101_GPU_RPC server test completed"
nvidia-smi
kill_server_process
}

function cnn_rpc() {
data_dir=${data}imdb/
link_data ${data_dir}
sed -i 's/9292/8865/g' test_client.py
${py_version} -m paddle_serving_server.serve --model imdb_cnn_model/ --port 8865 > ${dir}server_log.txt 2>&1 &
check_result server 5
head test_data/part-0 | ${py_version} test_client.py imdb_cnn_client_conf/serving_client_conf.prototxt imdb.vocab > ${dir}client_log.txt 2>&1
check_result client "cnn_CPU_RPC server test completed"
kill_server_process
}
function bow_rpc() {
data_dir=${data}imdb/
link_data ${data_dir}
sed -i 's/8865/8866/g' test_client.py
${py_version} -m paddle_serving_server.serve --model imdb_bow_model/ --port 8866 > ${dir}server_log.txt 2>&1 &
check_result server 5
head test_data/part-0 | ${py_version} test_client.py imdb_bow_client_conf/serving_client_conf.prototxt imdb.vocab > ${dir}client_log.txt 2>&1
check_result client "bow_CPU_RPC server test completed"
kill_server_process
}
function lstm_rpc() {
data_dir=${data}imdb/
link_data ${data_dir}
sed -i 's/8866/8867/g' test_client.py
${py_version} -m paddle_serving_server.serve --model imdb_lstm_model/ --port 8867 > ${dir}server_log.txt 2>&1 &
check_result server 5
head test_data/part-0 | ${py_version} test_client.py imdb_lstm_client_conf/serving_client_conf.prototxt imdb.vocab > ${dir}client_log.txt 2>&1
check_result client "lstm_CPU_RPC server test completed"
kill_server_process
}
function lac_rpc() {
data_dir=${data}lac/
link_data ${data_dir}
sed -i 's/9292/8868/g' lac_client.py
${py_version} -m paddle_serving_server.serve --model lac_model/ --port 8868 > ${dir}server_log.txt 2>&1 &
check_result server 5
echo "我爱北京天安门" | ${py_version} lac_client.py lac_client/serving_client_conf.prototxt lac_dict/ > ${dir}client_log.txt 2>&1
check_result client "lac_CPU_RPC server test completed"
kill_server_process
}
function fit_a_line_rpc() {
data_dir=${data}fit_a_line/
link_data ${data_dir}
sed -i 's/9393/8869/g' test_client.py
${py_version} -m paddle_serving_server.serve --model uci_housing_model --port 8869 > ${dir}server_log.txt 2>&1 &
check_result server 5
${py_version} test_client.py uci_housing_client/serving_client_conf.prototxt > ${dir}client_log.txt 2>&1
check_result client "fit_a_line_CPU_RPC server test completed"
kill_server_process
}
function faster_rcnn_model_rpc() {
data_dir=${data}detection/faster_rcnn_r50_fpn_1x_coco/
link_data ${data_dir}
sed -i 's/9494/8870/g' test_client.py
${py_version} -m paddle_serving_server.serve --model serving_server --port 8870 --gpu_ids 0 --thread 2 --use_trt > ${dir}server_log.txt 2>&1 &
echo "faster rcnn running ..."
nvidia-smi
check_result server 10
${py_version} test_client.py 000000570688.jpg > ${dir}client_log.txt 2>&1
nvidia-smi
check_result client "faster_rcnn_GPU_RPC server test completed"
kill_server_process
}

function cascade_rcnn_rpc() {
data_dir=${data}cascade_rcnn/
link_data ${data_dir}
sed -i "s/9292/8879/g" test_client.py
${py_version} -m paddle_serving_server.serve --model serving_server --port 8879 --gpu_ids 0 --thread 2 > ${dir}server_log.txt 2>&1 &
check_result server 8
nvidia-smi
${py_version} test_client.py > ${dir}client_log.txt 2>&1
nvidia-smi
check_result client "cascade_rcnn_GPU_RPC server test completed"
kill_server_process
}

function deeplabv3_rpc() {
data_dir=${data}deeplabv3/
link_data ${data_dir}
sed -i "s/9494/8880/g" deeplabv3_client.py
${py_version} -m paddle_serving_server.serve --model deeplabv3_server --gpu_ids 0 --port 8880 --thread 2 > ${dir}server_log.txt 2>&1 &
check_result server 10
nvidia-smi
${py_version} deeplabv3_client.py > ${dir}client_log.txt 2>&1
nvidia-smi
check_result client "deeplabv3_GPU_RPC server test completed"
kill_server_process
}

function mobilenet_rpc() {
check_dir ${dir}
unsetproxy
cd ${build_path}/python/examples/mobilenet
${py_version} -m paddle_serving_app.package --get_model mobilenet_v2_imagenet >/dev/null 2>&1
tar xf mobilenet_v2_imagenet.tar.gz
sed -i "s/9393/8881/g" mobilenet_tutorial.py
${py_version} -m paddle_serving_server.serve --model mobilenet_v2_imagenet_model --gpu_ids 0 --port 8881 > ${dir}server_log.txt 2>&1 &
check_result server 8
nvidia-smi
${py_version} mobilenet_tutorial.py > ${dir}client_log.txt 2>&1
nvidia-smi
check_result client "mobilenet_GPU_RPC server test completed"
kill_server_process
}

function unet_rpc() {
data_dir=${data}unet_for_image_seg/
link_data ${data_dir}
sed -i "s/9494/8882/g" seg_client.py
${py_version} -m paddle_serving_server.serve --model unet_model --gpu_ids 0 --port 8882 > ${dir}server_log.txt 2>&1 &
check_result server 8
nvidia-smi
${py_version} seg_client.py > ${dir}client_log.txt 2>&1
nvidia-smi
check_result client "unet_GPU_RPC server test completed"
kill_server_process
}

function resnetv2_rpc() {
data_dir=${data}resnet_v2_50/
link_data ${data_dir}
sed -i 's/9393/8883/g' resnet50_v2_tutorial.py
${py_version} -m paddle_serving_server.serve --model resnet_v2_50_imagenet_model --gpu_ids 0 --port 8883 > ${dir}server_log.txt 2>&1 &
check_result server 10
nvidia-smi
${py_version} resnet50_v2_tutorial.py > ${dir}client_log.txt 2>&1
nvidia-smi
check_result client "resnetv2_GPU_RPC server test completed"
kill_server_process
}

function ocr_rpc() {
cd ${build_path}/python/examples/ocr
data_dir=${data}ocr/
link_data ${data_dir}
${py_version} -m paddle_serving_app.package --get_model ocr_rec >/dev/null 2>&1
tar xf ocr_rec.tar.gz
sed -i 's/9292/8884/g' test_ocr_rec_client.py
${py_version} -m paddle_serving_server.serve --model ocr_rec_model --port 8884 > ${dir}server_log.txt 2>&1 &
check_result server 5
${py_version} test_ocr_rec_client.py > ${dir}client_log.txt 2>&1
check_result client "ocr_CPU_RPC server test completed"
kill_server_process
}
function criteo_ctr_rpc_cpu() {
data_dir=${data}criteo_ctr/
link_data ${data_dir}
sed -i "s/9292/8885/g" test_client.py
${py_version} -m paddle_serving_server.serve --model ctr_serving_model/ --port 8885 > ${dir}server_log.txt 2>&1 &
check_result server 5
${py_version} test_client.py ctr_client_conf/serving_client_conf.prototxt raw_data/part-0 > ${dir}client_log.txt 2>&1
check_result client "criteo_ctr_CPU_RPC server test completed"
kill_server_process
}
function criteo_ctr_rpc_gpu() {
data_dir=${data}criteo_ctr/
link_data ${data_dir}
sed -i "s/8885/8886/g" test_client.py
${py_version} -m paddle_serving_server.serve --model ctr_serving_model/ --port 8886 --gpu_ids 0 > ${dir}server_log.txt 2>&1 &
check_result server 8
nvidia-smi
${py_version} test_client.py ctr_client_conf/serving_client_conf.prototxt raw_data/part-0 > ${dir}client_log.txt 2>&1
nvidia-smi
check_result client "criteo_ctr_GPU_RPC server test completed"
kill_server_process
}

function yolov4_rpc_gpu() {
data_dir=${data}yolov4/
link_data ${data_dir}
sed -i "s/9393/8887/g" test_client.py
${py_version} -m paddle_serving_server.serve --model yolov4_model --port 8887 --gpu_ids 0 > ${dir}server_log.txt 2>&1 &
nvidia-smi
check_result server 8
${py_version} test_client.py 000000570688.jpg > ${dir}client_log.txt 2>&1
nvidia-smi
check_result client "yolov4_GPU_RPC server test completed"
kill_server_process
}

function senta_rpc_cpu() {
data_dir=${data}senta/
link_data ${data_dir}
sed -i "s/9393/8887/g" test_client.py
${py_version} -m paddle_serving_server.serve --model yolov4_model --port 8887 --gpu_ids 0 > ${dir}server_log.txt 2>&1 &
nvidia-smi
check_result server 8
${py_version} test_client.py 000000570688.jpg > ${dir}client_log.txt 2>&1
nvidia-smi
check_result client "senta_GPU_RPC server test completed"
kill_server_process
}

function fit_a_line_http() {
unsetproxy
cd ${build_path}/python/examples/fit_a_line
sed -i "s/9393/8871/g" test_server.py
${py_version} test_server.py > ${dir}server_log.txt 2>&1 &
check_result server 10
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]}], "fetch":["price"]}' http://127.0.0.1:8871/uci/prediction > ${dir}client_log.txt 2>&1
check_result client "fit_a_line_CPU_HTTP server test completed"
}

function lac_http() {
check_dir ${dir}
unsetproxy
cd ${build_path}/python/examples/lac
${py_version} lac_web_service.py lac_model/ lac_workdir 8872 > ${dir}server_log.txt 2>&1 &
check_result server 10
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "我爱北京天安门"}], "fetch":["word_seg"]}' http://127.0.0.1:8872/lac/prediction > ${dir}client_log.txt 2>&1
check_result client "lac_CPU_HTTP server test completed"
}

function cnn_http() {
check_dir ${dir}
unsetproxy
cd ${build_path}/python/examples/imdb
${py_version} text_classify_service.py imdb_cnn_model/ workdir/ 8873 imdb.vocab > ${dir}server_log.txt 2>&1 &
check_result server 10
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "i am very sad | 0"}], "fetch":["prediction"]}' http://127.0.0.1:8873/imdb/prediction > ${dir}client_log.txt 2>&1
check_result client "cnn_CPU_HTTP server test completed"
}

function bow_http() {
check_dir ${dir}
unsetproxy
cd ${build_path}/python/examples/imdb
${py_version} text_classify_service.py imdb_bow_model/ workdir/ 8874 imdb.vocab > ${dir}server_log.txt 2>&1 &
check_result server 10
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "i am very sad | 0"}], "fetch":["prediction"]}' http://127.0.0.1:8874/imdb/prediction > ${dir}client_log.txt 2>&1
check_result client "bow_CPU_HTTP server test completed"
}

function lstm_http() {
check_dir ${dir}
unsetproxy
cd ${build_path}/python/examples/imdb
${py_version} text_classify_service.py imdb_bow_model/ workdir/ 8875 imdb.vocab > ${dir}server_log.txt 2>&1 &
check_result server 10
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "i am very sad | 0"}], "fetch":["prediction"]}' http://127.0.0.1:8875/imdb/prediction > ${dir}client_log.txt 2>&1
check_result client "lstm_CPU_HTTP server test completed"
}

function ResNet50_http() {
check_dir ${dir}
unsetproxy
cd ${build_path}/python/examples/imagenet
${py_version} resnet50_web_service.py ResNet50_vd_model gpu 8876 > ${dir}server_log.txt 2>&1 &
check_result server 10
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"image": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg"}], "fetch": ["score"]}' http://127.0.0.1:8876/image/prediction > ${dir}client_log.txt 2>&1
check_result client "ResNet50_GPU_HTTP server test completed"
}

function bert_http() {
cp data-c.txt.1 data-c.txt
cp vocab.txt.1 vocab.txt
export CUDA_VISIBLE_DEVICES=0
${py_version} bert_web_service.py bert_seq128_model/ 8878 > ${dir}server_log.txt 2>&1 &
check_result server 8
curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "hello"}], "fetch":["pooled_output"]}' http://127.0.0.1:8878/bert/prediction > ${dir}client_log.txt 2>&1
check_result client "bert_GPU_HTTP server test completed"
}

function grpc_fit_a_line() {
cd ${build_path}/python/examples/grpc_impl_example/fit_a_line
data_dir=${data}fit_a_line/
link_data ${data_dir}
${py_version} test_server.py uci_housing_model/ > ${dir}server_log.txt 2>&1 &
check_result server 5
echo "sync predict" > ${dir}client_log.txt 2>&1
${py_version} test_sync_client.py >> ${dir}client_log.txt 2>&1
check_result client "grpc_impl_example_fit_a_line_sync_CPU_gRPC server sync test completed"
echo "async predict" >> ${dir}client_log.txt 2>&1
${py_version} test_asyn_client.py >> ${dir}client_log.txt 2>&1
check_result client "grpc_impl_example_fit_a_line_asyn_CPU_gRPC server asyn test completed"
echo "batch predict" >> ${dir}client_log.txt 2>&1
${py_version} test_batch_client.py >> ${dir}client_log.txt 2>&1
check_result client "grpc_impl_example_fit_a_line_batch_CPU_gRPC server batch test completed"
echo "timeout predict" >> ${dir}client_log.txt 2>&1
${py_version} test_timeout_client.py >> ${dir}client_log.txt 2>&1
check_result client "grpc_impl_example_fit_a_line_timeout_CPU_gRPC server timeout test completed"
kill_server_process
}
function grpc_yolov4() {
data_dir=${data}yolov4/
link_data ${data_dir}
echo -e "${GREEN_COLOR}grpc_impl_example_yolov4_GPU_gRPC server started${RES}"
${py_version} -m paddle_serving_server.serve --model yolov4_model --port 9393 --gpu_ids 0 --use_multilang > ${dir}server_log.txt 2>&1 &
check_result server 10
echo -e "${GREEN_COLOR}grpc_impl_example_yolov4_GPU_gRPC client started${RES}"
${py_version} test_client.py 000000570688.jpg > ${dir}client_log.txt 2>&1
check_result client "grpc_yolov4_GPU_GRPC server test completed"
kill_server_process
}
function ocr_c++_service() {
dir=${log_dir}rpc_model/ocr_c++_serving/
cd ${build_path}/python/examples/ocr
check_dir ${dir}
data_dir=${data}ocr/
link_data ${data_dir}
# Replace the linked det client config with a real copy so the in-place
# edits below do not modify the shared data directory.
cp -r ocr_det_client/ ./ocr_det_client_cp
rm -rf ocr_det_client
mv ocr_det_client_cp ocr_det_client
# Rewrite the client config for the C++ service: feed raw image bytes
# (feed_type 3, scalar shape) instead of a preprocessed float tensor.
sed -i "s/feed_type: 1/feed_type: 3/g" ocr_det_client/serving_client_conf.prototxt
sed -i "s/shape: 3/shape: 1/g" ocr_det_client/serving_client_conf.prototxt
sed -i '7,8d' ocr_det_client/serving_client_conf.prototxt
echo -e "${GREEN_COLOR}OCR_C++_Service_GPU_RPC server started${RES}"
$py_version -m paddle_serving_server.serve --model ocr_det_model ocr_rec_model --port 9293 --gpu_ids 0 > ${dir}server_log.txt 2>&1 &
check_result server 8
echo -e "${GREEN_COLOR}OCR_C++_Service_GPU_RPC client started${RES}"
echo "------------------first:"
$py_version ocr_cpp_client.py ocr_det_client ocr_rec_client
echo "------------------second:"
$py_version ocr_cpp_client.py ocr_det_client ocr_rec_client > ${dir}client_log.txt 2>&1
check_result client "OCR_C++_Service_GPU_RPC server test completed"
kill_server_process
}
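Every test function above follows the same launch/probe/cleanup pattern: start the server in the background with its log redirected, wait with `check_result`, run the client, then `kill_server_process`. The real `check_result` and `kill_server_process` are defined earlier in this script, outside this excerpt; the sketch below is a simplified, self-contained approximation of the pattern, not the actual implementations.

```shell
#!/bin/bash
# Simplified sketch of the launch/probe/cleanup pattern used by the test
# functions above. check_result and kill_server_process are stand-ins that
# only illustrate the flow.

check_result() {
    # check_result <role> <wait_seconds>: wait for startup, then verify
    # that the background server process is still alive.
    local role=$1 wait_s=$2
    sleep "${wait_s}"
    if kill -0 "${SERVER_PID}" 2>/dev/null; then
        echo "${role} ok"
    else
        echo "${role} failed"
        return 1
    fi
}

kill_server_process() {
    kill "${SERVER_PID}" 2>/dev/null
    wait "${SERVER_PID}" 2>/dev/null || true
}

# Stand-in "server": a sleeping process in place of paddle_serving_server.serve.
sleep 30 &
SERVER_PID=$!
check_result server 1   # prints "server ok" while the process is alive
kill_server_process
```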
function build_all_whl() {
for whl in ${build_whl_list[@]}
do
: # build step collapsed in this excerpt
done
}

function end_hook() {
: # body collapsed in this excerpt
}
function main() {
set_env $1 $2
before_hook
build_all_whl
check
# remaining test invocations collapsed in this excerpt
}
main $@