Unverified commit 5b075407, authored by Jiawei Wang and committed by GitHub

Merge branch 'develop' into auth

# Paddle Serving Design
([简体中文](./C++DESIGN_CN.md)|English)
## 1. Background
@@ -45,11 +45,11 @@ Models that can be predicted using the Paddle Inference Library, models saved du
### 3.4 Server Interface
![Server Interface](server_interface.png)
### 3.5 Client Interface
<img src='client_inferface.png' width = "600" height = "200">
### 3.6 Client io used during Training
@@ -66,7 +66,7 @@ def save_model(server_model_folder,
## 4. Paddle Serving Underlying Framework
![Paddle-Serving Overall Architecture](framework.png)
**Model Management Framework**: Connects model files from multiple machine learning platforms and provides a unified inference interface.
**Business Scheduling Framework**: Abstracts the computation logic of different inference models, provides a general DAG scheduling framework, and connects different operators through a DAG graph to jointly complete a prediction service. This abstraction lets users conveniently implement their own computation logic and also makes operators easy to share. (When users build their own prediction services, a large part of the work is building the DAG and providing operators.)
@@ -102,31 +102,31 @@ class FluidFamilyCore {
With reference to the model-computation abstraction of the TensorFlow framework, the business logic is abstracted into a DAG graph, driven by configuration, which generates a workflow and skips C++ code compilation. Each concrete step of a service corresponds to a specific OP, and an OP can declare the upstream OPs it depends on. Unified message passing between OPs is achieved by thread-level bus and channel mechanisms. For example, a simple prediction service can be abstracted into 3 steps, reading request data -> calling the prediction interface -> writing back the prediction result, implemented as 3 OPs: ReaderOp -> ClassifyOp -> WriteOp.
![Infer Service](predict-service.png)
Regarding the dependencies between OPs, and the establishment of workflows through OPs, you can refer to [从零开始写一个预测服务](CREATING.md) (simplified Chinese version)
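As an illustration of the configuration-driven DAG, a ReaderOp -> ClassifyOp -> WriteOp workflow might be declared roughly as below. This is a hedged sketch in the spirit of the demo-serving `workflow.prototxt`; treat the node names as placeholders and check the exact schema against the documents referenced above.

```
workflows {
  name: "workflow1"
  workflow_type: "Sequence"
  nodes {
    name: "image_reader_op"
    type: "ReaderOp"
  }
  nodes {
    name: "image_classify_op"
    type: "ClassifyOp"
    dependencies {
      name: "image_reader_op"
      mode: "RO"
    }
  }
  nodes {
    name: "write_op"
    type: "WriteOp"
    dependencies {
      name: "image_classify_op"
      mode: "RO"
    }
  }
}
```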
Server instance perspective
![Server instance perspective](server-side.png)
#### 4.2.2 Paddle Serving Multi-Service Mechanism
![Paddle Serving multi-service](multi-service.png)
Paddle Serving instances can load multiple models at the same time, and each model uses a Service (with its configured workflow) to serve requests. You can refer to the [service configuration file in the demo example](../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for a serving instance.
#### 4.2.3 Hierarchical relationship of business scheduling
From the client's perspective, a Paddle Serving service can be divided into three levels, from top to bottom: Service, Endpoint, and Variant.
![Call hierarchy relationship](multi-variants.png)
One Service corresponds to one inference model, and there is one endpoint under the model. Different versions of the model are implemented through multiple variants under the endpoint:
The same model prediction service can configure multiple variants, and each variant has its own downstream IP list. The client code can assign a relative weight to each variant to adjust the traffic ratio (refer to the description of variant_weight_list in [Client Configuration](CLIENT_CONFIGURE.md) section 3.2).
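As a concrete illustration of weighted variants, a client-side predictors configuration might look roughly like the sketch below. The field names follow the demo client configs, but the service name, tags, ports, and weights are placeholders; check the exact schema in [Client Configuration](CLIENT_CONFIGURE.md).

```
predictors {
  name: "ximage"
  service_name: "baidu.paddle_serving.predictor.image_classification.ImageClassifyService"
  endpoint_router: "WeightedRandomRender"
  weighted_random_render_conf {
    variant_weight_list: "30|70"
  }
  variants {
    tag: "var1"
    naming_conf {
      cluster: "list://127.0.0.1:8010"
    }
  }
  variants {
    tag: "var2"
    naming_conf {
      cluster: "list://127.0.0.1:8011"
    }
  }
}
```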
![Client-side proxy function](client-side-proxy.png)
## 5. User Interface
@@ -141,7 +141,7 @@ No matter how the communication protocol changes, the framework only needs to en
### 5.1 Data Compression Method
Baidu-rpc has built-in data compression methods such as snappy, gzip, and zlib, which can be configured in the configuration file (refer to [Client Configuration](CLIENT_CONFIGURE.md) section 3.1 for an introduction to compress_type).
### 5.2 C++ SDK API Interface
...
# Paddle Serving Design
(简体中文|[English](./C++DESIGN.md))
Note: the content of this page is outdated; please refer to the [design document](DESIGN_DOC_CN.md) instead.
## 1. Background
@@ -47,11 +47,11 @@ PaddlePaddle is Baidu's open-source machine learning framework, which widely supports various deep learning…
### 3.4 Server Interface
![Server Interface](server_interface.png)
### 3.5 Client Interface
<img src='client_inferface.png' width = "600" height = "200">
### 3.6 Client io used during Training
@@ -68,7 +68,7 @@ def save_model(server_model_folder,
## 4. Paddle Serving Underlying Framework
![Paddle-Serving Overall Architecture](framework.png)
**Model Management Framework**: Connects model files from multiple machine learning platforms and provides a unified inference interface.
**Business Scheduling Framework**: Abstracts the computation logic of different inference models, provides a general DAG scheduling framework, and connects different operators through a DAG graph to jointly complete a prediction service. This abstraction lets users conveniently implement their own computation logic and also makes operators easy to share. (When users build their own prediction services, a large part of the work is building the DAG and providing operators.)
@@ -104,31 +104,31 @@ class FluidFamilyCore {
With reference to the model-computation abstraction of the TensorFlow framework, the business logic is abstracted into a DAG graph, driven by configuration, which generates a workflow and skips C++ code compilation. Each concrete step of a service corresponds to a specific OP, and an OP can declare the upstream OPs it depends on. Unified message passing between OPs is achieved by thread-level bus and channel mechanisms. For example, a simple prediction service can be abstracted into 3 steps, reading request data -> calling the prediction interface -> writing back the prediction result, implemented as 3 OPs: ReaderOp -> ClassifyOp -> WriteOp.
![Infer Service](predict-service.png)
Regarding the dependencies between OPs, and how to build a workflow from OPs, refer to the relevant chapters of [从零开始写一个预测服务](CREATING.md).
Server instance perspective
![Server instance perspective](server-side.png)
#### 4.2.2 Paddle Serving Multi-Service Mechanism
![Paddle Serving multi-service](multi-service.png)
Paddle Serving instances can load multiple models at the same time, and each model uses a Service (with its configured workflow) to serve requests. You can refer to the [service configuration file in the demo example](../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for a serving instance.
#### 4.2.3 Hierarchical relationship of business scheduling
From the client's perspective, a Paddle Serving service can be divided into three levels, from top to bottom: Service, Endpoint, and Variant.
![Call hierarchy relationship](multi-variants.png)
One Service corresponds to one inference model, and there is one endpoint under the model. Different versions of the model are implemented through multiple variants under the endpoint:
The same model prediction service can configure multiple variants, and each variant has its own downstream IP list. The client code can assign a relative weight to each variant to adjust the traffic ratio (refer to the description of variant_weight_list in [Client Configuration](CLIENT_CONFIGURE.md) section 3.2).
![Client-side proxy function](client-side-proxy.png)
## 5. User Interface
@@ -143,7 +143,7 @@ Paddle Serving instances can load multiple models at the same time; each model uses a Service…
### 5.1 Data Compression Method
Baidu-rpc has built-in data compression methods such as snappy, gzip, and zlib, which can be configured in the configuration file (refer to [Client Configuration](CLIENT_CONFIGURE.md) section 3.1 for an introduction to compress_type).
### 5.2 C++ SDK API Interface
...
@@ -4,10 +4,10 @@
Image classification distinguishes images of different categories based on their semantic information. It is an important fundamental problem in computer vision and the basis of other high-level vision tasks such as image detection, image segmentation, object tracking, and behavior analysis. Image classification is widely used in many fields, including face recognition and intelligent video analysis in security, traffic scene recognition in transportation, content-based image retrieval and automatic album categorization on the internet, and image recognition in medicine.
Paddle Serving already provides a ResNet-based model prediction service. Build Paddle Serving following the steps in INSTALL.md, then start the client and server as described in GETTING_STARTED.md to see the prediction service in action.
The rest of this document takes the image classification task as an example and walks through building a model prediction service from scratch.
**This document is for reference only; some of the interfaces have changed.**
## 2. Serving Side
@@ -75,9 +75,9 @@ service ImageClassifyService {
#### 2.2.2 Example Configuration
For details on the server-side configuration, refer to [Serving-side configuration](SERVING_CONFIGURE.md).
The following configuration file chains ReaderOP, ClassifyOP, and WriteJsonOP into one workflow (for concepts such as OP and workflow, refer to the [design document](C++DESIGN_CN.md)).
- Example configuration file:
@@ -392,4 +392,4 @@ predictors {
}
}
```
For detailed client configuration options, refer to [CLIENT CONFIGURATION](CLIENT_CONFIGURE.md).
@@ -60,7 +60,7 @@ Docker is an open source application container engine that allows developers to
Paddle Serving provides client SDKs in 4 languages: Python, C++, Java, and Golang. The Golang SDK is under construction; we hope interested open-source developers will help by submitting PRs.
+ Python, refer to the client examples under python/examples or the 4.2 web service example (a minimal client call is sketched below)
+ C++, refer to [从零开始写一个预测服务](CREATING.md)
+ Java, refer to [Paddle Serving Client Java SDK](JAVA_SDK.md)
+ Golang, refer to [How to use Go Client of Paddle Serving](deprecated/IMDB_GO_CLIENT.md)
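The following is a minimal sketch of a Python client call. The client config path, endpoint, and feed/fetch names are placeholders modeled on the examples under python/examples, and the exact feed format depends on your model and Paddle Serving version.

```python
from paddle_serving_client import Client

client = Client()
# serving_client_conf.prototxt is generated when the model is saved for serving
client.load_client_config("serving_client_conf/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])

# feed/fetch variable names come from the client config of your model
result = client.predict(feed={"words": [1, 2, 3]}, fetch=["prediction"], batch=False)
print(result)
```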
...
@@ -61,7 +61,7 @@ Docker is an open-source application container engine that lets developers package their…
Paddle Serving provides client SDKs in 4 languages: Python, C++, Java, and Golang. The Golang SDK is under construction; interested open-source developers are welcome to submit PRs.
+ Python, refer to the client examples under python/examples or the 4.2 web service example
+ C++, refer to [从零开始写一个预测服务](CREATING.md)
+ Java, refer to [Paddle Serving Client Java SDK](JAVA_SDK_CN.md)
+ Golang, refer to [如何在Paddle Serving使用Go Client](deprecated/IMDB_GO_CLIENT_CN.md)
...
# Imagenet Pipeline WebService
This document takes the Imagenet service as an example to introduce how to use Pipeline WebService.
## Get model
```
sh get_model.sh
```
## Start server
```
python3 web_service.py &>log.txt &
```
## RPC test
```
python3 pipeline_rpc_client.py
```
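pipeline_rpc_client.py is roughly equivalent to the hedged sketch below. The port (9993) and the feed/fetch keys are assumptions that must match web_service.py and config.yml in this example.

```python
import base64
from paddle_serving_server.pipeline import PipelineClient

client = PipelineClient()
client.connect(['127.0.0.1:9993'])

with open("daisy.jpg", "rb") as f:
    image = base64.b64encode(f.read()).decode("utf8")

# the key must match what the service's preprocess() expects
ret = client.predict(feed_dict={"image": image}, fetch=["label", "prob"])
print(ret)
```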
# Imagenet Pipeline WebService
This document takes the Imagenet service as an example to introduce how to use Pipeline WebService.
## Get model
```
sh get_model.sh
```
## Start server
```
python3 web_service.py &>log.txt &
```
## Test
```
python3 pipeline_rpc_client.py
```
@@ -10,10 +10,10 @@ sh get_model.sh
## Start server
```
python3 resnet50_web_service.py &>log.txt &
```
## RPC test
```
python3 pipeline_rpc_client.py
```
@@ -10,11 +10,10 @@ sh get_model.sh
## Start server
```
python3 resnet50_web_service.py &>log.txt &
```
## Test
```
python3 pipeline_rpc_client.py
```
@@ -31,3 +31,11 @@ op:
#Fetch result list, using the alias_name of fetch_var in client_config
fetch_list: ["score"]
#precision: inference precision; lowering the precision can speed up inference
#GPU supports: "fp32"(default), "fp16", "int8";
#CPU supports: "fp32"(default), "fp16", "bf16"(mkldnn); "int8" is not supported
precision: "fp16"
#ir_optim switch
ir_optim: False
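For reference, these precision strings are mapped to Paddle Inference precision types inside the local predictor roughly as sketched below. The enum names come from the paddle.inference API; the mapping itself is illustrative.

```python
import paddle.inference as paddle_infer

# illustrative mapping from config strings to Paddle Inference precision types
precision_map = {
    "int8": paddle_infer.PrecisionType.Int8,
    "fp16": paddle_infer.PrecisionType.Half,
    "fp32": paddle_infer.PrecisionType.Float32,
}
precision_type = precision_map.get("fp16", paddle_infer.PrecisionType.Float32)
```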
@@ -8,12 +8,12 @@ sh get_data.sh
## Start servers
```
python3 -m paddle_serving_server.serve --model imdb_cnn_model --port 9292 &> cnn.log &
python3 -m paddle_serving_server.serve --model imdb_bow_model --port 9393 &> bow.log &
python3 test_pipeline_server.py &>pipeline.log &
```
## Start clients
```
python3 test_pipeline_client.py
```
@@ -8,12 +8,12 @@ sh get_data.sh
## Start servers
```
python3 -m paddle_serving_server.serve --model imdb_cnn_model --port 9292 &> cnn.log &
python3 -m paddle_serving_server.serve --model imdb_bow_model --port 9393 &> bow.log &
python3 test_pipeline_server.py &>pipeline.log &
```
## Start clients
```
python3 test_pipeline_client.py
```
@@ -4,11 +4,13 @@
This document takes OCR as an example to show how to use Pipeline WebService to start a multi-model tandem service.
This OCR example only supports the Process OP mode.
## Get Model
```
python3 -m paddle_serving_app.package --get_model ocr_rec
tar -xzvf ocr_rec.tar.gz
python3 -m paddle_serving_app.package --get_model ocr_det
tar -xzvf ocr_det.tar.gz
```
@@ -18,14 +20,16 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/ocr/test_imgs.t
tar xf test_imgs.tar
```
## Run services
### 1. Start a single server and client
```
python3 web_service.py &>log.txt &
```
Test
```
python3 pipeline_http_client.py
```
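pipeline_http_client.py essentially posts a base64-encoded image to the pipeline HTTP endpoint. A hedged sketch is shown below; the port (9999) and the URL path (/ocr/prediction) are assumptions that must match config.yml and web_service.py in this example.

```python
import base64
import json
import requests

url = "http://127.0.0.1:9999/ocr/prediction"
with open("imgs/1.jpg", "rb") as f:
    image = base64.b64encode(f.read()).decode("utf8")

# the pipeline web service expects parallel "key"/"value" lists
data = {"key": ["image"], "value": [image]}
resp = requests.post(url=url, data=json.dumps(data))
print(resp.json())
```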
<!--
@@ -35,11 +39,22 @@ python pipeline_http_client.py
### RPC
```
python3 pipeline_rpc_client.py
```
### HTTP
```
python3 pipeline_http_client.py
```
-->
### 2. Run benchmark
```
python3 web_service.py &>log.txt &
```
Test
```
sh benchmark.sh
```
@@ -3,12 +3,13 @@
([English](./README.md)|简体中文)
This document takes OCR as an example to introduce how to use Pipeline WebService to start a multi-model tandem service.
This example only supports the Process OP mode.
## Get Model
```
python3 -m paddle_serving_app.package --get_model ocr_rec
tar -xzvf ocr_rec.tar.gz
python3 -m paddle_serving_app.package --get_model ocr_det
tar -xzvf ocr_det.tar.gz
```
@@ -19,13 +20,15 @@ tar xf test_imgs.tar
```
## Start WebService
### 1. Start a single server and client
```
python3 web_service.py &>log.txt &
```
## Test
```
python3 pipeline_http_client.py
```
<!--
@@ -36,12 +39,22 @@ python pipeline_http_client.py
### RPC
```
python3 pipeline_rpc_client.py
```
### HTTP
```
python3 pipeline_http_client.py
```
-->
### 2. Run benchmark
```
python3 web_service.py &>log.txt &
```
Test
```
sh benchmark.sh
```
@@ -30,4 +30,12 @@ op:
client_type: local_predictor
#Fetch result list, using the alias_name of fetch_var in client_config
fetch_list: ["price"]
#precision: inference precision; lowering the precision can speed up inference
#GPU supports: "fp32"(default), "fp16", "int8";
#CPU supports: "fp32"(default), "fp16", "bf16"(mkldnn); "int8" is not supported
precision: "FP16"
#ir_optim switch
ir_optim: False
# -*- coding: utf-8 -*-
#
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=doc-string-missing
from __future__ import unicode_literals, absolute_import
import os
import sys
import time
import json
import requests
import numpy as np
from paddle_serving_client import Client
from paddle_serving_client.utils import MultiThreadRunner
from paddle_serving_client.utils import benchmark_args, show_latency
from paddle_serving_app.reader import Sequential, File2Image, Resize, CenterCrop
from paddle_serving_app.reader import RGB2BGR, Transpose, Div, Normalize
args = benchmark_args()
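# single_func runs in one worker thread: it preprocesses daisy.jpg, sends `turns` batched RPC
# requests to its assigned endpoint, and returns the elapsed time (plus per-request latencies
# when FLAGS_serving_latency is set).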
def single_func(idx, resource):
total_number = 0
profile_flags = False
latency_flags = False
if os.getenv("FLAGS_profile_client"):
profile_flags = True
if os.getenv("FLAGS_serving_latency"):
latency_flags = True
latency_list = []
if args.request == "rpc":
client = Client()
client.load_client_config(args.model)
client.connect([resource["endpoint"][idx % len(resource["endpoint"])]])
start = time.time()
for i in range(turns):
if args.batch_size >= 1:
l_start = time.time()
seq = Sequential([
File2Image(), Resize(256), CenterCrop(224), RGB2BGR(),
Transpose((2, 0, 1)), Div(255), Normalize(
[0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True)
])
image_file = "daisy.jpg"
img = seq(image_file)
feed_data = np.array(img)
feed_data = np.expand_dims(feed_data, 0).repeat(
args.batch_size, axis=0)
result = client.predict(
feed={"image": feed_data},
fetch=["save_infer_model/scale_0.tmp_0"],
batch=True)
l_end = time.time()
if latency_flags:
latency_list.append(l_end * 1000 - l_start * 1000)
total_number = total_number + 1
else:
print("unsupport batch size {}".format(args.batch_size))
else:
raise ValueError("not implemented {} request".format(args.request))
end = time.time()
if latency_flags:
return [[end - start], latency_list, [total_number]]
else:
return [[end - start]]
if __name__ == '__main__':
multi_thread_runner = MultiThreadRunner()
endpoint_list = ["127.0.0.1:9393"]
turns = 1
start = time.time()
result = multi_thread_runner.run(
single_func, args.thread, {"endpoint": endpoint_list,
"turns": turns})
end = time.time()
total_cost = end - start
total_number = 0
avg_cost = 0
for i in range(args.thread):
avg_cost += result[0][i]
total_number += result[2][i]
avg_cost = avg_cost / args.thread
print("total cost-include init: {}s".format(total_cost))
print("each thread cost: {}s. ".format(avg_cost))
print("qps: {}samples/s".format(args.batch_size * total_number / (
avg_cost * args.thread)))
print("qps(request): {}samples/s".format(total_number / (avg_cost *
args.thread)))
print("total count: {} ".format(total_number))
if os.getenv("FLAGS_serving_latency"):
show_latency(result[1])
rm profile_log*
rm -rf resnet_log*
export CUDA_VISIBLE_DEVICES=0,1,2,3
export FLAGS_profile_server=1
export FLAGS_profile_client=1
export FLAGS_serving_latency=1
gpu_id=3
#save cpu and gpu utilization log
if [ -d utilization ];then
rm -rf utilization
else
mkdir utilization
fi
#start server
python3.6 -m paddle_serving_server.serve --model $1 --port 9393 --thread 10 --gpu_ids $gpu_id --use_trt --ir_optim > elog 2>&1 &
sleep 15
#warm up
python3.6 benchmark.py --thread 1 --batch_size 1 --model $2/serving_client_conf.prototxt --request rpc > profile 2>&1
echo -e "import psutil\nimport time\nwhile True:\n\tcpu_res = psutil.cpu_percent()\n\twith open('cpu.txt', 'a+') as f:\n\t\tf.write(f'{cpu_res}\\\n')\n\ttime.sleep(0.1)" > cpu.py
for thread_num in 1 2 4 8 16
do
for batch_size in 1 4 8 16 32
do
job_bt=`date '+%Y%m%d%H%M%S'`
nvidia-smi --id=$gpu_id --query-compute-apps=used_memory --format=csv -lms 100 > gpu_memory_use.log 2>&1 &
nvidia-smi --id=$gpu_id --query-gpu=utilization.gpu --format=csv -lms 100 > gpu_utilization.log 2>&1 &
rm -rf cpu.txt
python3.6 cpu.py &
gpu_memory_pid=$!
python3.6 benchmark.py --thread $thread_num --batch_size $batch_size --model $2/serving_client_conf.prototxt --request rpc > profile 2>&1
kill `ps -ef|grep used_memory|awk '{print $2}'` > /dev/null
kill `ps -ef|grep utilization.gpu|awk '{print $2}'` > /dev/null
kill `ps -ef|grep cpu.py|awk '{print $2}'` > /dev/null
echo "model_name:" $1
echo "thread_num:" $thread_num
echo "batch_size:" $batch_size
echo "=================Done===================="
echo "model_name:$1" >> profile_log_$1
echo "batch_size:$batch_size" >> profile_log_$1
job_et=`date '+%Y%m%d%H%M%S'`
awk 'BEGIN {max = 0} {if(NR>1){if ($1 > max) max=$1}} END {print "CPU_UTILIZATION:", max}' cpu.txt >> profile_log_$1
#awk 'BEGIN {max = 0} {if(NR>1){if ($1 > max) max=$1}} END {print "MAX_GPU_MEMORY:", max}' gpu_memory_use.log >> profile_log_$1
#awk 'BEGIN {max = 0} {if(NR>1){if ($1 > max) max=$1}} END {print "GPU_UTILIZATION:", max}' gpu_utilization.log >> profile_log_$1
grep -av '^0 %' gpu_utilization.log > gpu_utilization.log.tmp
awk 'BEGIN {max = 0} {if(NR>1){if ($1 > max) max=$1}} END {print "MAX_GPU_MEMORY:", max}' gpu_memory_use.log >> profile_log_$1
awk -F" " '{sum+=$1} END {print "GPU_UTILIZATION:", sum/NR, sum, NR }' gpu_utilization.log.tmp >> profile_log_$1
rm -rf gpu_memory_use.log gpu_utilization.log gpu_utilization.log.tmp
python3.6 ../util/show_profile.py profile $thread_num >> profile_log_$1
tail -n 10 profile >> profile_log_$1
echo "" >> profile_log_$1
done
done
#Divided log
awk 'BEGIN{RS="\n\n"}{i++}{print > "resnet_log_"i}' profile_log_$1
mkdir resnet_log && mv resnet_log_* resnet_log
ps -ef|grep 'serving'|grep -v grep|cut -c 9-15 | xargs kill -9
if [ ! -x "ResNet50.tar.gz"]; then
wget https://paddle-inference-dist.bj.bcebos.com/AI-Rank/models/Paddle/ResNet50.tar.gz
fi
tar -xzvf ResNet50.tar.gz
python3.6 -m paddle_serving_client.convert --dirname ./ResNet50 --model_filename model --params_filename params
bash benchmark.sh serving_server serving_client
@@ -119,8 +119,11 @@ class LocalPredictor(object):
self.fetch_names_to_type_[var.alias_name] = var.fetch_type
precision_type = paddle_infer.PrecisionType.Float32
if precision is not None and precision.lower() in precision_map:
precision_type = precision_map[precision.lower()]
else:
logger.warning("precision error!!! Please check precision:{}".
format(precision))
if use_profile:
config.enable_profile()
if mem_optim:
@@ -156,8 +159,11 @@ class LocalPredictor(object):
if not use_gpu and not use_lite:
if precision_type == paddle_infer.PrecisionType.Int8:
logger.warning(
"PRECISION INT8 is not supported in CPU right now! Please use fp16 or bf16."
)
#config.enable_quantizer()
if precision is not None and precision.lower() == "bf16":
config.enable_mkldnn_bfloat16()
self.predictor = paddle_infer.create_predictor(config)
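The precision argument handled here is the one passed into load_model_config when a LocalPredictor is used directly. A hedged usage sketch follows; the model directory and the exact keyword list are assumptions.

```python
from paddle_serving_app.local_predict import LocalPredictor

predictor = LocalPredictor()
predictor.load_model_config(
    "serving_server",   # model config directory (placeholder)
    use_gpu=False,
    precision="bf16",   # "fp32"(default) / "fp16" / "int8" / "bf16"(CPU mkldnn)
    mem_optim=True,
    ir_optim=False)
```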
...
@@ -44,7 +44,8 @@ class LocalServiceHandler(object):
mem_optim=True,
ir_optim=False,
available_port_generator=None,
use_profile=False,
precision="fp32"):
"""
Initialization of LocalServiceHandler
@@ -62,6 +63,7 @@ class LocalServiceHandler(object):
ir_optim: use calculation chart optimization, False default.
available_port_generator: generate available ports
use_profile: use profiling, False default.
precision: inference precision, e.g. "fp32", "fp16", "int8"
Returns:
None
@@ -137,16 +139,17 @@ class LocalServiceHandler(object):
self._server_pros = []
self._use_profile = use_profile
self._fetch_names = fetch_names
self._precision = precision
_LOGGER.info(
"Models({}) will be launched by device {}. use_gpu:{}, "
"use_trt:{}, use_lite:{}, use_xpu:{}, device_type:{}, devices:{}, "
"mem_optim:{}, ir_optim:{}, use_profile:{}, thread_num:{}, "
"client_type:{}, fetch_names:{} precision:{}".format(
model_config, self._device_name, self._use_gpu, self._use_trt,
self._use_lite, self._use_xpu, device_type, self._devices, self.
_mem_optim, self._ir_optim, self._use_profile, self._thread_num,
self._client_type, self._fetch_names, self._precision))
def get_fetch_list(self):
return self._fetch_names
@@ -197,14 +200,15 @@ class LocalServiceHandler(object):
ir_optim=self._ir_optim,
use_trt=self._use_trt,
use_lite=self._use_lite,
use_xpu=self._use_xpu,
precision=self._precision)
return self._local_predictor_client
def get_client_config(self):
return os.path.join(self._model_config, "serving_server_conf.prototxt")
def _prepare_one_server(self, workdir, port, gpuid, thread_num, mem_optim,
ir_optim, precision):
"""
According to self._device_name, generating one Cpu/Gpu/Arm Server, and
setting the model config and startup params.
@@ -216,6 +220,7 @@ class LocalServiceHandler(object):
thread_num: thread num
mem_optim: use memory/graphics memory optimization
ir_optim: use calculation chart optimization
precision: inference precision, e.g. "fp32", "fp16", "int8"
Returns:
server: CpuServer/GpuServer
@@ -256,6 +261,7 @@ class LocalServiceHandler(object):
server.set_num_threads(thread_num)
server.set_memory_optimize(mem_optim)
server.set_ir_optimize(ir_optim)
server.set_precision(precision)
server.load_model_config(self._model_config)
server.prepare_server(
@@ -292,7 +298,8 @@ class LocalServiceHandler(object):
device_id,
thread_num=self._thread_num,
mem_optim=self._mem_optim,
ir_optim=self._ir_optim,
precision=self._precision))
def start_server(self):
"""
...
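The new set_precision setter used above sits alongside the existing Server setters. A hedged sketch of standing up a CPU server programmatically with it is shown below; the op names, model directory, and port follow the public examples and are assumptions here.

```python
from paddle_serving_server import OpMaker, OpSeqMaker, Server

op_maker = OpMaker()
read_op = op_maker.create('general_reader')
infer_op = op_maker.create('general_infer')
response_op = op_maker.create('general_response')

op_seq_maker = OpSeqMaker()
op_seq_maker.add_op(read_op)
op_seq_maker.add_op(infer_op)
op_seq_maker.add_op(response_op)

server = Server()
server.set_op_sequence(op_seq_maker.get_op_sequence())
server.set_num_threads(4)
server.set_precision("fp16")                 # setter added in this change
server.load_model_config("serving_server")   # placeholder model directory
server.prepare_server(workdir="workdir", port=9292, device="cpu")
server.run_server()
```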
@@ -42,22 +42,28 @@ logger_config = {
},
"handlers": {
"f_pipeline.log": {
"class": "logging.handlers.RotatingFileHandler",
"level": "INFO",
"formatter": "normal_fmt",
"filename": os.path.join(log_dir, "pipeline.log"),
"maxBytes": 512000000,
"backupCount": 20,
},
"f_pipeline.log.wf": {
"class": "logging.handlers.RotatingFileHandler",
"level": "WARNING",
"formatter": "normal_fmt",
"filename": os.path.join(log_dir, "pipeline.log.wf"),
"maxBytes": 512000000,
"backupCount": 10,
},
"f_tracer.log": {
"class": "logging.handlers.RotatingFileHandler",
"level": "INFO",
"formatter": "tracer_fmt",
"filename": os.path.join(log_dir, "pipeline.tracer"),
"maxBytes": 512000000,
"backupCount": 5,
},
},
"loggers": {
...
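The switch from FileHandler to RotatingFileHandler above caps each log file at maxBytes bytes and keeps backupCount rotated copies. A minimal standalone illustration with the standard library (file name and sizes are just examples):

```python
import logging
import logging.config

logging.config.dictConfig({
    "version": 1,
    "formatters": {
        "normal_fmt": {"format": "%(levelname)s %(asctime)s %(name)s %(message)s"},
    },
    "handlers": {
        "rotating_file": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "INFO",
            "formatter": "normal_fmt",
            "filename": "pipeline.log",
            "maxBytes": 512000000,
            "backupCount": 20,
        },
    },
    "root": {"level": "INFO", "handlers": ["rotating_file"]},
})
logging.getLogger(__name__).info("rotating file handler active")
```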
@@ -138,6 +138,7 @@ class Op(object):
self.devices = ""
self.mem_optim = False
self.ir_optim = False
self.precision = "fp32"
if self._server_endpoints is None:
server_endpoints = conf.get("server_endpoints", [])
if len(server_endpoints) != 0:
@@ -159,6 +160,7 @@ class Op(object):
self.mem_optim = local_service_conf.get("mem_optim")
self.ir_optim = local_service_conf.get("ir_optim")
self._fetch_names = local_service_conf.get("fetch_list")
self.precision = local_service_conf.get("precision")
if self.model_config is None:
self.with_serving = False
else:
@@ -173,7 +175,8 @@ class Op(object):
device_type=self.device_type,
devices=self.devices,
mem_optim=self.mem_optim,
ir_optim=self.ir_optim,
precision=self.precision)
service_handler.prepare_server() # get fetch_list
serivce_ports = service_handler.get_port_list()
self._server_endpoints = [
@@ -195,7 +198,8 @@ class Op(object):
devices=self.devices,
fetch_names=self._fetch_names,
mem_optim=self.mem_optim,
ir_optim=self.ir_optim,
precision=self.precision)
if self._client_config is None:
self._client_config = service_handler.get_client_config(
)
@@ -560,7 +564,7 @@ class Op(object):
self._get_output_channels(), False, trace_buffer,
self.model_config, self.workdir, self.thread_num,
self.device_type, self.devices, self.mem_optim,
self.ir_optim, self.precision))
p.daemon = True
p.start()
process.append(p)
@@ -594,7 +598,7 @@ class Op(object):
self._get_output_channels(), True, trace_buffer,
self.model_config, self.workdir, self.thread_num,
self.device_type, self.devices, self.mem_optim,
self.ir_optim, self.precision))
# When a process exits, it attempts to terminate
# all of its daemonic child processes.
t.daemon = True
@@ -1064,7 +1068,7 @@ class Op(object):
def _run(self, concurrency_idx, input_channel, output_channels,
is_thread_op, trace_buffer, model_config, workdir, thread_num,
device_type, devices, mem_optim, ir_optim, precision):
"""
_run() is the entry function of OP process / thread model. When client
type is local_predictor in process mode, the CUDA environment needs to
@@ -1085,7 +1089,8 @@ class Op(object):
device_type: support multiple devices
devices: gpu id list[gpu], "" default[cpu]
mem_optim: use memory/graphics memory optimization, True default.
ir_optim: use calculation chart optimization, False default.
precision: inference precision, e.g. "fp32", "fp16", "int8"
Returns:
None
@@ -1104,7 +1109,8 @@ class Op(object):
device_type=device_type,
devices=devices,
mem_optim=mem_optim,
ir_optim=ir_optim,
precision=precision)
_LOGGER.info("Init cuda env in process {}".format(
concurrency_idx))
...