Unverified commit 776ea133, authored by TeslaZhao, committed by GitHub

Merge branch 'develop' into merge_branch

@@ -77,7 +77,7 @@ service ImageClassifyService {

For detailed information about the Serving-side configuration, refer to [Serving-side configuration](SERVING_CONFIGURE.md)

-The following configuration file chains ReaderOP, ClassifyOP and WriteJsonOP into one workflow (for concepts such as OP and workflow, refer to the [design document](../DESIGN.md))
+The following configuration file chains ReaderOP, ClassifyOP and WriteJsonOP into one workflow (for concepts such as OP and workflow, refer to the [design document](DESIGN.md))

- Configuration file example:
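The example itself is collapsed in this view. Purely as a hedged sketch, a workflow chaining these three OPs might look like the following, assuming the workflow.prototxt text format used by this repo's demos (node and dependency field names should be verified against SERVING_CONFIGURE.md; node names are illustrative):

```
workflows {
  name: "workflow1"
  workflow_type: "Sequence"
  nodes {
    name: "image_reader_op"
    type: "ReaderOp"
  }
  nodes {
    name: "image_classify_op"
    type: "ClassifyOp"
    dependencies {
      name: "image_reader_op"
      mode: "RO"
    }
  }
  nodes {
    name: "write_json_op"
    type: "WriteJsonOp"
    dependencies {
      name: "image_classify_op"
      mode: "RO"
    }
  }
}
```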
...
@@ -45,11 +45,11 @@ Models that can be predicted using the Paddle Inference Library, models saved du

### 3.4 Server Interface

-![Server Interface](server_interface.png)
+![Server Interface](../server_interface.png)

### 3.5 Client Interface

-<img src='client_inferface.png' width = "600" height = "200">
+<img src='../client_inferface.png' width = "600" height = "200">

### 3.6 Client io used during Training
@@ -66,7 +66,7 @@ def save_model(server_model_folder,
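The body of this hunk is collapsed. As a minimal sketch of how the client io `save_model` call shown in the hunk header is typically invoked at the end of training (the network here is a placeholder; the argument order follows the signature quoted in this doc):

```python
import paddle.fluid as fluid
from paddle_serving_client import io

# Illustrative graph: in a real training script these are the trained
# network's input and output variables
words = fluid.data(name="words", shape=[None, 13], dtype="float32")
prediction = fluid.layers.fc(input=words, size=2, act="softmax")

io.save_model("serving_server_model",      # server-side model folder
              "serving_client_conf",       # client-side config folder
              {"words": words},            # feed_var_dict
              {"prediction": prediction},  # fetch_var_dict
              fluid.default_main_program())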
## 4. Paddle Serving Underlying Framework

-![Paddle-Serving Overall Architecture](framework.png)
+![Paddle-Serving Overall Architecture](../framework.png)

**Model Management Framework**: connects model files from multiple machine learning platforms and provides a unified inference interface

**Business Scheduling Framework**: abstracts the computation logic of different inference models and provides a general DAG scheduling framework that connects operators into a DAG to jointly complete a prediction service. This abstraction lets users implement their own computation logic conveniently and makes operators easy to share. (When users build their own prediction services, a large part of the work is building the DAG and providing the operators.)
@@ -102,18 +102,18 @@ class FluidFamilyCore {

Following the abstraction of model computation in the TensorFlow framework, the business logic is abstracted into a DAG, driven by configuration to generate a workflow and skip C++ code compilation. Each concrete step of the service corresponds to a specific OP, and an OP can declare the upstream OPs it depends on. Message passing between OPs is handled uniformly by the thread-level bus and channel mechanisms. For example, the flow of a simple prediction service can be abstracted into 3 steps, reading request data -> calling the prediction interface -> writing back the prediction result, implemented as 3 OPs: ReaderOp -> ClassifyOp -> WriteOp
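The bus/channel machinery itself is C++ code not shown in this diff. Purely as an illustration of the scheduling idea, and not the framework's actual API, a toy config-driven DAG executor could look like this:

```python
# Toy illustration: each OP declares its upstream dependencies, and a
# "channel" (here a plain dict) carries messages between OPs.
def reader_op(channels):
    channels["reader"] = {"image": "raw request bytes"}

def classify_op(channels):
    data = channels["reader"]  # consume upstream output
    channels["classify"] = {"input": data, "label": "cat", "score": 0.97}

def write_op(channels):
    channels["response"] = channels["classify"]

# The DAG is described as data, mirroring how a workflow config wires OPs
dag = [("reader", reader_op, []),
       ("classify", classify_op, ["reader"]),
       ("response", write_op, ["classify"])]

channels = {}
for name, op, deps in dag:
    assert all(d in channels for d in deps), "upstream OPs must run first"
    op(channels)
print(channels["response"])
```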
-![Infer Service](predict-service.png)
+![Infer Service](../predict-service.png)

-Regarding the dependencies between OPs, and how workflows are built from OPs, refer to [从零开始写一个预测服务](./deprecated/CREATING.md) (simplified Chinese version)
+Regarding the dependencies between OPs, and how workflows are built from OPs, refer to [从零开始写一个预测服务](CREATING.md) (simplified Chinese version)

Server instance perspective

-![Server instance perspective](server-side.png)
+![Server instance perspective](../server-side.png)

#### 4.2.2 Paddle Serving Multi-Service Mechanism

-![Paddle Serving multi-service](multi-service.png)
+![Paddle Serving multi-service](../multi-service.png)

A Paddle Serving instance can load multiple models at the same time, and each model uses a Service (and its configured workflow) to serve requests. Refer to the [service configuration file in the demo example](../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for a serving instance
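As a hedged sketch of what such a multi-service configuration might contain (service and workflow names are illustrative; see the linked demo file for the authoritative format):

```
services {
  name: "ImageClassifyService"
  workflows: "workflow1"
}
services {
  name: "BuiltinEchoService"
  workflows: "workflow2"
}
```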
@@ -121,12 +121,12 @@ Paddle Serving instances can load multiple models at the same time, and each mod

From the client's perspective, a Paddle Serving service is divided into three levels, from top to bottom: Service, Endpoint, and Variant.

-![Call hierarchy relationship](multi-variants.png)
+![Call hierarchy relationship](../multi-variants.png)

One Service corresponds to one inference model, and the model has one endpoint. Different versions of the model are implemented through multiple variants under the endpoint:

The same model prediction service can configure multiple variants, each with its own downstream IP list. Client code can assign a relative weight to each variant to adjust the traffic ratio among them (refer to the description of variant_weight_list in section 3.2 of [Client Configuration](./deprecated/CLIENT_CONFIGURE.md)).
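A hedged sketch of how two weighted variants might be declared on the client side, assuming the predictor SDK config format described in CLIENT_CONFIGURE.md (field names should be verified there; service name and addresses are placeholders):

```
predictors {
  name: "ximage"
  service_name: "baidu.paddle_serving.predictor.image_classification.ImageClassifyService"
  endpoint_router: "WeightedRandomRender"
  weighted_random_render_conf {
    variant_weight_list: "30|70"  # 30% of traffic to var1, 70% to var2
  }
  variants {
    tag: "var1"
    naming_conf { cluster: "list://127.0.0.1:8010" }
  }
  variants {
    tag: "var2"
    naming_conf { cluster: "list://127.0.0.1:8011" }
  }
}
```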
-![Client-side proxy function](client-side-proxy.png)
+![Client-side proxy function](../client-side-proxy.png)

## 5. User Interface
...
@@ -47,11 +47,11 @@ PaddlePaddle是百度开源的机器学习框架,广泛支持各种深度学

### 3.4 Server Interface

-![Server Interface](server_interface.png)
+![Server Interface](../server_interface.png)

### 3.5 Client Interface

-<img src='client_inferface.png' width = "600" height = "200">
+<img src='../client_inferface.png' width = "600" height = "200">

### 3.6 Client io used during training
@@ -68,7 +68,7 @@ def save_model(server_model_folder,

## 4. Paddle Serving Underlying Framework

-![Paddle-Serving overall architecture](framework.png)
+![Paddle-Serving overall architecture](../framework.png)

**Model Management Framework**: connects model files from multiple machine learning platforms and exposes a unified inference interface upward

**Business Scheduling Framework**: abstracts the computation logic of different prediction models and provides a general DAG scheduling framework that chains operators into a DAG to jointly complete a prediction service. This abstraction lets users implement their own computation logic conveniently and makes operators easy to share. (When users build their own prediction services, a large part of the work is building the DAG and implementing the operators.)
@@ -104,31 +104,31 @@ class FluidFamilyCore {

Following the abstraction of model computation in the TensorFlow framework, the business logic is abstracted into a DAG, driven by configuration to generate a workflow and skip C++ code compilation. Each concrete step of the business corresponds to a specific OP, and an OP can declare the upstream OPs it depends on. Message passing between OPs is handled uniformly by the thread-level Bus and channel mechanisms. For example, the flow of a simple prediction service can be abstracted into 3 steps, reading request data -> calling the prediction interface -> writing back the prediction result, implemented as 3 OPs: ReaderOp -> ClassifyOp -> WriteOp

-![Prediction Service](predict-service.png)
+![Prediction Service](../predict-service.png)

For the dependencies between OPs, and how to build a workflow from OPs, see the relevant sections of [从零开始写一个预测服务](https://github.com/PaddlePaddle/Serving/blob/develop/doc/deprecated/CREATING.md)
Server instance perspective

-![Server instance perspective](server-side.png)
+![Server instance perspective](../server-side.png)

#### 4.2.2 Paddle Serving Multi-Service Mechanism

-![Paddle Serving multi-service](multi-service.png)
+![Paddle Serving multi-service](../multi-service.png)

-A Paddle Serving instance can load multiple models at the same time, and each model uses a Service (and its configured workflow) to serve requests. Refer to the [service configuration file in the demo example](../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for a serving instance
+A Paddle Serving instance can load multiple models at the same time, and each model uses a Service (and its configured workflow) to serve requests. Refer to the [service configuration file in the demo example](../..//tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for a serving instance

#### 4.2.3 Business Scheduling Hierarchy

From the client's perspective, a Paddle Serving service is divided into three levels, from top to bottom: Service, Endpoint, and Variant

-![Call hierarchy](multi-variants.png)
+![Call hierarchy](../multi-variants.png)

One Service corresponds to one prediction model; the model has one endpoint. Different versions of the model are implemented through multiple variants under the endpoint:

-The same model prediction service can configure multiple variants, each with its own downstream IP list. Client code can assign a relative weight to each variant to adjust the traffic ratio among them (refer to the description of variant_weight_list in section 3.2 of [Client Configuration](./deprecated/CLIENT_CONFIGURE.md)).
+The same model prediction service can configure multiple variants, each with its own downstream IP list. Client code can assign a relative weight to each variant to adjust the traffic ratio among them (refer to the description of variant_weight_list in section 3.2 of [Client Configuration](CLIENT_CONFIGURE.md)).

-![Client-side proxy function](client-side-proxy.png)
+![Client-side proxy function](../client-side-proxy.png)

## 5. User Interface
@@ -143,7 +143,7 @@ Paddle Serving实例可以同时加载多个模型,每个模型用一个Servic

### 5.1 Data Compression Methods

-Baidu-rpc has built-in data compression methods such as snappy, gzip and zlib, which can be configured in the configuration file (refer to the introduction of compress_type in section 3.1 of [Client Configuration](./deprecated/CLIENT_CONFIGURE.md))
+Baidu-rpc has built-in data compression methods such as snappy, gzip and zlib, which can be configured in the configuration file (refer to the introduction of compress_type in section 3.1 of [Client Configuration](CLIENT_CONFIGURE.md))

### 5.2 C++ SDK API
...
@@ -10,7 +10,7 @@ Next, we will take the text classification task as an example to show model ense

In this example (see the figure below), the server side predicts the BOW and CNN models with the same input in parallel within one service; the client side fetches the prediction results of the two models and post-processes them to obtain the final prediction result.

-![simple example](model_ensemble_example.png)
+![simple example](../model_ensemble_example.png)

Note that, at present, only multiple models with the same input and output format are supported within the same service. In this example, the input and output formats of the CNN and BOW models are the same.
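A minimal sketch of the client-side post-processing this describes, assuming each model's results come back in the fetch map keyed by model name (config paths and feed values are placeholders; check the example code for the exact result structure):

```python
import numpy as np
from paddle_serving_client import Client

client = Client()
client.load_client_config("imdb_bow_client_conf/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9393"])

# words: sparse ids of the input text; placeholder values for illustration
fetch_maps = client.predict(feed={"words": [1, 2, 3]}, fetch=["prediction"])

# Assumed layout: one fetch map per model, keyed by model name; average the
# two softmax outputs as a simple ensembling strategy
bow_pred = np.array(fetch_maps["bow"]["prediction"])
cnn_pred = np.array(fetch_maps["cnn"]["prediction"])
print("ensembled prediction:", (bow_pred + cnn_pred) / 2)
```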
...
@@ -10,7 +10,7 @@

In this example (see the figure below), the server side predicts the BOW and CNN models with the same input in parallel within one service; the client side fetches the prediction results of the two models and post-processes them to obtain the final prediction result.

-![simple example](model_ensemble_example.png)
+![simple example](../model_ensemble_example.png)

Note that, at present, only multiple models with the same input and output format are supported within the same service. In this example, the input and output formats of the CNN and BOW models are the same.
...
@@ -2,7 +2,7 @@

([简体中文](NEW_WEB_SERVICE_CN.md)|English)

-This document takes the image classification service based on the Imagenet dataset as an example to introduce how to develop a new web service. The complete code can be found [here](../python/examples/imagenet/resnet50_web_service.py).
+This document takes the image classification service based on the Imagenet dataset as an example to introduce how to develop a new web service. The complete code can be found [here](../../python/examples/imagenet/resnet50_web_service.py).

## WebService base class
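The base class walkthrough is collapsed in this diff. As a rough sketch of how a new service typically subclasses WebService (method names follow the referenced example script; the preprocessing bodies and model directory are placeholders):

```python
from paddle_serving_server.web_service import WebService

class ImageService(WebService):
    def preprocess(self, feed=[], fetch=[]):
        # convert the incoming JSON payload into model feed, e.g. decode and
        # resize the image; shown here as a pass-through placeholder
        return feed, fetch

    def postprocess(self, feed=[], fetch=[], fetch_map=None):
        # reshape scores, attach class labels, etc.
        return fetch_map

service = ImageService(name="image")
service.load_model_config("serving_server")  # placeholder model directory
service.prepare_server(workdir="workdir", port=9393)
service.run_rpc_service()
service.run_web_service()
```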
...
@@ -2,7 +2,7 @@

(简体中文|[English](NEW_WEB_SERVICE.md))

-This document takes the Imagenet image classification service as an example to introduce how to develop a new Web Service. The complete code is available [here](../python/examples/imagenet/resnet50_web_service.py).
+This document takes the Imagenet image classification service as an example to introduce how to develop a new Web Service. The complete code is available [here](../../python/examples/imagenet/resnet50_web_service.py).

## WebService base class
...
@@ -26,7 +26,7 @@ def serving_encryption():
    serving_client="encrypt_client",
    encryption=True)
```

-dirname is the folder path where the model is located. If the parameters are discrete, there is no need to specify params_filename; otherwise set params_filename='__params__'.
+dirname is the folder path where the model is located. If the parameters are discrete, there is no need to specify params_filename; otherwise set `params_filename="__params__"`.

The key is stored in the `key` file, and the encrypted model files and server-side configuration files are stored in the `encrypt_server` directory.

The client-side configuration files are stored in the `encrypt_client` directory.
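For context, a minimal sketch of what the full encryption call might look like, assuming the `inference_model_to_serving` helper from `paddle_serving_client.io` whose tail appears in the hunk above (the model directory is hypothetical):

```python
from paddle_serving_client.io import inference_model_to_serving

def serving_encryption():
    # dirname points at the saved inference model; encryption=True writes an
    # encrypted model plus a randomly generated key file
    inference_model_to_serving(
        dirname="./uci_housing_model",  # hypothetical model directory
        params_filename=None,           # or "__params__" for fused params
        serving_server="encrypt_server",
        serving_client="encrypt_client",
        encryption=True)

if __name__ == "__main__":
    serving_encryption()
```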
...
@@ -13,7 +13,7 @@ sh get_data.sh

## Model Encryption

This example uses modules from the `paddlepaddle` package, which needs to be installed first (`pip install paddlepaddle`).

-[python encrypt.py](./encrypt.py)
+Run [python encrypt.py](./encrypt.py) to encrypt the model

[//file]:#encrypt.py
``` python
@@ -25,7 +25,9 @@ def serving_encryption():
    serving_client="encrypt_client",
    encryption=True)
```

-where dirname is the folder path of the model; when the parameters are discrete there is no need to specify params_filename, and when the parameters are stored in __params__ you need to set params_filename='__params__'.
+where dirname is the folder path of the model
+when the parameters are discrete there is no need to specify params_filename; when the parameters are stored in __params__, set `params_filename="__params__"`

The key is stored in the `key` file; the encrypted model files and server-side configuration files are stored in the `encrypt_server` directory, and the client-side configuration files in the `encrypt_client` directory.
...
@@ -40,7 +40,7 @@ op:
    fetch_list: ["concat_1.tmp_0"]
    # Compute device IDs: when devices is "" or omitted, prediction runs on CPU; when devices is "0" or "0,1,2", prediction runs on GPU, listing the GPU cards to use
-    devices: "2"
+    devices: "0"
  rec:
    # Concurrency: with is_thread_op=True this is thread-level concurrency, otherwise process-level
    concurrency: 2
@@ -64,4 +64,4 @@ op:
    fetch_list: ["ctc_greedy_decoder_0.tmp_0", "softmax_0.tmp_0"]
    # Compute device IDs: when devices is "" or omitted, prediction runs on CPU; when devices is "0" or "0,1,2", prediction runs on GPU, listing the GPU cards to use
-    devices: "2"
+    devices: "0"
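This YAML is consumed by the pipeline server. A minimal sketch of how such a config is typically loaded, assuming the pipeline API in this repo's Python package (the OP graph and response op are omitted here, so this is the config-loading flow only):

```python
from paddle_serving_server.pipeline import PipelineServer

# A real service first builds its OPs (e.g. the det/rec ops configured in the
# YAML above) and wires them into a response op before starting the server.
server = PipelineServer()
# server.set_response_op(response_op)  # wire in the op graph here
server.prepare_server("config.yml")    # the YAML shown above
server.run_server()
```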