Unverified · Commit e8996239 · Authored by: Jiawei Wang · Committed by: GitHub

Merge branch 'develop' into readme2

@@ -4,158 +4,93 @@
## 1. Design Objectives
Online deployment of deep learning models will be a user-facing application in the future. Any AI developer will face the problem of deploying an online service for his or her trained model.
Paddle Serving is the official open source online deployment framework. The long term goal of Paddle Serving is to provide professional, reliable and easy-to-use online services for the last mile of AI applications.

- Industrial Oriented: To meet industrial deployment requirements, Paddle Serving supports many large-scale deployment capabilities: 1) model management, model hot loading, and model encryption/decryption; 2) cross-platform and multi-hardware deployment; 3) distributed sparse embedding indexing; 4) online A/B testing.
- High Performance: Inference performance is improved along two dimensions, low latency and high throughput: 1) the high-performance inference engine Paddle Inference is integrated; 2) Nvidia TensorRT is supported; 3) the high-performance network framework brpc is integrated; 4) the asynchronous Pipeline mode greatly improves throughput.
- Easy-To-Use: So that algorithm developers can quickly deploy their models online, Paddle Serving provides APIs that work seamlessly with Paddle's training process; most Paddle models can be deployed as a service with a single command. More than 20 common model cases and documents are provided.
- Extensibility: Paddle Serving provides client SDKs in four languages (C++, Python, Golang and Java), and more languages will be supported. It is easy to extend Paddle Serving to support other machine learning inference libraries, although the Paddle inference library is currently the only officially supported backend.
----
## 2. Preliminary Design

Any excellent software product must start from user needs and have a clear positioning and a good preliminary design. The same goes for Paddle Serving, which aims to provide professional, reliable and easy-to-use online services for the last mile of AI applications. We investigated the usage scenarios of a large number of users and abstracted them: for example, online services focus on high concurrency and low response time; offline services focus on high batch throughput and high resource utilization; and algorithm developers are good at using Python for model training and inference.

### 2.1 Design Selection

To meet the needs of users in different scenarios, Paddle Serving's product positioning uses lower-dimensional features, such as response time, throughput and development efficiency, to guide target selection and technology selection.

| Response time | Throughput | Development efficiency | Resource utilization | Selection | Applications |
|-----|------|-----|-----|------|------|
| Low | High | Low | High | C++ Serving | High-performance scenarios: recall and ranking services of large-scale online recommendation systems |
| High | High | High | High | Python Pipeline Serving | Balances throughput and efficiency; asynchronous mode; fits single-operator multi-model combination scenarios |
| High | Low | High | Low | Python webserver | Low-traffic services or projects that require rapid iteration and model effect verification |
Performance index description:
1. Response time (ms): the average response time of a single request, reported at the 50th, 90th, 95th and 99th percentiles; lower is better.
2. Throughput (QPS/TPS): the efficiency of the service in processing requests, i.e. the number of requests processed per unit time; higher is better.
3. Development efficiency: completing the same work in different development languages takes different amounts of time, covering development, debugging and maintenance; higher is better.
4. Resource utilization: the resource utilization (CPU/GPU) of a deployed service; low utilization is a waste of resources, so higher is better.
Paddle Serving provides both RPC and HTTP protocols. The HTTP service is recommended for medium or small traffic services where latency is not a strict requirement. The RPC protocol is recommended for high-traffic services and services that require low latency. Users of the built-in distributed sparse parameter indexing service do not need to care about the underlying communication details. The following figure shows several scenarios in which users may want to use Paddle Serving; a minimal startup sketch follows the figure.
<p align="center">
<br>
<img src='user_groups.png' width = "700" height = "470">
<br>
<p>
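
As a quick illustration of the two protocols, the sketch below starts an RPC service and an HTTP web service from a saved servable model. The model directory is a placeholder, and the exact flags may vary between releases.

``` shell
# RPC service (recommended for high-traffic, low-latency scenarios);
# `your_servable_model` is a placeholder for a model saved by Paddle Serving.
python -m paddle_serving_server.serve --model your_servable_model --thread 10 --port 9292

# HTTP web service (recommended for small or medium traffic); passing --name
# registers an HTTP endpoint named after the service.
python -m paddle_serving_server.serve --model your_servable_model --port 9292 --name uci
```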
For servable models saved with the Paddle Serving IO API, users do not need any extra coding work to start a service, but may need some coding work on the client side. To develop a Web Service plugin, a user needs to implement the Web Service's preprocessing and postprocessing in order to obtain an HTTP service.

### 2.2 Industrial Features

In its top-level design, Paddle Serving takes into account a series of issues that different teams face in industrial scenarios, such as different operating systems, different development languages, multiple hardware devices, model conversion across deep learning platforms, distributed sparse parameter indexing, and cloud deployment.

> Cross-platform operation

Cross-platform means not depending on the operating system or on the hardware environment: an application developed under one operating system can still run under another. Therefore, the design must consider not only cross-platform development languages and components, but also differences in how compilers interpret code on different systems.

Docker is an open source application container engine that allows developers to package their applications and dependencies into a portable container and publish it to any popular Linux or Windows machine. We provide a variety of Docker images for the Paddle Serving framework; refer to the image list in 《[Docker Images](DOCKER_IMAGES.md)》 and select an image according to your usage. We also provide Docker usage documentation: 《[How to run PaddleServing in Docker](RUN_IN_DOCKER.md)》. Currently, the Python webserver mode can be deployed and run natively on both Linux and Windows; see 《[Paddle Serving for Windows Users](WINDOWS_TUTORIAL.md)》.
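
A typical Docker workflow looks like the sketch below; the registry, image name and tag are examples only, so check 《[Docker Images](DOCKER_IMAGES.md)》 for the images that apply to your environment.

``` shell
# Example only: pull a Paddle Serving image (see DOCKER_IMAGES.md for real tags).
docker pull hub.baidubce.com/paddlepaddle/serving:latest
# Start a container, map the service port, and open a shell inside it.
docker run -p 9292:9292 --name serving_test -dit hub.baidubce.com/paddlepaddle/serving:latest
docker exec -it serving_test bash
```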

> Support for client SDKs in multiple development languages

Paddle Serving provides client SDKs in four development languages: Python, C++, Java and Golang. The Golang SDK is under construction, and we hope that interested open source developers will help submit PRs.
feed_dict["sparse"] = [1, 1001, 100001]
feed_dict["dense"] = [0.2, 0.5, 0.1, 0.4, 0.11, 0.22]
fetch_map = client.predict(feed=feed_dict, fetch=["prob"])
```
The following code sample shows that Paddle Serving Client API connects to Server API with endpoint of the servers. To use the data parallelism ability during prediction, Paddle Serving Client allows users to define multiple server endpoints. + Python, Refer to the client example under python/examples or 4.2 web service example.
``` python + C++, Refer to《[从零开始写一个预测服务](deprecated/CREATING.md)
client = Client() + Java, Refer to《[Paddle Serving Client Java SDK](JAVA_SDK.md)
client.load_client_config('servable_client_configs') + Golang, Refer to《[How to use Go Client of Paddle Serving](deprecated/IMDB_GO_CLIENT.md)
client.connect(["127.0.0.1:9292"])
```
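
The sketch below shows the basic Python client flow; the endpoint, client configuration path and the `sparse`/`dense`/`prob` alias names are placeholders taken from this document's examples.

``` python
# Sketch: call a Paddle Serving RPC service from Python.
from paddle_serving_client import Client

client = Client()
client.load_client_config('servable_client_configs')  # placeholder client config path
client.connect(["127.0.0.1:9292"])                    # placeholder server endpoint

feed_dict = {}
feed_dict["sparse"] = [1, 1001, 100001]                # sparse id sequence input
feed_dict["dense"] = [0.2, 0.5, 0.1, 0.4, 0.11, 0.22]  # dense feature vector input
fetch_map = client.predict(feed=feed_dict, fetch=["prob"])
print(fetch_map)
```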

> Support for multiple hardware devices

The inference frameworks of well-known deep learning platforms only support CPU and GPU inference on the x86 platform. As the complexity of AI algorithms grows rapidly, chip compute power has greatly increased, which is accelerating the adoption of IoT applications and deployment on a variety of hardware. Paddle Serving integrates the high-performance inference engine Paddle Inference and the mobile inference engine Paddle Lite to provide inference services on multiple hardware devices. At present, in addition to x86 CPUs and GPUs, Paddle Serving can deploy inference services on ARM CPUs and Kunlun XPUs, and more hardware will be added to Paddle Serving in the future.

> Model conversion across deep learning platforms

Models trained on other deep learning platforms can be converted with the 《[PaddlePaddle/X2Paddle工具](https://github.com/PaddlePaddle/X2Paddle)》. Several mainstream CV models have been converted to Paddle models, and model conversion from TensorFlow, Caffe, ONNX and PyTorch has been tested. See 《[An End-to-end Tutorial from Training to Inference Service Deployment](TRAIN_TO_SERVICE.md)》.

Because the feed and fetch parameter information cannot be viewed directly in the model file, it is inconvenient for users to assemble request parameters. Therefore, Paddle Serving provides a tool that converts a Paddle model into the Serving format and generates a prototxt file containing the feed and fetch parameter information. The following is the generated prototxt file of the uci_housing example. For more conversion methods, refer to the document 《[How to save a servable model of Paddle Serving?](SAVE.md)》.
```
feed_var {
name: "x"
alias_name: "x"
is_lod_tensor: false
feed_type: 1
shape: 13
}
fetch_var {
name: "fc_0.tmp_1"
alias_name: "price"
is_lod_tensor: false
fetch_type: 1
shape: 1
}
```
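
For reference, saving a servable model (which produces a prototxt like the one above along with the server and client configuration directories) looks roughly like the sketch below, based on the `save_model` interface described in 《[How to save a servable model of Paddle Serving?](SAVE.md)》. `data` and `prediction` stand for the input and output variables of the network defined earlier in the training script.

``` python
# Sketch only: export a trained Paddle program as a servable model plus client config.
import paddle.fluid as fluid
import paddle_serving_client.io as serving_io

# `data` and `prediction` are the input/output Variables of the network defined above.
serving_io.save_model("serving_model",             # server-side model directory
                      "client_conf",               # client-side config directory
                      {"words": data},             # feed vars: alias name -> Variable
                      {"prediction": prediction},  # fetch vars: alias name -> Variable
                      fluid.default_main_program())
```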

> Distributed Sparse Parameter Indexing

Distributed sparse parameter indexing is commonly seen in advertising and recommendation scenarios, and is often coupled with distributed training. The figure below shows a typical architecture for online recommendation. When the recommendation service receives a request from a user, the system automatically collects a training log for offline distributed training. Meanwhile, the request is sent to the Paddle Serving server. For sparse features, the distributed sparse parameter indexing service is called so that sparse parameters can be looked up. The dense input features, together with the looked-up sparse model parameters, are fed into the Paddle Inference node of the DAG in the Paddle Serving server. The score is then returned through RPC to the product service for item ranking.

@@ -164,41 +99,56 @@
<img src='cube_eng.png' width = "450" height = "230">
<br>
<p>

Why do we need distributed sparse parameter indexing in Paddle Serving? 1) In some recommendation scenarios, the number of features can reach hundreds of billions, so a single node cannot hold all the parameters in random access memory. 2) Paddle Serving's distributed sparse parameter indexing couples with Paddle inference, so users do not need any extra work to get a low-latency inference engine with hundreds of billions of parameters.

----

## 3. C++ Serving Design

C++ Serving aims to provide high-performance inference services with high concurrency and low latency. Its network framework and core execution engine are written in C/C++ and provide strong industrial-grade application capabilities, including model management, model security and A/B testing.

### 3.1 Network Communication Mechanism

Paddle Serving adopts [brpc](https://github.com/apache/incubator-brpc) as the underlying communication layer. brpc is an open-source RPC communication library with high concurrency and low latency compared with other open source RPC libraries; millions of instances and thousands of services use brpc within Baidu.

### 3.2 Core Execution Engine

The core execution engine of Paddle Serving is a directed acyclic graph (DAG). Each node in the DAG represents a phase of the inference service, such as Paddle inference, data preprocessing or data postprocessing. The DAG fully parallelizes the computation and makes full use of computing resources. For example, when a user's input data needs to be fed into two models and the scores of the two models are then combined, the model scoring is parallelized through the DAG.
<p align="center"> <p align="center">
<br> <br>
<img src='abtest.png' width = "345" height = "230"> <img src='design_doc.png'">
<br> <br>
<p> <p>
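
For illustration, the server-side DAG is assembled from built-in operators through the Python server API. The sketch below, adapted from this project's examples, chains a reader, an optional distributed sparse-parameter lookup, Paddle inference and a response operator; the model directory and port are placeholders.

``` python
# Sketch: build the server-side DAG from built-in ops and start a service.
import paddle_serving_server as serving

op_maker = serving.OpMaker()
read_op = op_maker.create('general_reader')                 # parse request tensors
dist_kv_op = op_maker.create('general_dist_kv')             # optional: sparse parameter lookup
general_infer_op = op_maker.create('general_infer')         # run Paddle inference
general_response_op = op_maker.create('general_response')   # pack the response

op_seq_maker = serving.OpSeqMaker()
op_seq_maker.add_op(read_op)
op_seq_maker.add_op(dist_kv_op)
op_seq_maker.add_op(general_infer_op)
op_seq_maker.add_op(general_response_op)

server = serving.Server()
server.set_op_sequence(op_seq_maker.get_op_sequence())
server.set_num_threads(10)
server.load_model_config("your_servable_model")   # placeholder model directory
server.prepare_server(port=9292, device="cpu")
server.run_server()
```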
### 3.3 Model Management and Hot Reloading
C++ Serving supports model management, including managing multiple models and multiple versions of a model. To ensure the availability of the service, models need to be hot loaded without interrupting the service. Paddle Serving supports this feature and provides a tool that monitors newly produced models and updates the local model. Please refer to [Hot loading in Paddle Serving](HOT_LOADING_IN_SERVING.md) for specific examples.

### 3.4 Model Encryption Inference

Paddle Serving uses a symmetric encryption algorithm to encrypt the model and decrypts it in memory while the service loads the model. At present this provides basic model security capabilities and does not guarantee absolute model security; users can build on our design to achieve a higher level of security. See the documentation 《[Model Encryption Inference](ENCRYPTION.md)》.

### 3.5 A/B Test

After sufficient offline evaluation of the model, an online A/B test is usually needed to decide whether to enable the service on a large scale. The following figure shows the basic structure of an A/B test with Paddle Serving. After the client is configured accordingly, traffic is automatically distributed to different servers to achieve the A/B test. Please refer to [ABTEST in Paddle Serving](ABTEST_IN_PADDLE_SERVING.md) for specific examples.
<p align="center">
<br>
<img src='abtest.png' width = "345" height = "230">
<br>
<p>
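
On the client side, the traffic split is configured by registering weighted variants. The sketch below assumes the `add_variant(tag, endpoints, weight)` helper described in the A/B test documentation; the tags, endpoints and weights are hypothetical.

``` python
# Sketch only: send roughly 30% of requests to variant "m1" and 70% to "m2".
from paddle_serving_client import Client

client = Client()
client.load_client_config('serving_client_conf/serving_client_conf.prototxt')  # placeholder
client.add_variant("m1", ["127.0.0.1:8000"], 30)   # hypothetical endpoint and weight
client.add_variant("m2", ["127.0.0.1:9000"], 70)
client.connect()
```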

### 3.6 Micro Service Plugin

The underlying communication of Paddle Serving, like the core framework, is implemented in C++, so it is hard for users who are not familiar with C++ to implement new Paddle Serving server operators. An alternative is the lightweight Web Service in the Paddle Serving server, which can be viewed as a plugin: a user can implement complex data preprocessing and postprocessing logic there to build a complex AI service. If the AI service receives a large volume of traffic, it is worth re-implementing it with high-performance Paddle Serving server operators. The relationship between the Web Service and the RPC service is illustrated by the user-scenario figure in Section 2.

----

## 4. Python Webserver Design

### 4.1 Network Communication Mechanism

There are many open source frameworks for web services. Paddle Serving currently integrates the Flask framework, but this is not visible to users. In the future, a better-performing web framework may be provided as the underlying HTTP service engine.

### 4.2 Web Service Development

`WebService` is a base class that provides inheritable interfaces such as `preprocess` and `postprocess` for users to implement. In a subclass of `WebService`, users can define any functions they want, and the startup interface is the same as that of the RPC service.
``` python
from paddle_serving_server.web_service import WebService
@@ -229,15 +179,36 @@ imdb_service.prepare_dict({"dict_file_path": sys.argv[4]})
imdb_service.run_server()
```
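
The collapsed lines above come from the IMDB example. A self-contained sketch of the same pattern is shown below; the preprocessing logic, model directory and vocabulary path are placeholders, and the method signatures follow the web service examples shipped with this repository, so they may differ slightly between versions.

``` python
# Sketch: subclass WebService and plug in custom pre/post-processing.
from paddle_serving_server.web_service import WebService

class IMDBService(WebService):
    def prepare_dict(self, args={}):
        # Load resources needed by preprocessing, e.g. a vocabulary file.
        self.dict_file_path = args.get("dict_file_path")

    def preprocess(self, feed={}, fetch=[]):
        # Convert raw request fields into the model's feed variables here.
        return feed, fetch

    def postprocess(self, feed={}, fetch=[], fetch_map={}):
        # Rename or reshape model outputs before returning them to the caller.
        return fetch_map

imdb_service = IMDBService(name="imdb")
imdb_service.load_model_config("imdb_cnn_model")              # placeholder model directory
imdb_service.prepare_server(workdir="workdir", port=9292, device="cpu")
imdb_service.prepare_dict({"dict_file_path": "imdb.vocab"})   # placeholder vocab path
imdb_service.run_server()
```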

----

## 5. Python Pipeline Serving Design

End-to-end deep learning models cannot solve all problems at present; combining multiple deep learning models is still a common way to solve real-world problems.
### 5.1 Network Communication Mechanism
The network framework of Pipeline Serving uses gRPC and the gRPC gateway. The gRPC service receives RPC requests, while the gRPC gateway receives RESTful API requests and forwards them to the gRPC service through a reverse proxy. Therefore, the network layer of Pipeline Serving accepts both RPC and RESTful API requests.
<center>
<img src='pipeline_serving-image1.png' height = "250" align="middle"/>
</center>
### 5.2 Core Design And Use Cases
The core design of Pipeline Serving is a graph execution engine whose basic processing units are OPs and Channels; a directed acyclic graph can be built by combining them. For the design and usage documentation, see 《[Pipeline Serving](PIPELINE_SERVING.md)》.
<center>
<img src='pipeline_serving-image2.png' height = "300" align="middle"/>
</center>
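
For a flavor of the API, a minimal two-node pipeline might be wired up as in the sketch below. Class names and arguments follow the Pipeline Serving examples referenced in 《[Pipeline Serving](PIPELINE_SERVING.md)》; the endpoint, client configuration and pipeline config file are placeholders and may differ between releases.

``` python
# Sketch: compose a minimal pipeline of OPs and start the pipeline server.
from paddle_serving_server.pipeline import Op, RequestOp, ResponseOp, PipelineServer

read_op = RequestOp()                         # receives the request and feeds the graph
bow_op = Op(name="bow",
            input_ops=[read_op],
            server_endpoints=["127.0.0.1:9393"],  # placeholder model service endpoint
            fetch_list=["prediction"],
            client_config="imdb_bow_client_conf/serving_client_conf.prototxt",
            concurrency=1)
response_op = ResponseOp(input_ops=[bow_op])  # packs the result for the client

server = PipelineServer()
server.set_response_op(response_op)
server.prepare_server("config.yml")           # placeholder pipeline configuration
server.run_server()
```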

----

## 6. Future Plan

### 6.1 Auto Deployment on Cloud

To make deployment on public clouds easier, Paddle Serving plans to provide Kubernetes operators for submitting a service job.

### 6.2 Vector Indexing and Tree-based Indexing

In recommendation and advertisement systems, vector-based or tree-based indexing services are commonly used for candidate retrieval. These retrieval tasks will become built-in services of Paddle Serving.

### 6.3 Service Monitoring

Paddle Serving will integrate Prometheus monitoring, an open-source combination of monitoring, alerting and a time-series database, suitable for Kubernetes and Docker monitoring systems.
@@ -7,27 +7,27 @@
Paddle Serving is an online serving framework open-sourced by PaddlePaddle. Its long-term goal is to provide increasingly professional, reliable and easy-to-use services for the last mile of AI adoption.

- Industrial grade: To meet the requirements of industrial-grade online deployment of deep learning models, Paddle Serving provides many deployment capabilities needed in large-scale scenarios: 1) model management, model hot loading, model encryption and decryption; 2) cross-platform and multi-hardware deployment; 3) distributed sparse parameter indexing; 4) online A/B traffic testing.
- High performance: Inference performance is improved along two dimensions, low latency and high throughput: 1) the high-performance inference engine Paddle Inference is integrated; 2) Nvidia TensorRT is supported; 3) the high-performance network framework brpc is integrated; 4) the asynchronous Pipeline mode greatly improves throughput.
- Easy to use: So that Paddle users can deploy models at a very low cost, Paddle Serving provides deployment APIs that work seamlessly with the Paddle training framework; a typical model can be deployed as a service with a single command. More than 20 common model cases and documents are provided.
- Extensibility: Paddle Serving currently supports client SDKs in four languages (C++, Python, Golang and Java), and more languages will be supported in the future. Although deploying Paddle models is the core capability of Paddle Serving, users can easily embed other machine learning libraries for online inference.

----

## 2. Preliminary Design

Any excellent software product starts from user needs and has a clear positioning and a good preliminary design. Paddle Serving is no exception: it aims to provide increasingly professional, reliable and easy-to-use services for the last mile of AI adoption. We investigated a large number of user scenarios and abstracted them, for example: online services emphasize high concurrency and low average response time; offline services emphasize batch throughput and high resource utilization; and algorithm developers are good at using Python for model training and inference.

### 2.1 Design Selection

To meet user needs in different scenarios, Paddle Serving's product positioning uses lower-dimensional features, such as response time, throughput and development efficiency, to guide target selection and technology selection.
| Response time | Throughput | Development efficiency | Resource utilization | Selection | Applications |
|-----|------|-----|-----|------|------|
| Low | High | Low | High | C++ Serving | High-performance scenarios: recall and ranking services of large-scale online recommendation systems |
| High | High | Relatively high | High | Python Pipeline Serving | Balances throughput and efficiency; single-operator multi-model combination scenarios; asynchronous mode |
| High | Low | High | Low | Python webserver | Fast-iteration scenarios: small services, rapid iteration, model effect verification |
@@ -55,19 +55,19 @@
> Cross-platform operation

Cross-platform means not depending on the operating system or the hardware environment: an application developed on one operating system can still run on another. Therefore, the design must consider not only cross-platform development languages and components, but also differences in how compilers interpret code on different systems.

Docker is an open source application container engine that lets developers package their applications and dependencies into a portable container and publish it to any popular Linux or Windows machine. We provide a variety of Docker images for the Paddle Serving framework; refer to the image list in 《[Docker镜像](DOCKER_IMAGES_CN.md)》 and select an image according to your scenario. To make Docker easier to use, we provide the guide 《[如何在Docker中运行PaddleServing](RUN_IN_DOCKER_CN.md)》. Currently, the Python webserver mode can be deployed and run natively on both Linux and Windows; see 《[Windows平台使用Paddle Serving指导](WINDOWS_TUTORIAL_CN.md)》.

> Support for client SDKs in multiple development languages

Paddle Serving provides client SDKs in four development languages: Python, C++, Java and Golang. The Golang SDK is under construction, and interested open source developers are welcome to submit PRs.

+ Python: refer to the client examples under python/examples or the web service example in Section 4.2.
+ C++: refer to 《[从零开始写一个预测服务](deprecated/CREATING.md)》.
+ Java: refer to 《[Paddle Serving Client Java SDK](JAVA_SDK_CN.md)》.
+ Golang: refer to 《[如何在Paddle Serving使用Go Client](deprecated/IMDB_GO_CLIENT_CN.md)》.

> Support for multiple hardware devices

The inference frameworks of well-known deep learning platforms only support CPU and GPU inference on the x86 platform. As the complexity of AI algorithms grows rapidly and chip compute power greatly increases, IoT applications are being adopted faster and deployed on a variety of hardware. Paddle Serving integrates the high-performance inference engine Paddle Inference and the mobile inference engine Paddle Lite to provide inference services on multiple hardware devices. At present, in addition to x86 CPUs and GPUs, Paddle Serving can deploy inference services on ARM CPUs and Kunlun XPUs, and more hardware will be added to Paddle Serving in the future.
> Model conversion across deep learning platforms
@@ -105,15 +105,11 @@ fetch_var {
Distributed sparse parameter indexing usually appears in advertising and recommendation scenarios, and is combined with distributed training to form a complete offline-online integrated deployment. The figure below explains the flow: after the product's online service receives a user request, it sends the request to the inference service, and the system also records the request for training-log processing and joining. The offline distributed training system incrementally trains the model on the streaming training logs; the incrementally produced model is delivered to the distributed sparse parameter indexing service, while the corresponding dense model parameters are delivered to the online inference service. The online service consists of two parts: after extracting features from the user request, the features that require sparse parameter lookup are sent to the distributed sparse parameter indexing service, and the sparse parameters returned by that service are used in the subsequent deep learning model computation to complete the prediction.

----
## 3. C++ Serving Design

C++ Serving aims to deliver high-performance inference services with high concurrency and low latency. Its network framework and core execution engine are written in C/C++ and provide strong industrial-grade application capabilities, including model management, model security and A/B testing.

### 3.1 Communication Mechanism

C++ Serving uses [brpc](https://github.com/apache/incubator-brpc) for the underlying communication. brpc is an RPC communication library open-sourced by Baidu, featuring high concurrency and low latency; it already supports more than a million online inference instances and thousands of online inference services, inside Baidu and elsewhere, and is stable and reliable. Compared with the gRPC framework it has lower latency and higher concurrency; its drawback is weaker cross-OS-platform and cross-language support.
@@ -128,7 +124,7 @@
### 3.3 Model Management and Hot Loading

Paddle Serving's C++ engine supports model management, including managing multiple models and multiple versions of a model. To keep the inference service available while a model is being replaced, the model must be hot loaded without interrupting the service. Paddle Serving supports this feature and provides a tool that monitors newly produced models and updates the local model; for a concrete example see 《[Paddle Serving中的模型热加载](HOT_LOADING_IN_SERVING_CN.md)》.

### 3.4 Model Encryption and Decryption
@@ -210,3 +206,6 @@
### 6.2 Vector Indexing and Tree-based Indexing

In the recall systems of recommendation and advertising scenarios, fast vector-based retrieval or tree-based retrieval is usually required. Paddle Serving will integrate or extend retrieval engines in this area.

### 6.3 Service Monitoring

Paddle Serving will integrate Prometheus monitoring, an open-source combination of monitoring, alerting and a time-series database, suitable for Kubernetes and Docker monitoring systems.
@@ -67,7 +67,7 @@
### HTTP Inference Service

To start the CPU HTTP inference service, run
```
python bert_web_service.py bert_seq128_model/ 9292 #launch cpu inference service
```
Or, to start the GPU HTTP inference service, run
......
@@ -67,7 +67,7 @@
### Start the HTTP Inference Service

To start the CPU HTTP inference service, run
```
python bert_web_service.py bert_seq128_model/ 9292 # start the CPU inference service
```
......
@@ -14,7 +14,7 @@ tar xf criteo_ctr_demo_model.tar.gz
mv models/ctr_client_conf .
mv models/ctr_serving_model .
```
Directories named `ctr_serving_model` and `ctr_client_conf` will appear.

### Start RPC Inference Service
......
@@ -14,7 +14,7 @@ tar xf criteo_ctr_demo_model.tar.gz
mv models/ctr_client_conf .
mv models/ctr_serving_model .
```
The `ctr_serving_model` and `ctr_client_conf` directories will appear in the current directory.

### Start the RPC Inference Service
......