Commit 3349a920 authored by MRXLT

fix conflict

Parent 94de7c4a
...@@ -18,19 +18,19 @@

<h2 align="center">Motivation</h2>

We consider deploying deep learning inference service online to be a user-facing application in the future. **The goal of this project**: When you have trained a deep neural net with [Paddle](https://github.com/PaddlePaddle/Paddle), you can also deploy the model online easily. A demo of Paddle Serving is as follows:

<p align="center">
    <img src="doc/demo.gif" width="700">
</p>

<h2 align="center">Some Key Features</h2>

- Integrate with Paddle training pipeline seamlessly; most Paddle models can be deployed **with one line command**.
- **Industrial serving features** supported, such as model management, online loading, online A/B testing, etc.
- **Distributed Key-Value indexing** supported, which is especially useful for large-scale sparse features as model inputs.
- **Highly concurrent and efficient communication** between clients and servers supported.
- **Multiple programming languages** supported on client side, such as Golang, C++ and Python.
- **Extensible framework design** which can support model serving beyond Paddle.

<h2 align="center">Installation</h2>
...@@ -53,7 +53,7 @@ Paddle Serving provides HTTP and RPC based service for users to access

### HTTP service

Paddle Serving provides a built-in python module called `paddle_serving_server.serve` that can start an RPC service or an HTTP service with a one-line command. If we specify the argument `--name uci`, it means that we will have an HTTP service with a URL of `$IP:$PORT/uci/prediction`

``` shell
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292 --name uci
```

...@@ -75,7 +75,7 @@ curl -H "Content-Type:application/json" -X POST -d '{"x": [0.0137, -0.1136, 0.25

### RPC service

A user can also start an RPC service with `paddle_serving_server.serve`. The RPC service is usually faster than the HTTP service, although a user needs to do some coding based on Paddle Serving's python client API. Note that we do not specify `--name` here.

``` shell
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292
```
...@@ -239,26 +239,26 @@ curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serv

### New to Paddle Serving

- [How to save a servable model?](doc/SAVE.md)
- [An end-to-end tutorial from training to inference service deployment](doc/TRAIN_TO_SERVICE.md)
- [Write Bert-as-Service in 10 minutes](doc/BERT_10_MINS.md)

### Developers

- [How to config Serving native operators on server side?](doc/SERVER_DAG.md)
- [How to develop a new Serving operator?](doc/NEW_OPERATOR.md)
- [Golang client](doc/IMDB_GO_CLIENT.md)
- [Compile from source code](doc/COMPILE.md)

### About Efficiency

- [How to profile Paddle Serving latency?(Chinese)](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/util)
- [CPU Benchmarks(Chinese)](doc/BENCHMARKING.md)
- [GPU Benchmarks(Chinese)](doc/GPU_BENCHMARKING.md)

### FAQ

- [FAQ(Chinese)](doc/FAQ.md)

### Design

- [Design Doc](doc/DESIGN_DOC.md)
- [Design Doc(English)](doc/DESIGN_DOC_EN.md)

<h2 align="center">Community</h2>
......
<p align="center">
    <br>
<img src='https://paddle-serving.bj.bcebos.com/imdb-demo%2FLogoMakr-3Bd2NM-300dpi.png' width = "600" height = "130">
    <br>
<p>

<p align="center">
    <br>
    <a href="https://travis-ci.com/PaddlePaddle/Serving">
        <img alt="Build Status" src="https://img.shields.io/travis/com/PaddlePaddle/Serving/develop">
    </a>
    <img alt="Release" src="https://img.shields.io/badge/Release-0.0.3-yellowgreen">
    <img alt="Issues" src="https://img.shields.io/github/issues/PaddlePaddle/Serving">
    <img alt="License" src="https://img.shields.io/github/license/PaddlePaddle/Serving">
    <img alt="Slack" src="https://img.shields.io/badge/Join-Slack-green">
    <br>
<p>

<h2 align="center">Motivation</h2>

Paddle Serving aims to help deep learning developers easily deploy online inference services. **The goal of this project**: once a user has trained a deep neural network with [Paddle](https://github.com/PaddlePaddle/Paddle), they also have an inference service for that model.
<p align="center"> <p align="center">
<img src="doc/demo.gif" width="700"> <img src="doc/demo.gif" width="700">
</p> </p>
## 核心功能 <h2 align="center">核心功能</h2>
- 与Paddle训练紧密连接,绝大部分Paddle模型可以 **一键部署**. - 与Paddle训练紧密连接,绝大部分Paddle模型可以 **一键部署**.
- 支持 **工业级的服务能力** 例如模型管理,在线加载,在线A/B测试等. - 支持 **工业级的服务能力** 例如模型管理,在线加载,在线A/B测试等.
- 支持 **分布式键值对索引** 助力于大规模稀疏特征作为模型输入. - 支持 **分布式键值对索引** 助力于大规模稀疏特征作为模型输入.
...@@ -20,7 +33,7 @@ Paddle Serving 帮助深度学习开发者轻易部署在线预测服务。 ** ...@@ -20,7 +33,7 @@ Paddle Serving 帮助深度学习开发者轻易部署在线预测服务。 **
- 支持 **多种编程语言** 开发客户端,例如Golang,C++和Python. - 支持 **多种编程语言** 开发客户端,例如Golang,C++和Python.
- **可伸缩框架设计** 可支持不限于Paddle的模型服务. - **可伸缩框架设计** 可支持不限于Paddle的模型服务.
## 安装 <h2 align="center">安装</h2>
强烈建议您在Docker内构建Paddle Serving,请查看[如何在Docker中运行PaddleServing](doc/RUN_IN_DOCKER_CN.md) 强烈建议您在Docker内构建Paddle Serving,请查看[如何在Docker中运行PaddleServing](doc/RUN_IN_DOCKER_CN.md)
...@@ -29,17 +42,51 @@ pip install paddle-serving-client ...@@ -29,17 +42,51 @@ pip install paddle-serving-client
pip install paddle-serving-server pip install paddle-serving-server
``` ```
## 快速启动示例 <h2 align="center">快速启动示例</h2>
<h3 align="center">波士顿房价预测</h3>
``` shell ``` shell
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/uci_housing.tar.gz wget --no-check-certificate https://paddle-serving.bj.bcebos.com/uci_housing.tar.gz
tar -xzf uci_housing.tar.gz tar -xzf uci_housing.tar.gz
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292
``` ```
Python客户端请求 Paddle Serving 为用户提供了基于 HTTP 和 RPC 的服务
<h3 align="center">HTTP服务</h3>
Paddle Serving提供了一个名为`paddle_serving_server.serve`的内置python模块,可以使用单行命令启动RPC服务或HTTP服务。如果我们指定参数`--name uci`,则意味着我们将拥有一个HTTP服务,其URL为$IP:$PORT/uci/prediction`。
``` shell
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292 --name uci
```
<center>
| Argument | Type | Default | Description |
|--------------|------|-----------|--------------------------------|
| `thread` | int | `4` | Concurrency of current service |
| `port` | int | `9292` | Exposed port of current service to users|
| `name` | str | `""` | Service name, can be used to generate HTTP request url |
| `model` | str | `""` | Path of paddle model directory to be served |
We use the `curl` command to send an HTTP POST request to the service just started. Users can also call a python library to send HTTP POST requests; please refer to [requests](https://requests.readthedocs.io/en/master/).
</center>
``` shell
curl -H "Content-Type:application/json" -X POST -d '{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332], "fetch":["price"]}' http://127.0.0.1:9292/uci/prediction
```
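
The same request can also be sent from Python. Below is a minimal sketch using the third-party [requests](https://requests.readthedocs.io/en/master/) library mentioned above; the URL and payload simply mirror the `curl` example, and the printed structure of the response is an assumption.

``` python
# A sketch of the curl request above, using the `requests` library.
import requests

payload = {
    "x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583,
          -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332],
    "fetch": ["price"],
}
# The service started with `--name uci` exposes /uci/prediction on port 9292.
resp = requests.post("http://127.0.0.1:9292/uci/prediction", json=payload)
print(resp.json())  # expected to contain the fetched "price" values
```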
<h3 align="center">RPC服务</h3>
用户还可以使用`paddle_serving_server.serve`启动RPC服务。 尽管用户需要基于Paddle Serving的python客户端API进行一些开发,但是RPC服务通常比HTTP服务更快。需要指出的是这里我们没有指定`--name`。
``` shell
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292
```
``` python
# A user can visit rpc service through paddle_serving_client API
from paddle_serving_client import Client

client = Client()
...@@ -51,24 +98,105 @@ fetch_map = client.predict(feed={"x": data}, fetch=["price"])
print(fetch_map)
```
Here the `client.predict` function has two arguments. `feed` is a python dict with the model's input variable aliases and values. `fetch` lists the prediction variables to be returned from the server. In the example, the aliases `"x"` and `"price"` were assigned when the servable model was saved during training.

<h2 align="center">Pre-built Services with Paddle Serving</h2>

<h3 align="center">Chinese Word Segmentation</h3>

- **Introduction**:
``` shell
This example deploys a Chinese word segmentation HTTP service with one command
```

- **Download Servable Package**:
``` shell
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/lac/lac_model_jieba_web.tar.gz
```
- **Start Web Service**:
``` shell
tar -xzf lac_model_jieba_web.tar.gz
python lac_web_service.py jieba_server_model/ lac_workdir 9292
```
- **Client Request Example**:
``` shell
curl -H "Content-Type:application/json" -X POST -d '{"words": "我爱北京天安门", "fetch":["word_seg"]}' http://127.0.0.1:9292/lac/prediction
```
- **Returned Result Example**:
``` shell
{"word_seg":"我|爱|北京|天安门"}
```
<h3 align="center">图像分类模型</h4>
- **介绍**:
``` shell
图像分类模型由Imagenet数据集训练而成,该服务会返回一个标签及其概率
```
- **Download Servable Package**:
``` shell
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/imagenet-example/imagenet_demo.tar.gz
```
- **Start Web Service**:
``` shell
tar -xzf imagenet_demo.tar.gz
python image_classification_service_demo.py resnet50_serving_model
```
- **Client Request Example**:
<p align="center">
<br>
<img src='https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg' width = "200" height = "200">
<br>
<p>
``` shell
curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg", "fetch": ["score"]}' http://127.0.0.1:9292/image/prediction
```
- **Returned Result Example**:
``` shell
{"label":"daisy","prob":0.9341403245925903}
```
<h2 align="center">文档</h2>
### 新手教程
- [怎样保存用于Paddle Serving的模型?](doc/SAVE_CN.md)
- [端到端完成从训练到部署全流程](doc/TRAIN_TO_SERVICE_CN.md)
- [十分钟构建Bert-As-Service](doc/BERT_10_MINS_CN.md)
### 开发者教程
- [如何配置Server端的计算图?](doc/SERVER_DAG_CN.md)
- [如何开发一个新的General Op?](doc/NEW_OPERATOR_CN.md)
- [如何在Paddle Serving使用Go Client?](doc/IMDB_GO_CLIENT_CN.md)
- [如何编译PaddleServing?](doc/COMPILE_CN.md)
### 关于Paddle Serving性能
- [如何测试Paddle Serving性能?](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/util)
- [CPU版Benchmarks](doc/BENCHMARKING.md)
- [GPU版Benchmarks](doc/GPU_BENCHMARKING.md)
### FAQ
- [常见问答](doc/deprecated/FAQ.md)
## 文档 ### 设计文档
- [Paddle Serving设计文档](doc/DESIGN_DOC_CN.md)
<h2 align="center">Community</h2>

### Slack

To connect with other users and developers, welcome to join our [Slack channel](https://paddleserving.slack.com/archives/CUBPKHKMJ)

### Contribution

If you want to contribute code to Paddle Serving, please refer to the [Contribution Guidelines](doc/CONTRIBUTE.md)

### Feedback

For any feedback or bugs, please file an issue on [GitHub Issue](https://github.com/PaddlePaddle/Serving/issues)

### License

[Apache 2.0 License](https://github.com/PaddlePaddle/Serving/blob/develop/LICENSE)
# ABTEST in Paddle Serving

([简体中文](./ABTEST_IN_PADDLE_SERVING_CN.md)|English)

This document uses an example of a text classification task based on the IMDB dataset to show how to build an A/B Test framework using Paddle Serving. The structural relationship between the client and servers in the example is shown in the figure below.

<img src="abtest.png" style="zoom:33%;" />
......
# How to do ABTEST with Paddle Serving

(简体中文|[English](./ABTEST_IN_PADDLE_SERVING.md))

This document uses an example of a text classification task based on the IMDB dataset to show how to build an A/B Test framework using Paddle Serving. The structural relationship between the client and servers in the example is shown in the figure below.

<img src="abtest.png" style="zoom:33%;" />
......
# Paddle Serving Design

([简体中文](./DESIGN_CN.md)|English)

## 1. Background

PaddlePaddle is Baidu's open-source machine learning framework, which supports a wide range of customized development of deep learning models; Paddle Serving is the online inference part of Paddle, which seamlessly connects with Paddle model training and provides cloud services for machine learning inference. This article describes the Paddle Serving design from the bottom up, covering the model, service, and access layers.

1. The model is the core of Paddle Serving inference, including the management of model data and inference computation;
2. The inference framework encapsulates model inference computation and exposes an RPC interface to connect with different upstream systems;
3. The inference service SDK provides a set of access frameworks

Together these form a complete serving solution.

## 2. Terms Explanation
- **baidu-rpc**: Baidu's official open-source RPC framework, supporting multiple common communication protocols and providing a customizable interface experience based on protobuf
- **Variant**: Paddle Serving's abstraction of a minimal prediction cluster, characterized by all internal instances (replicas) being completely homogeneous and logically corresponding to one fixed version of a model
- **Endpoint**: Multiple Variants form an Endpoint. Logically, an Endpoint represents one model, and the Variants within the Endpoint represent different versions
- **OP**: In PaddlePaddle, an OP encapsulates one numerical computation; in Paddle Serving it represents a basic business operation, whose core interface is inference. An OP declares the upstream OPs it depends on, chaining multiple OPs into a workflow
- **Channel**: An abstraction of all request-level intermediate data of an OP; OPs exchange data through Channels
- **Bus**: Manages all Channels in a thread, and schedules the access relationship between the OP and Channel sets according to the DAG dependency graph
- **Stage**: A set of OPs in the topology described by the workflow's DAG that belong to the same phase and can be executed in parallel
- **Node**: An OP operator instance composed of an OP operator class combined with parameter configuration; also an execution unit in a Workflow
- **Workflow**: Executes the inference interface of each OP in order, according to the topology described by the DAG
- **DAG/Workflow**: Consists of several interdependent Nodes. Each Node can obtain the Request object through a specific interface; a node's OP obtains the output objects of its predecessor OPs through the dependency relationship; the output of the last Node is the Response object by default
- **Service**: Encapsulates one request. Several Workflows can be configured; they share the current request's Request object and execute in parallel/serially, and the Response is finally written to the corresponding output slot. One Paddle Serving process can configure multiple Service interfaces, and the upstream selects the Service interface to access by ServiceName
## 3. Python Interface Design

### 3.1 Core Targets:

Provide a set of Paddle Serving dynamic libraries that support remote inference services for general models saved by Paddle, calling the various underlying PaddleServing functions through the Python Interface.

### 3.2 General Model:

Models that the Paddle Inference Library can run inference on; models saved during training include Feed Variables and Fetch Variables

### 3.3 Overall Design:

- The user starts the Client and Server through the Python Client. The Python API has a function to check whether the interconnection and the models to be accessed match.
- The Python API calls into the pybind bindings for the client and server functions implemented by Paddle Serving; the information exchanged between them is transmitted via RPC.
- The Client Python API currently has two simple functions, load_inference_conf and predict, which load the model to be predicted and run inference, respectively (see the sketch after this list).
- The Server Python API is mainly responsible for loading the inference model and generating the various configurations required by Paddle Serving, including engines, workflow, resources, etc.
### 3.4 Server Interface

...@@ -49,10 +51,10 @@

<img src='client_inferface.png' width = "600" height = "200">

### 3.6 Client IO Used During Training

PaddleServing provides a save-model interface that can be used during training. It is basically the same as Paddle's save-inference-model interface; `feed_var_dict` and `fetch_var_dict` can alias the input and output variables, and the configuration that serving needs to read at startup is saved in the output directories on the client and server sides.
``` python
def save_model(server_model_folder,
...@@ -62,29 +64,29 @@ def save_model(server_model_folder,
               main_program=None)
```
## 4. Paddle Serving Underlying Framework

![Paddle-Serving Overall Architecture](framework.png)

**Model Management Framework**: Connects model files of multiple machine learning platforms and provides a unified inference interface

**Business Scheduling Framework**: Abstracts the computation logic of different inference models and provides a general DAG scheduling framework that connects different operators through a DAG graph to complete an inference service together. This abstraction lets users implement their own computation logic conveniently while making operators easy to share. (When users build their own inference services, a large part of the work is building the DAG and providing operator implementations.)

**Predict Service**: Encapsulation of the externally provided inference service interface; the communication fields with the client are defined through protobuf.

### 4.1 Model Management Framework

The model management framework is responsible for managing the models trained by machine learning frameworks. It can be abstracted into three levels: model loading, model data, and model inference.

#### Model Loading

Loads models from disk into memory; supports multiple versions, hot loading, incremental updates, etc.

#### Model Data

The model's in-memory data structure, integrating the fluid inference lib

#### Inferencer

Provides a unified inference interface upward to the prediction service
```C++
class FluidFamilyCore {
...@@ -94,54 +96,54 @@ class FluidFamilyCore {
};
```
### 4.2 Business Scheduling Framework

#### 4.2.1 Inference Service

With reference to the abstract model-computation idea of the TensorFlow framework, the business logic is abstracted into a DAG graph, driven by configuration to generate a workflow, skipping C++ code compilation. Each concrete step of the service corresponds to a concrete OP, and an OP can configure the upstream OPs it depends on. Message passing between OPs is implemented uniformly by the thread-level Bus and Channel mechanisms. For example, the service process of a simple inference service can be abstracted into 3 steps: read request data -> call the inference interface -> write back the inference result, correspondingly implemented as 3 OPs: ReaderOp -> ClassifyOp -> WriteOp

![Infer Service](predict-service.png)

Regarding the dependencies between OPs, and building workflows from OPs, refer to the relevant chapters of [从零开始写一个预测服务](./deprecated/CREATING.md) (simplified Chinese version)

Server instance perspective

![Server instance perspective](server-side.png)

#### 4.2.2 Paddle Serving Multi-Service Mechanism

![Paddle Serving multi-service](multi-service.png)

Paddle Serving instances can load multiple models at the same time, and each model uses a Service (and its configured workflow) to serve requests. You can refer to the [service configuration file in the demo example](../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for a serving instance

#### 4.2.3 Hierarchical Relationship of Business Scheduling

From the client's perspective, a Paddle Serving service can be divided into three levels from top to bottom: Service, Endpoint, and Variant.

![Call hierarchy relationship](multi-variants.png)

One Service corresponds to one inference model, and there is one endpoint under the model. Different versions of the model are implemented through multiple variants under the endpoint:

The same model's inference service can configure multiple variants, and each variant has its own downstream IP list. Client code can configure relative weights for each variant to adjust the traffic ratio (refer to the description of variant_weight_list in [Client Configuration](./deprecated/CLIENT_CONFIGURE.md) section 3.2).

![Client-side proxy function](client-side-proxy.png)

## 5. User Interface

Under the premise of meeting certain interface specifications, the service framework places no restrictions on user data fields, so as to accommodate the different business interfaces of various inference services. Baidu-rpc inherits the Protobuf service interface, and users describe the Request and Response business interfaces following the Protobuf syntax specification. Paddle Serving is built on the baidu-rpc framework and supports this feature by default.

No matter how the communication protocol changes, the framework only needs to ensure that the communication protocol and the business data format stay synchronized between client and server to guarantee normal communication. This information can be broken down as follows:

- Protocol: Header information agreed in advance between server and client to ensure mutual recognition of the data format. Paddle Serving uses Protobuf as the basic communication format
- Data: Describes the Request and Response interfaces, e.g. the sample data to be predicted and the scores returned by inference. It includes:
  - Data fields: Field definitions contained in the Request and Response data structures
  - Description interface: Similar to the protocol interface; Protobuf is supported by default

### 5.1 Data Compression Method

Baidu-rpc has built-in data compression methods such as snappy, gzip, and zlib, which can be configured in the configuration file (refer to [Client Configuration](./deprecated/CLIENT_CONFIGURE.md) section 3.1 for an introduction to compress_type)

### 5.2 C++ SDK API Interface
```C++
class PredictorApi {
...@@ -176,7 +178,7 @@ class Predictor {
```
### 5.3 Interfaces Related to OP
```C++
class Op {
...@@ -258,7 +260,7 @@ class Op {
```
### 5.4 Interfaces Related to the Framework

Service
......
# Paddle Serving Design

(简体中文|[English](./DESIGN.md))

## 1. Background

PaddlePaddle is Baidu's open-source machine learning framework, which supports a wide range of customized development of deep learning models; Paddle Serving is the online inference part of Paddle, which seamlessly connects with Paddle model training and provides cloud services for machine learning inference. This article describes the Paddle Serving design from the bottom up, covering the model, service, and access layers.

1. The model is the core of Paddle Serving inference, including the management of model data and inference computation;
2. The inference framework encapsulates model inference computation and exposes an RPC interface to connect with different upstream systems;
3. The inference service SDK provides a set of access frameworks

Together these form a complete serving solution.

## 2. Terms Explanation

- **baidu-rpc**: Baidu's official open-source RPC framework, supporting multiple common communication protocols and providing a customizable interface experience based on protobuf
- **Variant**: Paddle Serving's abstraction of a minimal prediction cluster, characterized by all internal instances (replicas) being completely homogeneous and logically corresponding to one fixed version of a model
- **Endpoint**: Multiple Variants form an Endpoint. Logically, an Endpoint represents one model, and the Variants within the Endpoint represent different versions
- **OP**: In PaddlePaddle, an OP encapsulates one numerical computation; in Paddle Serving it represents a basic business operation, whose core interface is inference. An OP declares the upstream OPs it depends on, chaining multiple OPs into a workflow
- **Channel**: An abstraction of all request-level intermediate data of an OP; OPs exchange data through Channels
- **Bus**: Manages all Channels in a thread, and schedules the access relationship between the OP and Channel sets according to the DAG dependency graph
- **Stage**: A set of OPs in the topology described by the workflow's DAG that belong to the same phase and can be executed in parallel
- **Node**: An OP operator instance composed of an OP operator class combined with parameter configuration; also an execution unit in a Workflow
- **Workflow**: Executes the inference interface of each OP in order, according to the topology described by the DAG
- **DAG/Workflow**: Consists of several interdependent Nodes. Each Node can obtain the Request object through a specific interface; a node's OP obtains the output objects of its predecessor OPs through the dependency relationship; the output of the last Node is the Response object by default
- **Service**: Encapsulates one request. Several Workflows can be configured; they share the current request's Request object and execute in parallel/serially, and the Response is finally written to the corresponding output slot. One Paddle Serving process can configure multiple Service interfaces, and the upstream selects the Service interface to access by ServiceName

## 3. Python Interface Design

### 3.1 Core Targets:

Provide a complete set of Paddle Serving dynamic libraries that support remote inference services for general models saved by Paddle, calling the various underlying PaddleServing functions through the Python Interface.

### 3.2 General Model:

Models that the Paddle Inference Library can run inference on; models saved during training include Feed Variables and Fetch Variables

### 3.3 Overall Design:

- The user starts the Client and Server through the Python Client. The Python API has a function to check whether the interconnection and the models to be accessed match.
- The Python API calls into the pybind bindings for the client and server functions implemented by Paddle Serving; the information exchanged between them is transmitted via RPC.
- The Client Python API currently has two simple functions, load_inference_conf and predict, which load the model to be predicted and run inference, respectively.
- The Server Python API is mainly responsible for loading the inference model and generating the various configurations required by Paddle Serving, including engines, workflow, resources, etc.

### 3.4 Server Interface

![Server Interface](server_interface.png)

### 3.5 Client Interface

<img src='client_inferface.png' width = "600" height = "200">

### 3.6 Client IO Used During Training

PaddleServing provides a save-model interface that can be used during training. It is basically the same as Paddle's save-inference-model interface; `feed_var_dict` and `fetch_var_dict` can alias the input and output variables, and the configuration that serving needs to read at startup is saved in the output directories on the client and server sides.
``` python
def save_model(server_model_folder,
               client_config_folder,
               feed_var_dict,
               fetch_var_dict,
               main_program=None)
```
## 4. Paddle Serving Underlying Framework

![Paddle-Serving Overall Architecture](framework.png)

**Model Management Framework**: Connects model files of multiple machine learning platforms and provides a unified inference interface

**Business Scheduling Framework**: Abstracts the computation logic of different inference models and provides a general DAG scheduling framework that connects different operators through a DAG graph to complete an inference service together. This abstraction lets users implement their own computation logic conveniently while making operators easy to share. (When users build their own inference services, a large part of the work is building the DAG and providing operator implementations.)

**PredictService**: Encapsulation of the externally provided inference service interface; the communication fields with the client are defined through protobuf.

### 4.1 Model Management Framework

The model management framework is responsible for managing the models trained by machine learning frameworks. It can be abstracted into three levels: model loading, model data, and model inference.

#### Model Loading

Loads models from disk into memory; supports multiple versions, hot loading, incremental updates, etc.

#### Model Data

The model's in-memory data structure, integrating the fluid inference lib

#### Inferencer

Provides a unified inference interface upward to the prediction service
```C++
class FluidFamilyCore {
  virtual bool Run(const void* in_data, void* out_data);
  virtual int create(const std::string& data_path);
  virtual int clone(void* origin_core);
};
```
### 4.2 Business Scheduling Framework

#### 4.2.1 Inference Service

With reference to the abstract model-computation idea of the TensorFlow framework, the business logic is abstracted into a DAG graph, driven by configuration to generate a workflow, skipping C++ code compilation. Each concrete step of the service corresponds to a concrete OP, and an OP can configure the upstream OPs it depends on. Message passing between OPs is implemented uniformly by the thread-level Bus and Channel mechanisms. For example, the service process of a simple inference service can be abstracted into 3 steps: read request data -> call the inference interface -> write back the inference result, correspondingly implemented as 3 OPs: ReaderOp -> ClassifyOp -> WriteOp

![Infer Service](predict-service.png)

Regarding the dependencies between OPs, and building workflows from OPs, refer to the relevant chapters of [从零开始写一个预测服务](CREATING.md)

Server instance perspective

![Server instance perspective](server-side.png)

#### 4.2.2 Paddle Serving Multi-Service Mechanism

![Paddle Serving multi-service](multi-service.png)

Paddle Serving instances can load multiple models at the same time, and each model uses a Service (and its configured workflow) to serve requests. You can refer to the [service configuration file in the demo example](../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for a serving instance

#### 4.2.3 Hierarchical Relationship of Business Scheduling

From the client's perspective, a Paddle Serving service can be divided into three levels from top to bottom: Service, Endpoint, and Variant.

![Call hierarchy relationship](multi-variants.png)

One Service corresponds to one inference model, and there is one endpoint under the model. Different versions of the model are implemented through multiple variants under the endpoint:

The same model's inference service can configure multiple variants, and each variant has its own downstream IP list. Client code can configure relative weights for each variant to adjust the traffic ratio (refer to the description of variant_weight_list in [Client Configuration](./deprecated/CLIENT_CONFIGURE.md) section 3.2).

![Client-side proxy function](client-side-proxy.png)

## 5. User Interface

Under the premise of meeting certain interface specifications, the service framework places no restrictions on user data fields, so as to accommodate the different business interfaces of various inference services. Baidu-rpc inherits the Protobuf service interface, and users describe the Request and Response business interfaces following the Protobuf syntax specification. Paddle Serving is built on the baidu-rpc framework and supports this feature by default.

No matter how the communication protocol changes, the framework only needs to ensure that the communication protocol and the business data format stay synchronized between client and server to guarantee normal communication. This information can be broken down as follows:

- Protocol: Header information agreed in advance between server and client to ensure mutual recognition of the data format. Paddle Serving uses Protobuf as the basic communication format
- Data: Describes the Request and Response interfaces, e.g. the sample data to be predicted and the scores returned by inference. It includes:
  - Data fields: Field definitions contained in the Request and Response data structures
  - Description interface: Similar to the protocol interface; Protobuf is supported by default

### 5.1 Data Compression Method

Baidu-rpc has built-in data compression methods such as snappy, gzip, and zlib, which can be configured in the configuration file (refer to [Client Configuration](./deprecated/CLIENT_CONFIGURE.md) section 3.1 for an introduction to compress_type)

### 5.2 C++ SDK API Interface
```C++
class PredictorApi {
 public:
  int create(const char* path, const char* file);
  int thrd_initialize();
  int thrd_clear();
  int thrd_finalize();
  void destroy();

  Predictor* fetch_predictor(std::string ep_name);
  int free_predictor(Predictor* predictor);
};

class Predictor {
 public:
  // synchronize interface
  virtual int inference(google::protobuf::Message* req,
                        google::protobuf::Message* res) = 0;

  // asynchronize interface
  virtual int inference(google::protobuf::Message* req,
                        google::protobuf::Message* res,
                        DoneType done,
                        brpc::CallId* cid = NULL) = 0;

  // synchronize interface
  virtual int debug(google::protobuf::Message* req,
                    google::protobuf::Message* res,
                    butil::IOBufBuilder* debug_os) = 0;
};
```
### 5.3 Interfaces Related to OP
```C++
class Op {
  // ------Getters for Channel/Data/Message of dependent OP-----

  // Get the Channel object of dependent OP
  Channel* mutable_depend_channel(const std::string& op);

  // Get the Channel object of dependent OP
  const Channel* get_depend_channel(const std::string& op) const;

  template <typename T>
  T* mutable_depend_argument(const std::string& op);

  template <typename T>
  const T* get_depend_argument(const std::string& op) const;

  // -----Getters for Channel/Data/Message of current OP----

  // Get pointer to the protobuf message of current OP
  google::protobuf::Message* mutable_message();

  // Get pointer to the protobuf message of current OP
  const google::protobuf::Message* get_message() const;

  // Get the template class data object of current OP
  template <typename T>
  T* mutable_data();

  // Get the template class data object of current OP
  template <typename T>
  const T* get_data() const;

  // ---------------- Other base class members ----------------

  int init(Bus* bus,
           Dag* dag,
           uint32_t id,
           const std::string& name,
           const std::string& type,
           void* conf);

  int deinit();

  int process(bool debug);

  // Get the input object
  const google::protobuf::Message* get_request_message();

  const std::string& type() const;

  uint32_t id() const;

  // ------------------ OP Interface -------------------

  // Get the derived Channel object of current OP
  virtual Channel* mutable_channel() = 0;

  // Get the derived Channel object of current OP
  virtual const Channel* get_channel() const = 0;

  // Release the derived Channel object of current OP
  virtual int release_channel() = 0;

  // Inference interface
  virtual int inference() = 0;

  // ------------------ Conf Interface -------------------
  virtual void* create_config(const configure::DAGNode& conf) { return NULL; }

  virtual void delete_config(void* conf) {}

  virtual void set_config(void* conf) { return; }

  // ------------------ Metric Interface -------------------
  virtual void regist_metric() { return; }
};
```
### 5.4 Interfaces Related to the Framework
Service
```C++
class InferService {
 public:
  static const char* tag() { return "service"; }
  int init(const configure::InferService& conf);
  int deinit() { return 0; }
  int reload();
  const std::string& name() const;
  const std::string& full_name() const { return _infer_service_format; }

  // Execute each workflow serially
  virtual int inference(const google::protobuf::Message* request,
                        google::protobuf::Message* response,
                        butil::IOBufBuilder* debug_os = NULL);

  int debug(const google::protobuf::Message* request,
            google::protobuf::Message* response,
            butil::IOBufBuilder* debug_os);
};

class ParallelInferService : public InferService {
 public:
  // Execute workflows in parallel
  int inference(const google::protobuf::Message* request,
                google::protobuf::Message* response,
                butil::IOBufBuilder* debug_os) {
    return 0;
  }
};
```
ServerManager
```C++
class ServerManager {
 public:
  typedef google::protobuf::Service Service;
  ServerManager();

  static ServerManager& instance() {
    static ServerManager server;
    return server;
  }
  static bool reload_starting() { return _s_reload_starting; }
  static void stop_reloader() { _s_reload_starting = false; }
  int add_service_by_format(const std::string& format);
  int start_and_wait();
};
```
DAG
```C++
class Dag {
 public:
  EdgeMode parse_mode(std::string& mode);  // NOLINT
  int init(const char* path, const char* file, const std::string& name);
  int init(const configure::Workflow& conf, const std::string& name);
  int deinit();
  uint32_t nodes_size();
  const DagNode* node_by_id(uint32_t id);
  const DagNode* node_by_id(uint32_t id) const;
  const DagNode* node_by_name(std::string& name);  // NOLINT
  const DagNode* node_by_name(const std::string& name) const;
  uint32_t stage_size();
  const DagStage* stage_by_index(uint32_t index);
  const std::string& name() const { return _dag_name; }
  const std::string& full_name() const { return _dag_name; }
  void regist_metric(const std::string& service_name);
};
```
Workflow
```C++
class Workflow {
 public:
  Workflow() {}
  static const char* tag() { return "workflow"; }

  // Each workflow object corresponds to an independent
  // configure file, so you can share the object between
  // different apps.
  int init(const configure::Workflow& conf);

  DagView* fetch_dag_view(const std::string& service_name);
  int deinit() { return 0; }
  void return_dag_view(DagView* view);
  int reload();

  const std::string& name() { return _name; }
  const std::string& full_name() { return _name; }
};
```
This diff is collapsed.
# Paddle Serving Design Doc

(简体中文|[English](./DESIGN_DOC.md))

## 1. Overall Design Goals

- Long-term mission: Paddle Serving is an online serving framework open-sourced by PaddlePaddle, whose long-term goal is to provide increasingly professional, reliable, and easy-to-use services for the last mile of putting AI into production.

- Industrial grade: To meet the requirements of deploying industrial-grade deep learning models online, Paddle Serving provides many capabilities needed in large-scale scenarios: 1) distributed sparse parameter indexing; 2) highly concurrent underlying communication; 3) model management, online A/B traffic testing, and model hot loading.

- Simple and easy to use: To let Paddle users deploy models at minimal cost, Paddle Serving provides a set of inference-deployment APIs that connect seamlessly with the Paddle training framework; an ordinary model can be deployed as a service with a single command.

- Extensible: Paddle Serving currently supports C++, Python, and Golang clients, and clients in more languages will be added for different kinds of users. Although supporting the deployment of Paddle models is the core capability in the Paddle Serving framework design, users can easily plug in other machine learning libraries to deploy online inference.

## 2. Module Design and Implementation

### 2.1 Python API Design

#### 2.1.1 Saving the Trained Model

Paddle model inference needs to focus on: 1) the model's input variables; 2) the model's output variables; 3) the model structure and parameters. The Paddle Serving Python API provides an interface for saving the model during training and packages the configurations that Paddle Serving needs at deployment time. An example is as follows:
``` python
import paddle_serving_client.io as serving_io
serving_io.save_model("serving_model", "client_conf",
                      {"words": data}, {"prediction": prediction},
                      fluid.default_main_program())
```
In the code example, `{"words": data}` and `{"prediction": prediction}` specify the model's input and output respectively, with `"words"` and `"prediction"` being aliases of the input and output variables. Aliases are designed so that developers can remember the fields corresponding to the inputs and outputs of their trained models. `data` and `prediction` are [Variable](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Variable_cn.html#variable)s from the Paddle training process, usually representing a tensor ([Tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Tensor_cn.html#tensor)) or a variable-length tensor ([LodTensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/basic_concept/lod_tensor.html#lodtensor)). After the save command is called, two directories are generated according to the user-specified `"serving_model"` and `"client_conf"`, with the following contents:
``` shell
.
├── client_conf
│   ├── serving_client_conf.prototxt
│   └── serving_client_conf.stream.prototxt
└── serving_model
├── embedding_0.w_0
├── fc_0.b_0
├── fc_0.w_0
├── fc_1.b_0
├── fc_1.w_0
├── fc_2.b_0
├── fc_2.w_0
├── lstm_0.b_0
├── lstm_0.w_0
├── __model__
├── serving_server_conf.prototxt
└── serving_server_conf.stream.prototxt
```
Here `"serving_client_conf.prototxt"` and `"serving_server_conf.prototxt"` are the configurations that the Paddle Serving client and server need to load, and `"serving_client_conf.stream.prototxt"` and `"serving_server_conf.stream.prototxt"` are binary forms of those configuration files. The other contents saved under `"serving_model"` are identical to the model files saved by Paddle. In the future we may save the servable configuration directly inside the Paddle framework, making configuration saving transparent to users.
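
To show how the generated files are consumed, here is a minimal sketch, assuming the `"client_conf"` directory produced above and a server already listening on an illustrative endpoint:

``` python
# Sketch: the client loads the generated prototxt, which also lets the API
# check feed/fetch names and shapes against what was saved at training time.
from paddle_serving_client import Client

client = Client()
client.load_client_config("client_conf/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])  # illustrative endpoint
```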
#### 2.1.2 Loading the Model on the Server Side

The server-side inference logic can be defined manually through the Paddle Serving Server API, for example:
``` python
import paddle_serving_server as serving
op_maker = serving.OpMaker()
read_op = op_maker.create('general_reader')
dist_kv_op = op_maker.create('general_dist_kv')
general_infer_op = op_maker.create('general_infer')
general_response_op = op_maker.create('general_response')
op_seq_maker = serving.OpSeqMaker()
op_seq_maker.add_op(read_op)
op_seq_maker.add_op(dist_kv_op)
op_seq_maker.add_op(general_infer_op)
op_seq_maker.add_op(general_response_op)
```
The main OPs currently supported by Paddle Serving on the server side are listed below:

<center>

| Op Name | Description |
|--------------|------|
| `general_reader` | Read Op for general data formats |
| `general_infer` | Paddle inference Op for general data formats |
| `general_response` | Response Op for general data formats |
| `general_dist_kv` | Distributed key-value indexing Op |
</center>
The inference engine in Paddle Serving supports running on both CPU and GPU, and there are corresponding serving packages and images for each. Whether the model runs on CPU or GPU, serving an ordinary model can be started with one command:
``` shell
python -m paddle_serving_server.serve --model your_servable_model --thread 10 --port 9292
```
``` shell
python -m paddle_serving_server_gpu.serve --model your_servable_model --thread 10 --port 9292
```
The options of the launch command are listed below:

<center>

| Argument | Type | Default | Description |
|--------------|------|-----------|--------------------------------|
| `thread` | int | `4` | Server-side concurrency; usually set to the number of CPU cores |
| `port` | int | `9292` | Port exposed to users |
| `name` | str | `""` | Service name; when specified, an HTTP service is launched directly |
| `model` | str | `""` | Path of the server-side model directory |
| `gpu_ids` | str | `""` | Only available in paddle_serving_server_gpu; same semantics as CUDA_VISIBLE_DEVICES |
</center>
For example, `python -m paddle_serving_server.serve --model your_servable_model --thread 10 --port 9292` corresponds to the following concrete server-side configuration:
``` python
from paddle_serving_server import OpMaker, OpSeqMaker, Server
op_maker = OpMaker()
read_op = op_maker.create('general_reader')
general_infer_op = op_maker.create('general_infer')
general_response_op = op_maker.create('general_response')
op_seq_maker = OpSeqMaker()
op_seq_maker.add_op(read_op)
op_seq_maker.add_op(general_infer_op)
op_seq_maker.add_op(general_response_op)
server = Server()
server.set_op_sequence(op_seq_maker.get_op_sequence())
server.set_num_threads(10)
server.load_model_config(your_servable_model)
server.prepare_server(port=9292, device="cpu")
server.run_server()
```
#### 2.1.3 Client Access API

Paddle Serving supports two protocols for remote service access: RPC and HTTP. Users accessing via RPC can use the Python Client API provided by Paddle Serving and access the service by customizing the input data format. The following example explains how the Paddle Serving Client defines input data. When saving a deployable model, an alias needs to be specified for each input, e.g. `sparse` and `dense`; the corresponding data can be a sparse ID sequence `[1, 1001, 100001]` or a dense vector `[0.2, 0.5, 0.1, 0.4, 0.11, 0.22]`. In the current client design, sparse ID sequences support Paddle's `lod_level=0` and `lod_level=1` cases, i.e. tensors and one-dimensional variable-length tensors; dense vectors support `N-D Tensor`s. Users do not need to specify the shape of the input data explicitly; the Paddle Serving Client API checks it against the input shapes recorded when the configuration was saved.
``` python
feed_dict["sparse"] = [1, 1001, 100001]
feed_dict["dense"] = [0.2, 0.5, 0.1, 0.4, 0.11, 0.22]
fetch_map = client.predict(feed=feed_dict, fetch=["prob"])
```
The code for connecting the client to the server usually only needs to load the client-side configuration saved with the model and specify the endpoints to access. To retain the ability to scale access in a data-parallel way internally, the Paddle Serving Client allows multiple service endpoints to be defined.
``` python
client = Client()
client.load_client_config('servable_client_configs')
client.connect(["127.0.0.1:9292"])
```
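
For example, listing several endpoints is enough to spread requests in a data-parallel way; the IPs below are illustrative:

``` python
# Sketch: multiple endpoints for the same servable model.
client.connect(["10.0.0.1:9292", "10.0.0.2:9292"])
```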
### 2.2 Underlying Communication Mechanism

Paddle Serving uses [baidu-rpc](https://github.com/apache/incubator-brpc) for its underlying communication. baidu-rpc is an RPC communication library open-sourced by Baidu with high concurrency and low latency; it has supported millions of online inference instances and thousands of online inference services, including inside Baidu, and is stable and reliable.

### 2.3 Core Execution Engine

The core execution engine of Paddle Serving is a directed acyclic graph in which each node represents one stage of the inference service; for example, computing the model's prediction score is one such stage. A DAG lets concurrent nodes make full use of the computing resources within a deployment instance and shortens latency. For example, when the same input must be fed to two different models and the two models' scores are combined by a weighted sum, the two scoring stages can run concurrently through the DAG topology.
<p align="center">
<br>
<img src='design_doc.png'>
<br>
<p>
### 2.4 Microservice Plugin Mode

Since Paddle Serving's underlying communication uses C++-based components and the core framework is also written in C/C++, when users want to define complex pre- and post-processing logic on the server side, one option is to modify the underlying Paddle Serving framework and recompile from source. Another option is to embed a lightweight web service on the server side and implement the more complex preprocessing logic there, building a logically complete service. When traffic exceeds what the web service can handle, developers have good reason to implement some high-performance C++ preprocessing logic and embed it into Serving's native service library. The relationship between web services and RPC services, and how they can be combined, is described under `User Types` below.

## 3. Industrial-Grade Features

### 3.1 Distributed Sparse Parameter Indexing

Distributed sparse parameter indexing usually appears in advertising and recommendation, and works with distributed training to form a complete offline-online deployment pipeline. The figure below explains the process: after the product's online service receives a user request, it forwards the request to the inference service, while the system records the user's request for training-log processing and joining. The offline distributed training system performs incremental training on the streaming training logs; the incrementally produced model is delivered to the distributed sparse parameter indexing service, while the corresponding dense model parameters are delivered to the online inference service. The online service consists of two parts: after extracting features from the user request, it sends the features that require sparse parameter lookups to the distributed sparse parameter indexing service, then continues the deep learning model computation with the returned sparse parameters to complete the prediction.
<p align="center">
<br>
<img src='cube_eng.png' width = "450" height = "230">
<br>
<p>
Why use the distributed sparse parameter indexing service provided by Paddle Serving? 1) In some recommendation scenarios, the model's input feature scale can reach hundreds of billions, and a single machine cannot hold a terabyte-scale model in memory, so distributed storage is required. 2) The distributed sparse parameter indexing service provided by Paddle Serving can issue concurrent requests to multiple nodes, completing inference with low latency.

### 3.2 Model Management, Online A/B Traffic Testing, Model Hot Loading

Paddle Serving's C++ engine supports model management, online A/B traffic testing, and model hot loading. The configuration of these features is not yet fully exposed in the Python API; stay tuned.

## 4. User Types

Paddle Serving offers its users two access protocols, RPC and HTTP. The HTTP protocol is aimed more at AI service developers with small-to-medium traffic and no strict latency requirements. The RPC protocol is aimed at users with larger traffic and stricter latency requirements; in addition, the RPC client may itself sit inside a larger system's service, a situation that fits the RPC service provided by Paddle Serving very well. For the distributed sparse parameter indexing service, Paddle Serving users do not need to care about the underlying details; the call is essentially an RPC service calling another RPC service. The figure below shows several scenarios in which the current design of Paddle Serving might be used.
<p align="center">
<br>
<img src='user_groups.png' width = "700" height = "470">
<br>
<p>
For an ordinary model (specifically, a model saved through the IO module provided by Serving, without post-processing of the model), users can launch an RPC service without extra development, but need to write some client-side code to use the service. For web service development, users should develop the pre- and post-processing in the Web Service framework provided by Paddle Serving, implementing the whole HTTP service.

### 4.1 Web Service Development

There are many open-source web frameworks. Paddle Serving currently integrates the Flask framework, but this is invisible to users; a better-performing web framework may be provided as the underlying HTTP serving engine in the future. Users need to inherit from WebService to transform the inputs and outputs of the RPC service.
``` python
from paddle_serving_server.web_service import WebService
from imdb_reader import IMDBDataset
import sys


class IMDBService(WebService):
    def prepare_dict(self, args={}):
        if len(args) == 0:
            exit(-1)
        self.dataset = IMDBDataset()
        self.dataset.load_resource(args["dict_file_path"])

    def preprocess(self, feed={}, fetch=[]):
        if "words" not in feed:
            exit(-1)
        res_feed = {}
        res_feed["words"] = self.dataset.get_words_only(feed["words"])[0]
        return res_feed, fetch


imdb_service = IMDBService(name="imdb")
imdb_service.load_model_config(sys.argv[1])
imdb_service.prepare_server(
    workdir=sys.argv[2], port=int(sys.argv[3]), device="cpu")
imdb_service.prepare_dict({"dict_file_path": sys.argv[4]})
imdb_service.run_server()
```
`WebService`, as the base class, provides the `preprocess` interface for turning the HTTP request received from the user into RPC inputs, and the `postprocess` interface for post-processing the results returned by the RPC call. Subclasses inheriting from `WebService` can define various member functions. `WebService` is started with the same launch API as an ordinary RPC service.
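
As an illustration, a subclass might also override `postprocess` to reshape results before they are returned over HTTP. The sketch below is hypothetical: the signature mirrors the `preprocess` style above, and the `prediction` alias and thresholding are assumptions, not part of the IMDB example.

``` python
# Hypothetical sketch: post-process the RPC result before the HTTP response.
class IMDBServiceWithPost(IMDBService):
    def postprocess(self, feed={}, fetch=[], fetch_map={}):
        # e.g. turn the raw score into a labeled field
        score = fetch_map["prediction"][0]
        fetch_map["sentiment"] = "positive" if score > 0.5 else "negative"
        return fetch_map
```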
## 5. Future Plan

### 5.1 Opening Up DAG Structure Definition

The currently exposed python API only supports users defining Sequential execution flows; corresponding user APIs need to be added for complex computation inside the Server process.

### 5.2 Automatic Cloud Deployment

To make it easier for users to deploy Paddle inference models online, Paddle Serving will provide task-orchestration tooling for the Kubernetes ecosystem in upcoming versions.

### 5.3 Vector Retrieval and Tree-Structured Retrieval

Recall systems in recommendation and advertising scenarios usually need fast vector-based retrieval or fast tree-structured retrieval; Paddle Serving will integrate or extend retrieval engines in this area.
# Paddle Serving
([简体中文](./README_CN.md)|English)
Paddle Serving is PaddlePaddle's online inference service framework, which helps developers easily implement remote prediction services that call deep learning models from mobile or server ends. At present, Paddle Serving mainly supports models trained with PaddlePaddle and can be used together with the Paddle training framework to deploy inference services quickly. Paddle Serving is designed around common industrial deep learning model deployment scenarios, with features including multi-model management, model hot loading, [Baidu-rpc](https://github.com/apache/incubator-brpc)-based high-concurrency and low-latency responses, and online model A/B tests. The API that cooperates with the Paddle training framework lets users transition seamlessly between training and remote deployment, improving the efficiency of putting deep learning models into production.
------------
## Quick Start
Paddle Serving's current develop version supports a lightweight Python API for fast predictions and integrates with Paddle training. We take the classic Boston house price prediction as an example to fully explain model training on a single machine and model deployment using Paddle Serving.
#### Install
It is highly recommended that you build Paddle Serving inside Docker; please read [How to run PaddleServing in Docker](RUN_IN_DOCKER.md)
```
pip install paddle-serving-client
pip install paddle-serving-server
```
#### Training Script
``` python
import sys
import paddle
import paddle.fluid as fluid

train_reader = paddle.batch(paddle.reader.shuffle(
    paddle.dataset.uci_housing.train(), buf_size=500), batch_size=16)

test_reader = paddle.batch(paddle.reader.shuffle(
    paddle.dataset.uci_housing.test(), buf_size=500), batch_size=16)

x = fluid.data(name='x', shape=[None, 13], dtype='float32')
y = fluid.data(name='y', shape=[None, 1], dtype='float32')

y_predict = fluid.layers.fc(input=x, size=1, act=None)
cost = fluid.layers.square_error_cost(input=y_predict, label=y)
avg_loss = fluid.layers.mean(cost)
sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.01)
sgd_optimizer.minimize(avg_loss)

place = fluid.CPUPlace()
feeder = fluid.DataFeeder(place=place, feed_list=[x, y])
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

import paddle_serving_client.io as serving_io

for pass_id in range(30):
    for data_train in train_reader():
        avg_loss_value, = exe.run(
            fluid.default_main_program(),
            feed=feeder.feed(data_train),
            fetch_list=[avg_loss])

serving_io.save_model(
    "serving_server_model", "serving_client_conf",
    {"x": x}, {"y": y_predict}, fluid.default_main_program())
```
#### Server Side Code
``` python
import sys
from paddle_serving.serving_server import OpMaker
from paddle_serving.serving_server import OpSeqMaker
from paddle_serving.serving_server import Server
op_maker = OpMaker()
read_op = op_maker.create('general_reader')
general_infer_op = op_maker.create('general_infer')
op_seq_maker = OpSeqMaker()
op_seq_maker.add_op(read_op)
op_seq_maker.add_op(general_infer_op)
server = Server()
server.set_op_sequence(op_seq_maker.get_op_sequence())
server.load_model_config(sys.argv[1])
server.prepare_server(workdir="work_dir1", port=9393, device="cpu")
server.run_server()
```
#### Launch Server End
``` shell
python test_server.py serving_server_model
```
#### Client Prediction
``` python
from paddle_serving_client import Client
import paddle
import sys

client = Client()
client.load_client_config(sys.argv[1])
client.connect(["127.0.0.1:9393"])

test_reader = paddle.batch(paddle.reader.shuffle(
    paddle.dataset.uci_housing.test(), buf_size=500), batch_size=1)

for data in test_reader():
    fetch_map = client.predict(feed={"x": data[0][0]}, fetch=["y"])
    print("{} {}".format(fetch_map["y"][0], data[0][1][0]))
```
### Document
[Design Doc](DESIGN.md)
[FAQ](./deprecated/FAQ.md)
### Senior Developer Guidelines
[Compile Tutorial](COMPILE.md)
## Contribution
If you want to make contributions to Paddle Serving, please refer to [CONTRIBUTE](CONTRIBUTE.md)
# Paddle Serving

(简体中文|[English](./README.md))

Paddle Serving is PaddlePaddle's online inference service framework, which helps developers easily implement remote prediction services that call deep learning models from mobile or server ends. At present, Paddle Serving mainly supports models trained with PaddlePaddle and can be used together with the Paddle training framework to deploy inference services quickly. Paddle Serving is designed around common industrial deep learning model deployment scenarios, with features including multi-model management, model hot loading, [Baidu-rpc](https://github.com/apache/incubator-brpc)-based high-concurrency and low-latency responses, and online model A/B tests. The API that cooperates with the Paddle training framework lets users transition seamlessly between training and remote deployment, improving the efficiency of putting deep learning models into production.

------------

...@@ -10,7 +12,7 @@ Paddle Serving's current develop version supports a lightweight Python API for fast prediction

#### Install

It is strongly recommended that you build Paddle Serving inside Docker; please see [How to run PaddleServing in Docker](RUN_IN_DOCKER_CN.md)
```
pip install paddle-serving-client
...@@ -105,13 +107,13 @@ for data in test_reader():
### Document

[Design Doc](DESIGN_CN.md)

[FAQ](./deprecated/FAQ.md)

### Senior Developer Guidelines

[Compile Guide](COMPILE_CN.md)

## Contribution

If you want to contribute to Paddle Serving, please refer to the [Contribution Guidelines](CONTRIBUTE.md)
# How to run PaddleServing in Docker

([简体中文](./RUN_IN_DOCKER_CN.md)|English)

## Requirements

Docker (GPU version requires nvidia-docker to be installed on the GPU machine)
......
# How to run PaddleServing in Docker

(简体中文|[English](RUN_IN_DOCKER.md))

## Requirements

Docker (the GPU version requires nvidia-docker to be installed on the GPU machine)
......
This diff is collapsed.
...@@ -57,8 +57,7 @@ def single_func(idx, resource):
                        os.getpid(),
                        int(round(b_start * 1000000)),
                        int(round(b_end * 1000000))))
                result = client.predict(feed=feed_batch, fetch=fetch)
            else:
                print("unsupport batch size {}".format(args.batch_size))
......
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=doc-string-missing
import os
import sys
from paddle_serving_server_gpu import OpMaker
from paddle_serving_server_gpu import OpSeqMaker
from paddle_serving_server_gpu import Server
op_maker = OpMaker()
read_op = op_maker.create('general_reader')
general_dist_kv_infer_op = op_maker.create('general_dist_kv_infer')
response_op = op_maker.create('general_response')
op_seq_maker = OpSeqMaker()
op_seq_maker.add_op(read_op)
op_seq_maker.add_op(general_dist_kv_infer_op)
op_seq_maker.add_op(response_op)
server = Server()
server.set_op_sequence(op_seq_maker.get_op_sequence())
server.set_num_threads(4)
server.load_model_config(sys.argv[1])
server.prepare_server(workdir="work_dir1", port=9292, device="cpu")
server.run_server()
...@@ -55,6 +55,7 @@ class OpMaker(object):
            "general_text_reader": "GeneralTextReaderOp",
            "general_text_response": "GeneralTextResponseOp",
            "general_single_kv": "GeneralSingleKVOp",
            "general_dist_kv_infer": "GeneralDistKVInferOp",
            "general_dist_kv": "GeneralDistKVOp"
        }
...@@ -104,6 +105,7 @@ class Server(object):
        self.infer_service_fn = "infer_service.prototxt"
        self.model_toolkit_fn = "model_toolkit.prototxt"
        self.general_model_config_fn = "general_model.prototxt"
        self.cube_config_fn = "cube.conf"
        self.workdir = ""
        self.max_concurrency = 0
        self.num_threads = 4
...@@ -184,6 +186,11 @@ class Server(object):
                  "w") as fout:
            fout.write(str(self.model_conf))
        self.resource_conf = server_sdk.ResourceConf()
        for workflow in self.workflow_conf.workflows:
            for node in workflow.nodes:
                if "dist_kv" in node.name:
                    self.resource_conf.cube_config_path = workdir
                    self.resource_conf.cube_config_file = self.cube_config_fn
        self.resource_conf.model_toolkit_path = workdir
        self.resource_conf.model_toolkit_file = self.model_toolkit_fn
        self.resource_conf.general_model_path = workdir
......
...@@ -48,6 +48,30 @@ function rerun() {
    exit 1
}
function build_app() {
    local TYPE=$1
    local DIRNAME=build-app-$TYPE
    mkdir $DIRNAME # pwd: /Serving
    cd $DIRNAME # pwd: /Serving/build-app-$TYPE
    pip install numpy sentencepiece
    case $TYPE in
        CPU|GPU)
            cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
                  -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
                  -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
                  -DAPP=ON ..
            rerun "make -j2 >/dev/null" 3 # due to some network reasons, compilation may fail
            pip install -U python/dist/paddle_serving_app* >/dev/null
            ;;
        *)
            echo "error type"
            exit 1
            ;;
    esac
    echo "build app $TYPE part finished as expected."
    cd .. # pwd: /Serving
}
function build_client() {
    local TYPE=$1
    local DIRNAME=build-client-$TYPE
...@@ -145,7 +169,7 @@ function python_test_fit_a_line() {
    sleep 5 # wait for the server to start
    check_cmd "python test_client.py uci_housing_client/serving_client_conf.prototxt > /dev/null"
    kill_server_process
    # test web
    unsetproxy # maybe the proxy is used on iPipe, which makes web-test failed.
    check_cmd "python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9393 --thread 2 --gpu_ids 0 --name uci > /dev/null &"
...@@ -184,14 +208,14 @@ function python_run_criteo_ctr_with_cube() {
    check_cmd "mv models/ctr_serving_model_kv ./"
    check_cmd "mv models/data ./cube/"
    check_cmd "mv models/ut_data ./"
    cp ../../../build-server-$TYPE/output/bin/cube* ./cube/
    mkdir -p $PYTHONROOT/lib/python2.7/site-packages/paddle_serving_server/serving-cpu-avx-openblas-0.1.3/
    yes | cp ../../../build-server-$TYPE/output/demo/serving/bin/serving $PYTHONROOT/lib/python2.7/site-packages/paddle_serving_server/serving-cpu-avx-openblas-0.1.3/
    sh cube_prepare.sh &
    check_cmd "mkdir work_dir1 && cp cube/conf/cube.conf ./work_dir1/"
    python test_server.py ctr_serving_model_kv &
    check_cmd "python test_client.py ctr_client_conf/serving_client_conf.prototxt ./ut_data >score"
    tail -n 2 score | awk 'NR==1'
    AUC=$(tail -n 2 score | awk 'NR==1')
    VAR2="0.67" #TODO: temporarily relax the threshold to 0.67
    RES=$( echo "$AUC>$VAR2" | bc )
...@@ -219,7 +243,7 @@ function python_run_criteo_ctr_with_cube() {
    check_cmd "python test_client.py ctr_client_conf/serving_client_conf.prototxt ./ut_data >score"
    tail -n 2 score | awk 'NR==1'
    AUC=$(tail -n 2 score | awk 'NR==1')
    VAR2="0.67" #TODO: temporarily relax the threshold to 0.67
    RES=$( echo "$AUC>$VAR2" | bc )
    if [[ $RES -eq 0 ]]; then
        echo "error with criteo_ctr_with_cube inference auc test, auc should > 0.70"
...@@ -253,6 +277,7 @@ function main() {
    init # pwd: /Serving
    build_client $TYPE # pwd: /Serving
    build_server $TYPE # pwd: /Serving
    build_app $TYPE # pwd: /Serving
    python_run_test $TYPE # pwd: /Serving
    echo "serving $TYPE part finished as expected."
}
......