diff --git a/README.md b/README.md
index 297a6cf7901d0f18418b573a622a55f33bbed2cc..02187085126e57a98647aedb2e50c2446995a6a5 100644
--- a/README.md
+++ b/README.md
@@ -18,19 +18,19 @@
+
+``` shell
+curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg", "fetch": ["score"]}' http://127.0.0.1:9292/image/prediction
+```
+- **返回结果示例**:
+``` shell
+{"label":"daisy","prob":0.9341403245925903}
+```
+
diff --git a/doc/ABTEST_IN_PADDLE_SERVING_CN.md b/doc/ABTEST_IN_PADDLE_SERVING_CN.md
index d31ddba6f72dfc23fa15defeda23468ab1785e62..e32bf783fcde20bb5dff3d2addaf764838975a81 100644
--- a/doc/ABTEST_IN_PADDLE_SERVING_CN.md
+++ b/doc/ABTEST_IN_PADDLE_SERVING_CN.md
@@ -1,5 +1,7 @@
# 如何使用Paddle Serving做ABTEST
+(简体中文|[English](./ABTEST_IN_PADDLE_SERVING.md))
+
该文档将会用一个基于IMDB数据集的文本分类任务的例子,介绍如何使用Paddle Serving搭建A/B Test框架,例中的Client端、Server端结构如下图所示。
diff --git a/doc/BERT_10_MINS.md b/doc/BERT_10_MINS.md
index 706c14bb116beca04eca94c27cc4f15f424ef0de..e668b3207c5228309d131e2353e815d26c8d4625 100644
--- a/doc/BERT_10_MINS.md
+++ b/doc/BERT_10_MINS.md
@@ -1,11 +1,14 @@
-## 十分钟构建Bert-As-Service
+## Build Bert-As-Service in 10 minutes
-Bert-As-Service的目标是给定一个句子,服务可以将句子表示成一个语义向量返回给用户。[Bert模型](https://arxiv.org/abs/1810.04805)是目前NLP领域的热门模型,在多种公开的NLP任务上都取得了很好的效果,使用Bert模型计算出的语义向量来做其他NLP模型的输入对提升模型的表现也有很大的帮助。Bert-As-Service可以让用户很方便地获取文本的语义向量表示并应用到自己的任务中。为了实现这个目标,我们通过四个步骤说明使用Paddle Serving在十分钟内就可以搭建一个这样的服务。示例中所有的代码和文件均可以在Paddle Serving的[示例](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert)中找到。
+([简体中文](./BERT_10_MINS_CN.md)|English)
-#### Step1:保存可服务模型
+The goal of Bert-As-Service is that, given a sentence, the service returns a semantic vector representation of the sentence to the user. The [Bert model](https://arxiv.org/abs/1810.04805) is a popular model in the NLP field and has achieved good results on a variety of public NLP tasks. Semantic vectors computed by the Bert model, when used as input to other NLP models, can also greatly improve the performance of those models. Bert-As-Service allows users to easily obtain the semantic vector representation of text and apply it to their own tasks. To achieve this goal, we show in four steps how to build such a service with Paddle Serving within ten minutes. All the code and files in this example can be found in the [example](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert) directory of Paddle Serving.
-Paddle Serving支持基于Paddle进行训练的各种模型,并通过指定模型的输入和输出变量来保存可服务模型。为了方便,我们可以从paddlehub加载一个已经训练好的bert中文模型,并利用两行代码保存一个可部署的服务,服务端和客户端的配置分别放在`bert_seq20_model`和`bert_seq20_client`文件夹。
+#### Step1: Save the serviceable model
+Paddle Serving supports various models trained with Paddle and saves a servable model by specifying the input and output variables of the model. For convenience, we can load a pre-trained Chinese BERT model from PaddleHub and save a deployable service with two lines of code. The server-side and client-side configurations are placed in the `bert_seq20_model` and `bert_seq20_client` folders, respectively.
+
+[//file]:#bert_10.py
``` python
import paddlehub as hub
model_name = "bert_chinese_L-12_H-768_A-12"
@@ -23,33 +26,35 @@ serving_io.save_model("bert_seq20_model", "bert_seq20_client",
feed_dict, fetch_dict, program)
```
-#### Step2:启动服务
+#### Step2: Launch Service
+[//file]:#server.sh
``` shell
python -m paddle_serving_server_gpu.serve --model bert_seq20_model --thread 10 --port 9292 --gpu_ids 0
```
+| Parameters | Meaning |
+| ---------- | ---------------------------------------- |
+| model | server configuration and model file path |
+| thread | server-side threads |
+| port | server port number |
+| gpu_ids | GPU index number |
-| 参数 | 含义 |
-| ------- | -------------------------- |
-| model | server端配置与模型文件路径 |
-| thread | server端线程数 |
-| port | server端端口号 |
-| gpu_ids | GPU索引号 |
-
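If no GPU is available, the CPU server package can presumably be launched in the same way (this assumes `paddle_serving_server` is installed; the flags mirror the GPU command above):

``` shell
python -m paddle_serving_server.serve --model bert_seq20_model --thread 10 --port 9292
```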
-#### Step3:客户端数据预处理逻辑
+#### Step3: Data Preprocessing Logic on the Client Side
-Paddle Serving内建了很多经典典型对应的数据预处理逻辑,对于中文Bert语义表示的计算,我们采用paddle_serving_app下的ChineseBertReader类进行数据预处理,开发者可以很容易获得一个原始的中文句子对应的多个模型输入字段。
+Paddle Serving has built-in data preprocessing logic for many classical tasks. For computing the Chinese Bert semantic representation, we use the ChineseBertReader class under paddle_serving_app for data preprocessing, so developers can easily obtain the multiple model input fields corresponding to a raw Chinese sentence.
-安装paddle_serving_app
+Install paddle_serving_app
+[//file]:#pip_app.sh
```shell
pip install paddle_serving_app
```
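As a rough, hypothetical standalone snippet (mirroring the client script in Step4 below), the reader turns one raw sentence into the feed fields expected by the model:

``` python
from paddle_serving_app import ChineseBertReader

reader = ChineseBertReader()
# process() returns a dict of model input fields for a single raw sentence,
# e.g. input_ids / position_ids / segment_ids / input_mask
feed_dict = reader.process("欢迎使用百度飞桨")
print(feed_dict.keys())
```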
-#### Step4:客户端访问
+#### Step4: Client Access to the Service
-客户端脚本 bert_client.py内容如下
+The content of the client-side script bert_client.py is as follows:
+[//file]:#bert_client.py
``` python
import os
import sys
@@ -68,16 +73,33 @@ for line in sys.stdin:
result = client.predict(feed=feed_dict, fetch=fetch)
```
-执行
+Run:
+[//file]:#bert_10_cli.sh
```shell
cat data.txt | python bert_client.py
```
-从data.txt文件中读取样例,并将结果打印到标准输出。
+This reads samples from data.txt and prints the results to standard output.
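For a quick smoke test, data.txt can be any plain text file with one sentence per line, for example:

```shell
echo "欢迎使用百度飞桨" > data.txt
cat data.txt | python bert_client.py
```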
-### 性能测试
+### Benchmark
-我们基于V100对基于Padde Serving研发的Bert-As-Service的性能进行测试并与基于Tensorflow实现的Bert-As-Service进行对比,从用户配置的角度,采用相同的batch size和并发数进行压力测试,得到4块V100下的整体吞吐性能数据如下。
+We benchmarked the Bert-As-Service built on Paddle Serving on V100 GPUs and compared it with a Tensorflow-based Bert-As-Service. From the user configuration point of view, the same batch size and concurrency were used for the stress test. The overall throughput obtained on 4 V100 cards is shown below.

+
+
diff --git a/doc/BERT_10_MINS_CN.md b/doc/BERT_10_MINS_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..e7bca9d2885b421f46dcf7af74b8967fb6d8f2ec
--- /dev/null
+++ b/doc/BERT_10_MINS_CN.md
@@ -0,0 +1,85 @@
+## 十分钟构建Bert-As-Service
+
+(简体中文|[English](./BERT_10_MINS.md))
+
+Bert-As-Service的目标是给定一个句子,服务可以将句子表示成一个语义向量返回给用户。[Bert模型](https://arxiv.org/abs/1810.04805)是目前NLP领域的热门模型,在多种公开的NLP任务上都取得了很好的效果,使用Bert模型计算出的语义向量来做其他NLP模型的输入对提升模型的表现也有很大的帮助。Bert-As-Service可以让用户很方便地获取文本的语义向量表示并应用到自己的任务中。为了实现这个目标,我们通过四个步骤说明使用Paddle Serving在十分钟内就可以搭建一个这样的服务。示例中所有的代码和文件均可以在Paddle Serving的[示例](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert)中找到。
+
+#### Step1:保存可服务模型
+
+Paddle Serving支持基于Paddle进行训练的各种模型,并通过指定模型的输入和输出变量来保存可服务模型。为了方便,我们可以从paddlehub加载一个已经训练好的bert中文模型,并利用两行代码保存一个可部署的服务,服务端和客户端的配置分别放在`bert_seq20_model`和`bert_seq20_client`文件夹。
+
+``` python
+import paddlehub as hub
+model_name = "bert_chinese_L-12_H-768_A-12"
+module = hub.Module(model_name)
+inputs, outputs, program = module.context(
+ trainable=True, max_seq_len=20)
+feed_keys = ["input_ids", "position_ids", "segment_ids",
+ "input_mask", "pooled_output", "sequence_output"]
+fetch_keys = ["pooled_output", "sequence_output"]
+feed_dict = dict(zip(feed_keys, [inputs[x] for x in feed_keys]))
+fetch_dict = dict(zip(fetch_keys, [outputs[x] for x in fetch_keys]))
+
+import paddle_serving_client.io as serving_io
+serving_io.save_model("bert_seq20_model", "bert_seq20_client",
+ feed_dict, fetch_dict, program)
+```
+
+#### Step2:启动服务
+
+``` shell
+python -m paddle_serving_server_gpu.serve --model bert_seq20_model --thread 10 --port 9292 --gpu_ids 0
+```
+
+| 参数 | 含义 |
+| ------- | -------------------------- |
+| model | server端配置与模型文件路径 |
+| thread | server端线程数 |
+| port | server端端口号 |
+| gpu_ids | GPU索引号 |
+
+#### Step3:客户端数据预处理逻辑
+
+Paddle Serving内建了很多经典典型对应的数据预处理逻辑,对于中文Bert语义表示的计算,我们采用paddle_serving_app下的ChineseBertReader类进行数据预处理,开发者可以很容易获得一个原始的中文句子对应的多个模型输入字段。
+
+安装paddle_serving_app
+
+```shell
+pip install paddle_serving_app
+```
+
+#### Step4:客户端访问
+
+客户端脚本 bert_client.py内容如下
+
+``` python
+import os
+import sys
+from paddle_serving_client import Client
+from paddle_serving_app import ChineseBertReader
+
+reader = ChineseBertReader()
+fetch = ["pooled_output"]
+endpoint_list = ["127.0.0.1:9292"]
+client = Client()
+client.load_client_config("bert_seq20_client/serving_client_conf.prototxt")
+client.connect(endpoint_list)
+
+for line in sys.stdin:
+ feed_dict = reader.process(line)
+ result = client.predict(feed=feed_dict, fetch=fetch)
+```
+
+执行
+
+```shell
+cat data.txt | python bert_client.py
+```
+
+从data.txt文件中读取样例,并将结果打印到标准输出。
+
+### 性能测试
+
+我们基于V100对基于Paddle Serving研发的Bert-As-Service的性能进行测试并与基于Tensorflow实现的Bert-As-Service进行对比,从用户配置的角度,采用相同的batch size和并发数进行压力测试,得到4块V100下的整体吞吐性能数据如下。
+
+
diff --git a/doc/COMPILE.md b/doc/COMPILE.md
index 986a136a8102f5fc154d04461f289b08bb0415ae..2858eb120d0f9d8157392a598faad2ef6cbafd87 100644
--- a/doc/COMPILE.md
+++ b/doc/COMPILE.md
@@ -1,33 +1,35 @@
-# 如何编译PaddleServing
+# How to compile PaddleServing
-## 编译环境设置
+([简体中文](./COMPILE_CN.md)|English)
+
+## Compilation environment requirements
- os: CentOS 6u3
-- gcc: 4.8.2及以上
-- go: 1.9.2及以上
-- git:2.17.1及以上
-- cmake:3.2.2及以上
-- python:2.7.2及以上
+- gcc: 4.8.2 and later
+- go: 1.9.2 and later
+- git: 2.17.1 and later
+- cmake: 3.2.2 and later
+- python: 2.7.2 and later
-推荐使用Docker准备Paddle Serving编译环境:[CPU Dockerfile.devel](../tools/Dockerfile.devel),[GPU Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
+It is recommended to use Docker to prepare the compilation environment for Paddle Serving: [CPU Dockerfile.devel](../tools/Dockerfile.devel), [GPU Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
-## 获取代码
+## Get Code
``` python
git clone https://github.com/PaddlePaddle/Serving
cd Serving && git submodule update --init --recursive
```
-## PYTHONROOT设置
+## PYTHONROOT Setting
```shell
-# 例如python的路径为/usr/bin/python,可以设置PYTHONROOT
+# for example, if the path of python is /usr/bin/python, you can set PYTHONROOT to /usr
export PYTHONROOT=/usr/
```
-## 编译Server部分
+## Compile Server
-### 集成CPU版本Paddle Inference Library
+### Integrate the CPU version of Paddle Inference Library
``` shell
mkdir build && cd build
@@ -35,9 +37,9 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PY
make -j10
```
-可以执行`make install`把目标产出放在`./output`目录下,cmake阶段需添加`-DCMAKE_INSTALL_PREFIX=./output`选项来指定存放路径。
+You can execute `make install` to place the build outputs in the `./output` directory; to do so, add the `-DCMAKE_INSTALL_PREFIX=./output` option to the cmake command shown above to specify the install path.
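For example, the CPU build with an install prefix might look like this (same flags as above, plus `-DCMAKE_INSTALL_PREFIX`):

``` shell
mkdir build && cd build
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
      -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
      -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
      -DCMAKE_INSTALL_PREFIX=./output -DSERVER=ON ..
make -j10 && make install
```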
-### 集成GPU版本Paddle Inference Library
+### Integrate the GPU version of Paddle Inference Library
``` shell
mkdir build && cd build
@@ -45,9 +47,9 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PY
make -j10
```
-执行`make install`可以把目标产出放在`./output`目录下。
+Execute `make install` to place the build outputs in the `./output` directory.
-## 编译Client部分
+## Compile Client
``` shell
mkdir build && cd build
@@ -55,27 +57,28 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PY
make -j10
```
-执行`make install`可以把目标产出放在`./output`目录下。
+Execute `make install` to place the build outputs in the `./output` directory.
-## 编译App部分
+## Compile the App
```bash
mkdir build && cd build
-cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DCMAKE_INSTALL_PREFIX=./output -DAPP=ON ..
+cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DAPP=ON ..
make
```
-## 安装wheel包
+## Install wheel package
+
+Whichever part you compile (Client, Server or App), after compilation, install the whl package under `python/dist/`.
-无论是Client端,Server端还是App部分,编译完成后,安装`python/dist/`下的whl包即可。
+## Note
-## 注意事项
+When running the Python server, the `SERVING_BIN` environment variable is checked. If you want to use your own compiled binary, set this environment variable to the path of the corresponding binary file, usually `export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`.
-运行python端Server时,会检查`SERVING_BIN`环境变量,如果想使用自己编译的二进制文件,请将设置该环境变量为对应二进制文件的路径,通常是`export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`。
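For example (the model directory below is only a placeholder; any server started afterwards picks up the variable):

``` shell
# use the locally compiled server binary instead of the one bundled in the wheel
export SERVING_BIN=${BUILD_DIR}/core/general-server/serving
python -m paddle_serving_server.serve --model uci_housing_model --port 9292
```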
-## CMake选项说明
+## CMake Option Description
-| 编译选项 | 说明 | 默认 |
+| Compile Options | Description | Default |
| :--------------: | :----------------------------------------: | :--: |
| WITH_AVX | Compile Paddle Serving with AVX intrinsics | OFF |
| WITH_MKL | Compile Paddle Serving with MKL support | OFF |
@@ -87,35 +90,36 @@ make
| WITH_ELASTIC_CTR | Compile ELASITC-CTR solution | OFF |
| PACK | Compile for whl | OFF |
-### WITH_GPU选项
+### WITH_GPU Option
-Paddle Serving通过PaddlePaddle预测库支持在GPU上做预测。WITH_GPU选项用于检测系统上CUDA/CUDNN等基础库,如检测到合适版本,在编译PaddlePaddle时就会编译出GPU版本的OP Kernel。
+Paddle Serving supports prediction on the GPU through the PaddlePaddle inference library. The WITH_GPU option is used to detect basic libraries such as CUDA/CUDNN on the system. If an appropriate version is detected, the GPU Kernel will be compiled when PaddlePaddle is compiled.
-在裸机上编译Paddle Serving GPU版本,需要安装这些基础库:
+To compile the Paddle Serving GPU version on bare metal, you need to install these basic libraries:
- CUDA
- CuDNN
- NCCL2
-这里要注意的是:
+Note here:
+
+1. The versions of basic libraries such as CUDA/CuDNN installed on the system where Serving is compiled need to be compatible with the actual GPU device. For example, a Tesla V100 card requires at least CUDA 9.0. If the versions of the basic libraries used at compile time are too low, the generated GPU code will not be compatible with the actual hardware, which can cause the Serving process to fail to start or lead to serious problems such as a coredump.
+2. On the system that runs Paddle Serving, install a CUDA driver compatible with the actual GPU device, and install basic libraries compatible with the CUDA/CuDNN versions used at compile time. If the CUDA/CuDNN versions installed on the running system are lower than the versions used at compile time, strange problems such as cuda function call failures may occur.
-1. 编译Serving所在的系统上所安装的CUDA/CUDNN等基础库版本,需要兼容实际的GPU设备。例如,Tesla V100卡至少要CUDA 9.0。如果编译时所用CUDA等基础库版本过低,由于生成的GPU代码和实际硬件设备不兼容,会导致Serving进程无法启动,或出现coredump等严重问题。
-2. 运行Paddle Serving的系统上安装与实际GPU设备兼容的CUDA driver,并安装与编译期所用的CUDA/CuDNN等版本兼容的基础库。如运行Paddle Serving的系统上安装的CUDA/CuDNN的版本低于编译时所用版本,可能会导致奇怪的cuda函数调用失败等问题。
-以下是PaddlePaddle发布版本所使用的基础库版本匹配关系,供参考:
+For reference, the following table lists the basic library versions used by the PaddlePaddle release builds:
| | CUDA | CuDNN | NCCL2 |
| :----: | :-----: | :----------------------: | :----: |
| CUDA 8 | 8.0.61 | CuDNN 7.1.2 for CUDA 8.0 | 2.1.4 |
| CUDA 9 | 9.0.176 | CuDNN 7.3.1 for CUDA 9.0 | 2.2.12 |
-### 如何让Paddle Serving编译系统探测到CuDNN库
+### How to make the Paddle Serving build system detect the CuDNN library
-从NVIDIA developer官网下载对应版本CuDNN并在本地解压后,在cmake编译命令中增加`-DCUDNN_ROOT`参数,指定CuDNN库所在路径。
+Download the corresponding version of CuDNN from the NVIDIA developer website and decompress it locally, then add the `-DCUDNN_ROOT` parameter to the cmake command to specify the path of the CuDNN library.
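For instance (the CuDNN path below is a placeholder), the GPU build command can be extended like this:

``` shell
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ \
      -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so \
      -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python \
      -DSERVER=ON -DWITH_GPU=ON \
      -DCUDNN_ROOT=/path/to/cudnn ..
```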
-### 如何让Paddle Serving编译系统探测到nccl库
+### How to make the Paddle Serving build system detect the nccl library
-从NVIDIA developer官网下载对应版本nccl2库并解压后,增加如下环境变量 (以nccl2.1.4为例):
+After downloading the corresponding version of the nccl2 library from the NVIDIA developer official website and decompressing it, add the following environment variables (take nccl2.1.4 as an example):
```shell
export C_INCLUDE_PATH=/path/to/nccl2/cuda8/nccl_2.1.4-1+cuda8.0_x86_64/include:$C_INCLUDE_PATH
diff --git a/doc/COMPILE_CN.md b/doc/COMPILE_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..bbe509f7c09e9e9082f1e7a2bfa6b823af7c2cc0
--- /dev/null
+++ b/doc/COMPILE_CN.md
@@ -0,0 +1,126 @@
+# 如何编译PaddleServing
+
+(简体中文|[English](./COMPILE.md))
+
+## 编译环境设置
+
+- os: CentOS 6u3
+- gcc: 4.8.2及以上
+- go: 1.9.2及以上
+- git:2.17.1及以上
+- cmake:3.2.2及以上
+- python:2.7.2及以上
+
+推荐使用Docker准备Paddle Serving编译环境:[CPU Dockerfile.devel](../tools/Dockerfile.devel),[GPU Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
+
+## 获取代码
+
+``` python
+git clone https://github.com/PaddlePaddle/Serving
+cd Serving && git submodule update --init --recursive
+```
+
+## PYTHONROOT设置
+
+```shell
+# 例如python的路径为/usr/bin/python,可以设置PYTHONROOT
+export PYTHONROOT=/usr/
+```
+
+## 编译Server部分
+
+### 集成CPU版本Paddle Inference Library
+
+``` shell
+mkdir build && cd build
+cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DSERVER=ON ..
+make -j10
+```
+
+可以执行`make install`把目标产出放在`./output`目录下,cmake阶段需添加`-DCMAKE_INSTALL_PREFIX=./output`选项来指定存放路径。
+
+### 集成GPU版本Paddle Inference Library
+
+``` shell
+mkdir build && cd build
+cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DSERVER=ON -DWITH_GPU=ON ..
+make -j10
+```
+
+执行`make install`可以把目标产出放在`./output`目录下。
+
+## 编译Client部分
+
+``` shell
+mkdir build && cd build
+cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DCLIENT=ON ..
+make -j10
+```
+
+执行`make install`可以把目标产出放在`./output`目录下。
+
+## 编译App部分
+
+```bash
+mkdir build && cd build
+cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DCMAKE_INSTALL_PREFIX=./output -DAPP=ON ..
+make
+```
+
+## 安装wheel包
+
+无论是Client端,Server端还是App部分,编译完成后,安装`python/dist/`下的whl包即可。
+
+## 注意事项
+
+运行python端Server时,会检查`SERVING_BIN`环境变量,如果想使用自己编译的二进制文件,请将设置该环境变量为对应二进制文件的路径,通常是`export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`。
+
+## CMake选项说明
+
+| 编译选项 | 说明 | 默认 |
+| :--------------: | :----------------------------------------: | :--: |
+| WITH_AVX | Compile Paddle Serving with AVX intrinsics | OFF |
+| WITH_MKL | Compile Paddle Serving with MKL support | OFF |
+| WITH_GPU | Compile Paddle Serving with NVIDIA GPU | OFF |
+| CUDNN_ROOT | Define CuDNN library and header path | |
+| CLIENT | Compile Paddle Serving Client | OFF |
+| SERVER | Compile Paddle Serving Server | OFF |
+| APP | Compile Paddle Serving App package | OFF |
+| WITH_ELASTIC_CTR | Compile ELASITC-CTR solution | OFF |
+| PACK | Compile for whl | OFF |
+
+### WITH_GPU选项
+
+Paddle Serving通过PaddlePaddle预测库支持在GPU上做预测。WITH_GPU选项用于检测系统上CUDA/CUDNN等基础库,如检测到合适版本,在编译PaddlePaddle时就会编译出GPU版本的OP Kernel。
+
+在裸机上编译Paddle Serving GPU版本,需要安装这些基础库:
+
+- CUDA
+- CuDNN
+- NCCL2
+
+这里要注意的是:
+
+1. 编译Serving所在的系统上所安装的CUDA/CUDNN等基础库版本,需要兼容实际的GPU设备。例如,Tesla V100卡至少要CUDA 9.0。如果编译时所用CUDA等基础库版本过低,由于生成的GPU代码和实际硬件设备不兼容,会导致Serving进程无法启动,或出现coredump等严重问题。
+2. 运行Paddle Serving的系统上安装与实际GPU设备兼容的CUDA driver,并安装与编译期所用的CUDA/CuDNN等版本兼容的基础库。如运行Paddle Serving的系统上安装的CUDA/CuDNN的版本低于编译时所用版本,可能会导致奇怪的cuda函数调用失败等问题。
+
+以下是PaddlePaddle发布版本所使用的基础库版本匹配关系,供参考:
+
+| | CUDA | CuDNN | NCCL2 |
+| :----: | :-----: | :----------------------: | :----: |
+| CUDA 8 | 8.0.61 | CuDNN 7.1.2 for CUDA 8.0 | 2.1.4 |
+| CUDA 9 | 9.0.176 | CuDNN 7.3.1 for CUDA 9.0 | 2.2.12 |
+
+### 如何让Paddle Serving编译系统探测到CuDNN库
+
+从NVIDIA developer官网下载对应版本CuDNN并在本地解压后,在cmake编译命令中增加`-DCUDNN_ROOT`参数,指定CuDNN库所在路径。
+
+### 如何让Paddle Serving编译系统探测到nccl库
+
+从NVIDIA developer官网下载对应版本nccl2库并解压后,增加如下环境变量 (以nccl2.1.4为例):
+
+```shell
+export C_INCLUDE_PATH=/path/to/nccl2/cuda8/nccl_2.1.4-1+cuda8.0_x86_64/include:$C_INCLUDE_PATH
+export CPLUS_INCLUDE_PATH=/path/to/nccl2/cuda8/nccl_2.1.4-1+cuda8.0_x86_64/include:$CPLUS_INCLUDE_PATH
+export LD_LIBRARY_PATH=/path/to/nccl2/cuda8/nccl_2.1.4-1+cuda8.0_x86_64/lib/:$LD_LIBRARY_PATH
+```
diff --git a/doc/DESIGN.md b/doc/DESIGN.md
index 34983801759c8e2d25fb336decbc5828687e4211..5d00d02171dccf07bfdafb9cdd85222a92c20113 100644
--- a/doc/DESIGN.md
+++ b/doc/DESIGN.md
@@ -1,45 +1,47 @@
-# Paddle Serving设计方案
+# Paddle Serving Design
-## 1. 项目背景
+([简体中文](./DESIGN_CN.md)|English)
-PaddlePaddle是公司开源的机器学习框架,广泛支持各种深度学习模型的定制化开发; Paddle serving是Paddle的在线预测部分,与Paddle模型训练环节无缝衔接,提供机器学习预测云服务。本文将从模型、服务、接入等层面,自底向上描述Paddle Serving设计方案。
+## 1. Background
-1. 模型是Paddle Serving预测的核心,包括模型数据和推理计算的管理;
-2. 预测框架封装模型推理计算,对外提供RPC接口,对接不同上游;
-3. 预测服务SDK提供一套接入框架
+PaddlePaddle is Baidu's open source machine learning framework, which supports a wide range of customized development of deep learning models; Paddle Serving is the online prediction part of Paddle, which connects seamlessly with Paddle model training and provides cloud services for machine learning prediction. This article describes the design of Paddle Serving from the bottom up, covering the model, service, and access layers.
-最终形成一套完整的serving解决方案。
+1. The model is the core of Paddle Serving prediction, including the management of model data and inference computation;
+2. The prediction framework encapsulates the model inference computation and exposes an RPC interface to connect with different upstream systems;
+3. The prediction service SDK provides a set of client access frameworks
-## 2. 名词解释
+The result is a complete serving solution.
-- baidu-rpc 百度官方开源RPC框架,支持多种常见通信协议,提供基于protobuf的自定义接口体验
-- Variant Paddle Serving架构对一个最小预测集群的抽象,其特点是内部所有实例(副本)完全同质,逻辑上对应一个model的一个固定版本
-- Endpoint 多个Variant组成一个Endpoint,逻辑上看,Endpoint代表一个model,Endpoint内部的Variant代表不同的版本
-- OP PaddlePaddle用来封装一种数值计算的算子,Paddle Serving用来表示一种基础的业务操作算子,核心接口是inference。OP通过配置其依赖的上游OP,将多个OP串联成一个workflow
-- Channel 一个OP所有请求级中间数据的抽象;OP之间通过Channel进行数据交互
-- Bus 对一个线程中所有channel的管理,以及根据DAG之间的DAG依赖图对OP和Channel两个集合间的访问关系进行调度
-- Stage Workflow按照DAG描述的拓扑图中,属于同一个环节且可并行执行的OP集合
-- Node 由某个Op算子类结合参数配置组成的Op算子实例,也是Workflow中的一个执行单元
-- Workflow 按照DAG描述的拓扑,有序执行每个OP的inference接口
-- DAG/Workflow 由若干个相互依赖的Node组成,每个Node均可通过特定接口获得Request对象,节点Op通过依赖关系获得其前置Op的输出对象,最后一个Node的输出默认就是Response对象
-- Service 对一次pv的请求封装,可配置若干条Workflow,彼此之间复用当前PV的Request对象,然后各自并行/串行执行,最后将Response写入对应的输出slot中;一个Paddle-serving进程可配置多套Service接口,上游根据ServiceName决定当前访问的Service接口。
+## 2. Terms explanation
-## 3. Python Interface设计
+- **baidu-rpc**: Baidu's official open source RPC framework; it supports multiple common communication protocols and provides a customizable protobuf-based interface
+- **Variant**: the abstraction of a minimal prediction cluster in the Paddle Serving architecture; all of its internal instances (replicas) are completely homogeneous, and it logically corresponds to one fixed version of one model
+- **Endpoint**: multiple Variants form an Endpoint; logically, an Endpoint represents one model, and the Variants within the Endpoint represent different versions of it
+- **OP**: in PaddlePaddle, an operator encapsulating a numerical computation; in Paddle Serving, a basic business operator whose core interface is inference. An OP declares the upstream OPs it depends on, so that multiple OPs can be chained into a workflow
+- **Channel**: the abstraction of all request-level intermediate data of an OP; OPs exchange data through Channels
+- **Bus**: manages all Channels in a thread and schedules the access between the set of OPs and the set of Channels according to the DAG dependency graph
+- **Stage**: in the topology described by the workflow DAG, a set of OPs that belong to the same phase and can be executed in parallel
+- **Node**: an OP instance composed of an OP class together with its parameter configuration; it is also an execution unit in a Workflow
+- **Workflow**: executes the inference interface of each OP in order, following the topology described by the DAG
+- **DAG/Workflow**: consists of several interdependent Nodes. Each Node can obtain the Request object through a specific interface, a Node's OP obtains the output objects of its predecessor OPs through the dependency relationship, and the output of the last Node is the Response object by default
+- **Service**: the encapsulation of one request (PV). A Service can be configured with several Workflows, which share the Request object of the current PV, execute in parallel or serially, and finally write the Response into the corresponding output slot; one Paddle Serving process can be configured with multiple Service interfaces, and the upstream selects the Service to access by ServiceName
-### 3.1 核心目标:
+## 3. Python Interface Design
-一套Paddle Serving的动态库,支持Paddle保存的通用模型的远程预估服务,通过Python Interface调用PaddleServing底层的各种功能。
+### 3.1 Core Targets:
-### 3.2 通用模型:
+A set of Paddle Serving dynamic libraries that support remote prediction services for general models saved by Paddle, with the various underlying capabilities of Paddle Serving invoked through the Python interface.
-能够使用Paddle Inference Library进行预测的模型,在训练过程中保存的模型,包含Feed Variable和Fetch Variable
+### 3.2 General Model:
-### 3.3 整体设计:
+A model that can be used for inference with the Paddle Inference Library and is saved during the training process, containing Feed Variables and Fetch Variables
-用户通过Python Client启动Client和Server,Python API有检查互联和待访问模型是否匹配的功能
-Python API背后调用的是Paddle Serving实现的client和server对应功能的pybind,互传的信息通过RPC实现
-Client Python API当前有两个简单的功能,load_inference_conf和predict,分别用来执行加载待预测的模型和预测
-Server Python API主要负责加载预估模型,以及生成Paddle Serving需要的各种配置,包括engines,workflow,resource等
+### 3.3 Overall design:
+
+- The user starts the Client and Server through the Python client. The Python API can check whether the interconnection and the model to be accessed match.
+- The Python API calls the pybind bindings of the corresponding client and server functions implemented by Paddle Serving, and the information exchanged between the two sides is transmitted via RPC.
+- The Client Python API currently has two simple functions, load_inference_conf and predict, which load the model to be used for prediction and perform prediction, respectively (see the sketch below).
+- The Server Python API is mainly responsible for loading the inference model and generating the various configurations required by Paddle Serving, including engines, workflow, resources, etc.
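A minimal sketch of the client-side flow only (the configuration file name and the feed/fetch aliases are placeholders; the call pattern mirrors the BERT client example in this repository):

``` python
from paddle_serving_client import Client

client = Client()
# load the client-side configuration generated when the servable model was saved
client.load_client_config("serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])

# the feed/fetch aliases are the ones chosen when saving the model
result = client.predict(feed={"words": [1, 2, 3]}, fetch=["prediction"])
```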
### 3.4 Server Interface
@@ -49,10 +51,10 @@ Server Python API主要负责加载预估模型,以及生成Paddle Serving需
-### 3.6 训练过程中使用的Client io
+### 3.6 Client io used during Training
-PaddleServing设计可以在训练过程中使用的保存模型接口,与Paddle保存inference model的接口基本一致,feed_var_dict与fetch_var_dict
-可以为输入和输出变量起别名,serving启动需要读取的配置会保存在client端和server端的保存目录中。
+Paddle Serving provides a save-model interface that can be used during the training process. It is basically the same as the interface Paddle uses to save an inference model; feed_var_dict and fetch_var_dict
+can be used to alias the input and output variables, and the configuration that Serving needs to read at startup is saved in both the client-side and server-side output directories.
``` python
def save_model(server_model_folder,
@@ -62,29 +64,29 @@ def save_model(server_model_folder,
main_program=None)
```
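For example, a call used elsewhere in these docs looks like the following (variable names are illustrative; `data` and `prediction` are the feed and fetch Variables of the training program):

``` python
import paddle_serving_client.io as serving_io

# "words" / "prediction" are aliases for the feed and fetch variables
serving_io.save_model("serving_model", "client_conf",
                      {"words": data}, {"prediction": prediction},
                      fluid.default_main_program())
```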
-## 4. Paddle Serving底层框架
+## 4. Paddle Serving Underlying Framework
-
+
-**模型管理框架**:对接多种机器学习平台的模型文件,向上提供统一的inference接口
-**业务调度框架**:对各种不同预测模型的计算逻辑进行抽象,提供通用的DAG调度框架,通过DAG图串联不同的算子,共同完成一次预测服务。该抽象模型使用户可以方便的实现自己的计算逻辑,同时便于算子共用。(用户搭建自己的预测服务,很大一部分工作是搭建DAG和提供算子的实现)
-**PredictService**:对外部提供的预测服务接口封装。通过protobuf定义与客户端的通信字段。
+**Model Management Framework**: Connects model files of multiple machine learning platforms and provides a unified inference interface
+**Business Scheduling Framework**: Abstracts the calculation logic of various different inference models, provides a general DAG scheduling framework, and connects different operators through DAG diagrams to complete a prediction service together. This abstract model allows users to conveniently implement their own calculation logic, and at the same time facilitates operator sharing. (Users build their own forecasting services. A large part of their work is to build DAGs and provide operators.)
+**Predict Service**: Encapsulation of the externally provided prediction service interface. Define communication fields with the client through protobuf.
-### 4.1 模型管理框架
+### 4.1 Model Management Framework
-模型管理框架负责管理机器学习框架训练出来的模型,总体可抽象成模型加载、模型数据和模型推理等3个层次。
+The model management framework is responsible for managing the models trained by machine learning frameworks. It can be abstracted into three levels: model loading, model data, and model inference.
-#### 模型加载
+#### Model Loading
-将模型从磁盘加载到内存,支持多版本、热加载、增量更新等功能
+Loads the model from disk into memory; supports multi-version, hot loading, incremental update, etc.
-#### 模型数据
+#### Model data
-模型在内存中的数据结构,集成fluid预测lib
+The in-memory data structure of the model, integrating the fluid inference library
#### inferencer
-向上为预测服务提供统一的预测接口
+Provides a unified inference interface upward for the prediction service
```C++
class FluidFamilyCore {
@@ -94,54 +96,54 @@ class FluidFamilyCore {
};
```
-### 4.2 业务调度框架
+### 4.2 Business Scheduling Framework
-#### 4.2.1 预测服务Service
+#### 4.2.1 Inference Service
-参考TF框架的模型计算的抽象思想,将业务逻辑抽象成DAG图,由配置驱动,生成workflow,跳过C++代码编译。业务的每个具体步骤,对应一个具体的OP,OP可配置自己依赖的上游OP。OP之间消息传递统一由线程级Bus和channel机制实现。例如,一个简单的预测服务的服务过程,可以抽象成读请求数据->调用预测接口->写回预测结果等3个步骤,相应的实现到3个OP: ReaderOp->ClassifyOp->WriteOp
+Borrowing the abstraction of model computation from the TensorFlow framework, the business logic is abstracted into a DAG, driven by configuration to generate a workflow, so that C++ code compilation can be skipped. Each specific step of the business corresponds to a specific OP, and an OP can be configured with the upstream OPs it depends on. Message passing between OPs is implemented uniformly by the thread-level Bus and Channel mechanisms. For example, the process of a simple prediction service can be abstracted into 3 steps, reading the request data -> calling the prediction interface -> writing back the prediction result, which are implemented as 3 OPs: ReaderOp->ClassifyOp->WriteOp
-
+
-关于OP之间的依赖关系,以及通过OP组建workflow,可以参考[从零开始写一个预测服务](CREATING.md)的相关章节
+Regarding the dependencies between OPs, and the establishment of workflows through OPs, you can refer to [从零开始写一个预测服务](./deprecated/CREATING.md) (simplified Chinese Version)
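Roughly, such a workflow is described in a prototxt configuration similar to the demo configs (node and type names below are illustrative and may differ between versions):

```
workflows {
  name: "workflow1"
  workflow_type: "Sequence"
  nodes {
    name: "image_reader_op"
    type: "ReaderOp"
  }
  nodes {
    name: "image_classify_op"
    type: "ClassifyOp"
    dependencies {
      name: "image_reader_op"
      mode: "RO"
    }
  }
  nodes {
    name: "write_json_op"
    type: "WriteJsonOp"
    dependencies {
      name: "image_classify_op"
      mode: "RO"
    }
  }
}
```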
-服务端实例透视图
+Server instance perspective
-
+
-#### 4.2.2 Paddle Serving的多服务机制
+#### 4.2.2 Paddle Serving Multi-Service Mechanism
-
+
-Paddle Serving实例可以同时加载多个模型,每个模型用一个Service(以及其所配置的workflow)承接服务。可以参考[Demo例子中的service配置文件](../demo-serving/conf/service.prototxt)了解如何为serving实例配置多个service
+Paddle Serving instances can load multiple models at the same time, and each model uses a Service (and its configured workflow) to undertake services. You can refer to [service configuration file in Demo example](../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for the serving instance
-#### 4.2.3 业务调度层级关系
+#### 4.2.3 Hierarchical relationship of business scheduling
-从客户端看,一个Paddle Serving service从顶向下可分为Service, Endpoint, Variant等3个层级
+From the client's perspective, a Paddle Serving service can be divided into three levels: Service, Endpoint, and Variant from top to bottom.
-
+
-一个Service对应一个预测模型,模型下有1个endpoint。模型的不同版本,通过endpoint下多个variant概念实现:
-同一个模型预测服务,可以配置多个variant,每个variant有自己的下游IP列表。客户端代码可以对各个variant配置相对权重,以达到调节流量比例的关系(参考[客户端配置](CLIENT_CONFIGURE.md)第3.2节中关于variant_weight_list的说明)。
+One Service corresponds to one inference model, and there is one endpoint under the model. Different versions of the model are implemented through multiple variant concepts under endpoint:
+The same model prediction service can be configured with multiple variants, and each variant has its own downstream IP list. The client code can assign relative weights to the variants to adjust the traffic ratio (refer to the description of variant_weight_list in Section 3.2 of [Client Configuration](./deprecated/CLIENT_CONFIGURE.md)).
-
+
-## 5. 用户接口
+## 5. User Interface
-在满足一定的接口规范前提下,服务框架不对用户数据字段做任何约束,以应对各种预测服务的不同业务接口。Baidu-rpc继承了Protobuf serice的接口,用户按照Protobuf语法规范描述Request和Response业务接口。Paddle Serving基于Baidu-rpc框架搭建,默认支持该特性。
+Under the premise of meeting certain interface specifications, the service framework does not place any restrictions on user data fields, so that it can serve the different business interfaces of various prediction services. Baidu-rpc inherits the protobuf service interface, and users describe the Request and Response business interfaces according to the protobuf syntax specification. Paddle Serving is built on the Baidu-rpc framework and supports this feature by default.
-无论通信协议如何变化,框架只需确保Client和Server间通信协议和业务数据两种信息的格式同步,即可保证正常通信。这些信息又可细分如下:
+No matter how the communication protocol changes, the framework only needs to ensure that the communication protocol between the client and server and the format of the business data are synchronized to ensure normal communication. This information can be broken down as follows:
-- 协议:Server和Client之间事先约定的、确保相互识别数据格式的包头信息。Paddle Serving用Protobuf作为基础通信格式
-- 数据:用来描述Request和Response的接口,例如待预测样本数据,和预测返回的打分。包括:
- - 数据字段:请求包Request和返回包Response两种数据结构包含的字段定义
- - 描述接口:跟协议接口类似,默认支持Protobuf
+- Protocol: the packet header information agreed in advance between Server and Client to ensure that both sides recognize the data format. Paddle Serving uses Protobuf as the basic communication format
+- Data: the interfaces used to describe the Request and Response, such as the sample data to be predicted and the scores returned by the prediction, including:
+  - Data fields: the field definitions contained in the Request and Response data structures
+  - Description interface: similar to the protocol interface, Protobuf is supported by default
-### 5.1 数据压缩方法
+### 5.1 Data Compression Method
-Baidu-rpc内置了snappy, gzip, zlib等数据压缩方法,可在配置文件中配置(参考[客户端配置](CLIENT_CONFIGURE.md)第3.1节关于compress_type的介绍)
+Baidu-rpc has built-in data compression methods such as snappy, gzip, zlib, which can be configured in the configuration file (refer to [Client Configuration](./deprecated/CLIENT_CONFIGURE.md) Section 3.1 for an introduction to compress_type)
-### 5.2 C++ SDK API接口
+### 5.2 C++ SDK API Interface
```C++
class PredictorApi {
@@ -176,7 +178,7 @@ class Predictor {
```
-### 5.3 OP相关接口
+### 5.3 Interfaces related to Op
```C++
class Op {
@@ -258,7 +260,7 @@ class Op {
```
-### 5.4 框架相关接口
+### 5.4 Interfaces related to framework
Service
diff --git a/doc/DESIGN_CN.md b/doc/DESIGN_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..124e826c4591c89cb14d25153f4c9a3096ea8dfb
--- /dev/null
+++ b/doc/DESIGN_CN.md
@@ -0,0 +1,377 @@
+# Paddle Serving设计方案
+
+(简体中文|[English](./DESIGN.md))
+
+## 1. 项目背景
+
+PaddlePaddle是百度开源的机器学习框架,广泛支持各种深度学习模型的定制化开发; Paddle Serving是Paddle的在线预测部分,与Paddle模型训练环节无缝衔接,提供机器学习预测云服务。本文将从模型、服务、接入等层面,自底向上描述Paddle Serving设计方案。
+
+1. 模型是Paddle Serving预测的核心,包括模型数据和推理计算的管理;
+2. 预测框架封装模型推理计算,对外提供RPC接口,对接不同上游;
+3. 预测服务SDK提供一套接入框架
+
+最终形成一套完整的serving解决方案。
+
+## 2. 名词解释
+
+- **baidu-rpc**: 百度官方开源RPC框架,支持多种常见通信协议,提供基于protobuf的自定义接口体验
+- **Variant**: Paddle Serving架构对一个最小预测集群的抽象,其特点是内部所有实例(副本)完全同质,逻辑上对应一个model的一个固定版本
+- **Endpoint**: 多个Variant组成一个Endpoint,逻辑上看,Endpoint代表一个model,Endpoint内部的Variant代表不同的版本
+- **OP**: PaddlePaddle用来封装一种数值计算的算子,Paddle Serving用来表示一种基础的业务操作算子,核心接口是inference。OP通过配置其依赖的上游OP,将多个OP串联成一个workflow
+- **Channel**: 一个OP所有请求级中间数据的抽象;OP之间通过Channel进行数据交互
+- **Bus**: 对一个线程中所有channel的管理,以及根据DAG之间的DAG依赖图对OP和Channel两个集合间的访问关系进行调度
+- **Stage**: Workflow按照DAG描述的拓扑图中,属于同一个环节且可并行执行的OP集合
+- **Node**: 由某个OP算子类结合参数配置组成的OP算子实例,也是Workflow中的一个执行单元
+- **Workflow**: 按照DAG描述的拓扑,有序执行每个OP的inference接口
+- **DAG/Workflow**: 由若干个相互依赖的Node组成,每个Node均可通过特定接口获得Request对象,节点OP通过依赖关系获得其前置OP的输出对象,最后一个Node的输出默认就是Response对象
+- **Service**: 对一次PV的请求封装,可配置若干条Workflow,彼此之间复用当前PV的Request对象,然后各自并行/串行执行,最后将Response写入对应的输出slot中;一个Paddle-serving进程可配置多套Service接口,上游根据ServiceName决定当前访问的Service接口。
+
+## 3. Python Interface设计
+
+### 3.1 核心目标:
+
+完成一整套Paddle Serving的动态库,支持Paddle保存的通用模型的远程预估服务,通过Python Interface调用PaddleServing底层的各种功能。
+
+### 3.2 通用模型:
+
+能够使用Paddle Inference Library进行预测的模型,在训练过程中保存的模型,包含Feed Variable和Fetch Variable
+
+### 3.3 整体设计:
+
+- 用户通过Python Client启动Client和Server,Python API有检查互联和待访问模型是否匹配的功能
+- Python API背后调用的是Paddle Serving实现的client和server对应功能的pybind,互传的信息通过RPC实现
+- Client Python API当前有两个简单的功能,load_inference_conf和predict,分别用来执行加载待预测的模型和预测
+- Server Python API主要负责加载预估模型,以及生成Paddle Serving需要的各种配置,包括engines,workflow,resource等
+
+### 3.4 Server Interface
+
+
+
+### 3.5 Client Interface
+
+
+
+### 3.6 训练过程中使用的Client io
+
+PaddleServing设计可以在训练过程中使用的保存模型接口,与Paddle保存inference model的接口基本一致,feed_var_dict与fetch_var_dict
+可以为输入和输出变量起别名,serving启动需要读取的配置会保存在client端和server端的保存目录中。
+
+``` python
+def save_model(server_model_folder,
+ client_config_folder,
+ feed_var_dict,
+ fetch_var_dict,
+ main_program=None)
+```
+
+## 4. Paddle Serving底层框架
+
+
+
+**模型管理框架**:对接多种机器学习平台的模型文件,向上提供统一的inference接口
+**业务调度框架**:对各种不同预测模型的计算逻辑进行抽象,提供通用的DAG调度框架,通过DAG图串联不同的算子,共同完成一次预测服务。该抽象模型使用户可以方便的实现自己的计算逻辑,同时便于算子共用。(用户搭建自己的预测服务,很大一部分工作是搭建DAG和提供算子的实现)
+**PredictService**:对外部提供的预测服务接口封装。通过protobuf定义与客户端的通信字段。
+
+### 4.1 模型管理框架
+
+模型管理框架负责管理机器学习框架训练出来的模型,总体可抽象成模型加载、模型数据和模型推理等3个层次。
+
+#### 模型加载
+
+将模型从磁盘加载到内存,支持多版本、热加载、增量更新等功能
+
+#### 模型数据
+
+模型在内存中的数据结构,集成fluid预测lib
+
+#### inferencer
+
+向上为预测服务提供统一的预测接口
+
+```C++
+class FluidFamilyCore {
+ virtual bool Run(const void* in_data, void* out_data);
+ virtual int create(const std::string& data_path);
+ virtual int clone(void* origin_core);
+};
+```
+
+### 4.2 业务调度框架
+
+#### 4.2.1 预测服务Service
+
+参考TF框架的模型计算的抽象思想,将业务逻辑抽象成DAG图,由配置驱动,生成workflow,跳过C++代码编译。业务的每个具体步骤,对应一个具体的OP,OP可配置自己依赖的上游OP。OP之间消息传递统一由线程级Bus和channel机制实现。例如,一个简单的预测服务的服务过程,可以抽象成读请求数据->调用预测接口->写回预测结果等3个步骤,相应的实现到3个OP: ReaderOp->ClassifyOp->WriteOp
+
+
+
+关于OP之间的依赖关系,以及通过OP组建workflow,可以参考[从零开始写一个预测服务](CREATING.md)的相关章节
+
+服务端实例透视图
+
+
+
+
+#### 4.2.2 Paddle Serving的多服务机制
+
+
+
+Paddle Serving实例可以同时加载多个模型,每个模型用一个Service(以及其所配置的workflow)承接服务。可以参考[Demo例子中的service配置文件](../tools/cpp_examples/demo-serving/conf/service.prototxt)了解如何为serving实例配置多个service
+
+#### 4.2.3 业务调度层级关系
+
+从客户端看,一个Paddle Serving service从顶向下可分为Service, Endpoint, Variant等3个层级
+
+
+
+一个Service对应一个预测模型,模型下有1个endpoint。模型的不同版本,通过endpoint下多个variant概念实现:
+同一个模型预测服务,可以配置多个variant,每个variant有自己的下游IP列表。客户端代码可以对各个variant配置相对权重,以达到调节流量比例的关系(参考[客户端配置](./deprecated/CLIENT_CONFIGURE.md)第3.2节中关于variant_weight_list的说明)。
+
+
+
+## 5. 用户接口
+
+在满足一定的接口规范前提下,服务框架不对用户数据字段做任何约束,以应对各种预测服务的不同业务接口。Baidu-rpc继承了Protobuf serice的接口,用户按照Protobuf语法规范描述Request和Response业务接口。Paddle Serving基于Baidu-rpc框架搭建,默认支持该特性。
+
+无论通信协议如何变化,框架只需确保Client和Server间通信协议和业务数据两种信息的格式同步,即可保证正常通信。这些信息又可细分如下:
+
+- 协议:Server和Client之间事先约定的、确保相互识别数据格式的包头信息。Paddle Serving用Protobuf作为基础通信格式
+- 数据:用来描述Request和Response的接口,例如待预测样本数据,和预测返回的打分。包括:
+ - 数据字段:请求包Request和返回包Response两种数据结构包含的字段定义
+ - 描述接口:跟协议接口类似,默认支持Protobuf
+
+### 5.1 数据压缩方法
+
+Baidu-rpc内置了snappy, gzip, zlib等数据压缩方法,可在配置文件中配置(参考[客户端配置](./deprecated/CLIENT_CONFIGURE.md)第3.1节关于compress_type的介绍)
+
+### 5.2 C++ SDK API接口
+
+```C++
+class PredictorApi {
+ public:
+ int create(const char* path, const char* file);
+ int thrd_initialize();
+ int thrd_clear();
+ int thrd_finalize();
+ void destroy();
+
+ Predictor* fetch_predictor(std::string ep_name);
+ int free_predictor(Predictor* predictor);
+};
+
+class Predictor {
+ public:
+ // synchronize interface
+ virtual int inference(google::protobuf::Message* req,
+ google::protobuf::Message* res) = 0;
+
+ // asynchronize interface
+ virtual int inference(google::protobuf::Message* req,
+ google::protobuf::Message* res,
+ DoneType done,
+ brpc::CallId* cid = NULL) = 0;
+
+ // synchronize interface
+ virtual int debug(google::protobuf::Message* req,
+ google::protobuf::Message* res,
+ butil::IOBufBuilder* debug_os) = 0;
+};
+
+```
+
+### 5.3 OP相关接口
+
+```C++
+class Op {
+ // ------Getters for Channel/Data/Message of dependent OP-----
+
+ // Get the Channel object of dependent OP
+ Channel* mutable_depend_channel(const std::string& op);
+
+ // Get the Channel object of dependent OP
+ const Channel* get_depend_channel(const std::string& op) const;
+
+ template
-### 2.4 微服务插件模式
-由于Paddle Serving底层采用基于C++的通信组件,并且核心框架也是基于C/C++编写,当用户想要在服务端定义复杂的前处理与后处理逻辑时,一种办法是修改Paddle Serving底层框架,重新编译源码。另一种方式可以通过在服务端嵌入轻量级的Web服务,通过在Web服务中实现更复杂的预处理逻辑,从而搭建一套逻辑完整的服务。当访问量超过了Web服务能够接受的范围,开发者有足够的理由开发一些高性能的C++预处理逻辑,并嵌入到Serving的原生服务库中。Web服务和RPC服务的关系以及他们的组合方式可以参考下文`用户类型`中的说明。
+### 2.4 Micro Service Plugin
+The underlying communication components and the core framework of Paddle Serving are written in C/C++, so when a user wants to define complex preprocessing and postprocessing logic on the server side, one option is to modify the underlying Paddle Serving framework and recompile the source code. Another option is to embed a lightweight Web Service on the server side and implement the more complex preprocessing logic there, so that a logically complete service can be built. When the traffic exceeds what the Web Service can handle, developers have a good reason to implement high-performance C++ preprocessing logic and embed it into Serving's native service library. The relationship between the Web Service and the RPC Service, and how they can be combined, is described in `User Types` below.
-## 3. 工业级特性
+## 3. Industrial Features
-### 3.1 分布式稀疏参数索引
+### 3.1 Distributed Sparse Parameter Indexing
-分布式稀疏参数索引通常在广告推荐中出现,并与分布式训练配合形成完整的离线-在线一体化部署。下图解释了其中的流程,产品的在线服务接受用户请求后将请求发送给预估服务,同时系统会记录用户的请求以进行相应的训练日志处理和拼接。离线分布式训练系统会针对流式产出的训练日志进行模型增量训练,而增量产生的模型会配送至分布式稀疏参数索引服务,同时对应的稠密的模型参数也会配送至在线的预估服务。在线服务由两部分组成,一部分是针对用户的请求提取特征后,将需要进行模型的稀疏参数索引的特征发送请求给分布式稀疏参数索引服务,针对分布式稀疏参数索引服务返回的稀疏参数再进行后续深度学习模型的计算流程,从而完成预估。
+Distributed sparse parameter indexing is commonly seen in advertising and recommendation scenarios, and is often coupled with distributed training to form a complete offline-online deployment. The figure below explains the flow of a typical online recommendation system. When the product's online service receives a user request, it forwards the request to the prediction service, and the system also records the request so that training logs can be processed and joined. The offline distributed training system performs incremental training on the streaming training logs, the incrementally produced model is delivered to the distributed sparse parameter indexing service, and the corresponding dense model parameters are delivered to the online prediction service. The online service first extracts features from the user request and sends the features that require sparse parameter lookup to the distributed sparse parameter indexing service; the returned sparse parameters, together with the dense input features, are then fed into the subsequent deep learning model computation to complete the prediction.
-
-为什么要使用Paddle Serving提供的分布式稀疏参数索引服务?1)在一些推荐场景中,模型的输入特征规模通常可以达到上千亿,单台机器无法支撑T级别模型在内存的保存,因此需要进行分布式存储。2)Paddle Serving提供的分布式稀疏参数索引服务,具有并发请求多个节点的能力,从而以较低的延时完成预估服务。
+
+Why do we need the distributed sparse parameter indexing service provided by Paddle Serving? 1) In some recommendation scenarios, the number of input features can reach hundreds of billions, so a single machine cannot hold a terabyte-scale model in memory and distributed storage is required. 2) The distributed sparse parameter indexing service of Paddle Serving can query multiple nodes concurrently, so the prediction service can be completed with low latency.
-### 3.2 模型管理、在线A/B流量测试、模型热加载
+### 3.2 Model Management, Online A/B Test, Model Hot Reloading
-Paddle Serving的C++引擎支持模型管理、在线A/B流量测试、模型热加载等功能,当前在Python API还有没完全开放这部分功能的配置,敬请期待。
+Paddle Serving's C++ engine supports model management, online A/B test and model hot reloading. The configuration of these features is not yet fully exposed through the Python API; please stay tuned for a later release.
-## 4. 用户类型
-Paddle Serving面向的用户提供RPC和HTTP两种访问协议。对于HTTP协议,我们更倾向于流量中小型的服务使用,并且对延时没有严格要求的AI服务开发者。对于RPC协议,我们面向流量较大,对延时要求更高的用户,此外RPC的客户端可能也处在一个大系统的服务中,这种情况下非常适合使用Paddle Serving提供的RPC服务。对于使用分布式稀疏参数索引服务而言,Paddle Serving的用户不需要关心底层的细节,其调用本质也是通过RPC服务再调用RPC服务。下图给出了当前设计的Paddle Serving可能会使用Serving服务的几种场景。
+## 4. User Types
+Paddle Serving provides both RPC and HTTP access protocols. The HTTP protocol is intended for small and medium traffic services whose developers have no strict latency requirements. The RPC protocol targets high-traffic, low-latency services; in addition, the RPC client may itself be part of a larger system, which is an ideal case for the RPC service of Paddle Serving. For the distributed sparse parameter indexing service, Paddle Serving users do not need to care about the underlying details; the call is essentially an RPC service calling another RPC service. The following figure shows several scenarios in which the current design of Paddle Serving may be used.
@@ -173,11 +180,11 @@ Paddle Serving面向的用户提供RPC和HTTP两种访问协议。对于HTTP协
-对于普通的模型而言(具体指通过Serving提供的IO保存的模型,并且没有对模型进行后处理),用户使用RPC服务不需要额外的开发即可实现服务启动,但需要开发一些Client端的代码来使用服务。对于Web服务的开发,需要用户现在Paddle Serving提供的Web Service框架中进行前后处理的开发,从而实现整个HTTP服务。
+For ordinary models (models saved through the IO interface provided by Serving, without extra postprocessing), users do not need any additional development to start an RPC service, but they do need to write some client-side code to use the service. To develop a Web service, users implement the pre- and post-processing in the Web Service framework provided by Paddle Serving to obtain a complete HTTP service.
-### 4.1 Web服务开发
+### 4.1 Web Service Development
-Web服务有很多开源的框架,Paddle Serving当前集成了Flask框架,但这部分对用户不可见,在未来可能会提供性能更好的Web框架作为底层HTTP服务集成引擎。用户需要继承WebService,从而实现对rpc服务的输入输出进行加工的目的。
+There are many open source web service frameworks. Paddle Serving currently integrates the Flask framework, but this is invisible to users; a web framework with better performance may be provided as the underlying HTTP service engine in the future. Users need to inherit from WebService in order to process the inputs and outputs of the RPC service.
``` python
from paddle_serving_server.web_service import WebService
from imdb_reader import IMDBDataset
import sys


class IMDBService(WebService):
    def prepare_dict(self, args={}):
        if len(args) == 0:
            exit(-1)
        self.dataset = IMDBDataset()
        self.dataset.load_resource(args["dict_file_path"])

    def preprocess(self, feed={}, fetch=[]):
        if "words" not in feed:
            exit(-1)
        res_feed = {}
        res_feed["words"] = self.dataset.get_words_only(feed["words"])[0]
        return res_feed, fetch


imdb_service = IMDBService(name="imdb")
imdb_service.load_model_config(sys.argv[1])
imdb_service.prepare_server(
    workdir=sys.argv[2], port=int(sys.argv[3]), device="cpu")
imdb_service.prepare_dict({"dict_file_path": sys.argv[4]})
imdb_service.run_server()
```
-`WebService`作为基类,提供将用户接受的HTTP请求转化为RPC输入的接口`preprocess`,同时提供对RPC请求返回的结果进行后处理的接口`postprocess`,继承`WebService`的子类,可以定义各种类型的成员函数。`WebService`的启动命令和普通RPC服务提供的启动API一致。
+`WebService` is a base class that provides the `preprocess` interface, which converts a received HTTP request into the RPC input, and the `postprocess` interface, which post-processes the result returned by the RPC call. A subclass of `WebService` can define any member functions it needs, and the startup command of a `WebService` is the same as the startup API of an ordinary RPC service.
-## 5. 未来计划
+## 5. Future Plan
-### 5.1 有向无环图结构定义开放
-当前版本开放的python API仅支持用户定义Sequential类型的执行流,如果想要进行Server进程内复杂的计算,需要增加对应的用户API。
+### 5.1 Open DAG definition API
+The Python API opened in the current version only supports user-defined Sequential execution flows; user APIs for defining a DAG will be added for complex in-process computation on the Server side.
-### 5.2 云端自动部署能力
-为了方便用户更容易将Paddle的预测模型部署到线上,Paddle Serving在接下来的版本会提供Kubernetes生态下任务编排的工具。
+### 5.2 Auto Deployment on Cloud
+To make it easier to deploy Paddle inference models online, Paddle Serving will provide task orchestration tools for the Kubernetes ecosystem in upcoming releases.
-### 5.3 向量检索、树结构检索
-在推荐与广告场景的召回系统中,通常需要采用基于向量的快速检索或者基于树结构的快速检索,Paddle Serving会对这方面的检索引擎进行集成或扩展。
+### 5.3 Vector Indexing and Tree based Indexing
+In the recall systems of recommendation and advertising scenarios, fast vector-based retrieval or tree-based retrieval is usually required; Paddle Serving will integrate or extend retrieval engines for these tasks.
diff --git a/doc/DESIGN_DOC_CN.md b/doc/DESIGN_DOC_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..2a63d56593dc47a5ca69f9c5c324710ee6dc3fc6
--- /dev/null
+++ b/doc/DESIGN_DOC_CN.md
@@ -0,0 +1,224 @@
+# Paddle Serving设计文档
+
+(简体中文|[English](./DESIGN_DOC.md))
+
+## 1. 整体设计目标
+
+- 长期使命:Paddle Serving是一个PaddlePaddle开源的在线服务框架,长期目标就是围绕着人工智能落地的最后一公里提供越来越专业、可靠、易用的服务。
+
+- 工业级:为了达到工业级深度学习模型在线部署的要求,
+Paddle Serving提供很多大规模场景需要的部署功能:1)分布式稀疏参数索引功能;2)高并发底层通信能力;3)模型管理、在线A/B流量测试、模型热加载。
+
+- 简单易用:为了让使用Paddle的用户能够以极低的成本部署模型,PaddleServing设计了一套与Paddle训练框架无缝打通的预测部署API,普通模型可以使用一行命令进行服务部署。
+
+- 功能扩展:当前,Paddle Serving支持C++、Python、Golang的客户端,未来也会面向不同类型的客户新增多种语言的客户端。在Paddle Serving的框架设计方面,尽管当前Paddle Serving以支持Paddle模型的部署为核心功能,
+用户可以很容易嵌入其他的机器学习库部署在线预测。
+
+## 2. 模块设计与实现
+
+### 2.1 Python API接口设计
+
+#### 2.1.1 训练模型的保存
+Paddle的模型预测需要重点关注的内容:1)模型的输入变量;2)模型的输出变量;3)模型结构和模型参数。Paddle Serving Python API提供用户可以在训练过程中保存模型的接口,并将Paddle Serving在部署阶段需要保存的配置打包保存,一个示例如下:
+``` python
+import paddle_serving_client.io as serving_io
+serving_io.save_model("serving_model", "client_conf",
+                      {"words": data}, {"prediction": prediction},
+                      fluid.default_main_program())
+```
+代码示例中,`{"words": data}`和`{"prediction": prediction}`分别指定了模型的输入和输出,`"words"`和`"prediction"`是输出和输出变量的别名,设计别名的目的是为了使开发者能够记忆自己训练模型的输入输出对应的字段。`data`和`prediction`则是Paddle训练过程中的`[Variable](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Variable_cn.html#variable)`,通常代表张量([Tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Tensor_cn.html#tensor))或变长张量([LodTensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/basic_concept/lod_tensor.html#lodtensor))。调用保存命令后,会按照用户指定的`"serving_model"`和`"client_conf"`生成两个目录,内容如下:
+``` shell
+.
+├── client_conf
+│   ├── serving_client_conf.prototxt
+│   └── serving_client_conf.stream.prototxt
+└── serving_model
+    ├── embedding_0.w_0
+    ├── fc_0.b_0
+    ├── fc_0.w_0
+    ├── fc_1.b_0
+    ├── fc_1.w_0
+    ├── fc_2.b_0
+    ├── fc_2.w_0
+    ├── lstm_0.b_0
+    ├── lstm_0.w_0
+    ├── __model__
+    ├── serving_server_conf.prototxt
+    └── serving_server_conf.stream.prototxt
+```
+其中,`"serving_client_conf.prototxt"`和`"serving_server_conf.prototxt"`是Paddle Serving的Client和Server端需要加载的配置,`"serving_client_conf.stream.prototxt"`和`"serving_server_conf.stream.prototxt"`是配置文件的二进制形式。`"serving_model"`下保存的其他内容和Paddle保存的模型文件是一致的。我们会考虑未来在Paddle框架中直接保存可服务的配置,实现配置保存对用户无感。
+
+#### 2.1.2 服务端模型加载
+
+服务端的预测逻辑可以通过Paddle Serving Server端的API进行人工定义,一个例子:
+``` python
+import paddle_serving_server as serving
+op_maker = serving.OpMaker()
+read_op = op_maker.create('general_reader')
+dist_kv_op = op_maker.create('general_dist_kv')
+general_infer_op = op_maker.create('general_infer')
+general_response_op = op_maker.create('general_response')
+
+op_seq_maker = serving.OpSeqMaker()
+op_seq_maker.add_op(read_op)
+op_seq_maker.add_op(dist_kv_op)
+op_seq_maker.add_op(general_infer_op)
+op_seq_maker.add_op(general_response_op)
+```
+
+当前Paddle Serving在Server端支持的主要Op请参考如下列表:
+
+
+
+
+
+### 2.4 微服务插件模式
+由于Paddle Serving底层采用基于C++的通信组件,并且核心框架也是基于C/C++编写,当用户想要在服务端定义复杂的前处理与后处理逻辑时,一种办法是修改Paddle Serving底层框架,重新编译源码。另一种方式可以通过在服务端嵌入轻量级的Web服务,通过在Web服务中实现更复杂的预处理逻辑,从而搭建一套逻辑完整的服务。当访问量超过了Web服务能够接受的范围,开发者有足够的理由开发一些高性能的C++预处理逻辑,并嵌入到Serving的原生服务库中。Web服务和RPC服务的关系以及他们的组合方式可以参考下文`用户类型`中的说明。
+
+## 3. 工业级特性
+
+### 3.1 分布式稀疏参数索引
+
+分布式稀疏参数索引通常在广告推荐中出现,并与分布式训练配合形成完整的离线-在线一体化部署。下图解释了其中的流程,产品的在线服务接受用户请求后将请求发送给预估服务,同时系统会记录用户的请求以进行相应的训练日志处理和拼接。离线分布式训练系统会针对流式产出的训练日志进行模型增量训练,而增量产生的模型会配送至分布式稀疏参数索引服务,同时对应的稠密的模型参数也会配送至在线的预估服务。在线服务由两部分组成,一部分是针对用户的请求提取特征后,将需要进行模型的稀疏参数索引的特征发送请求给分布式稀疏参数索引服务,针对分布式稀疏参数索引服务返回的稀疏参数再进行后续深度学习模型的计算流程,从而完成预估。
+
+
+
+
+
+
+为什么要使用Paddle Serving提供的分布式稀疏参数索引服务?1)在一些推荐场景中,模型的输入特征规模通常可以达到上千亿,单台机器无法支撑T级别模型在内存的保存,因此需要进行分布式存储。2)Paddle Serving提供的分布式稀疏参数索引服务,具有并发请求多个节点的能力,从而以较低的延时完成预估服务。
+
+### 3.2 模型管理、在线A/B流量测试、模型热加载
+
+Paddle Serving的C++引擎支持模型管理、在线A/B流量测试、模型热加载等功能,当前在Python API还有没完全开放这部分功能的配置,敬请期待。
+
+## 4. 用户类型
+Paddle Serving面向的用户提供RPC和HTTP两种访问协议。对于HTTP协议,我们更倾向于流量中小型的服务使用,并且对延时没有严格要求的AI服务开发者。对于RPC协议,我们面向流量较大,对延时要求更高的用户,此外RPC的客户端可能也处在一个大系统的服务中,这种情况下非常适合使用Paddle Serving提供的RPC服务。对于使用分布式稀疏参数索引服务而言,Paddle Serving的用户不需要关心底层的细节,其调用本质也是通过RPC服务再调用RPC服务。下图给出了当前设计的Paddle Serving可能会使用Serving服务的几种场景。
+
+
+
+
+
+
+对于普通的模型而言(具体指通过Serving提供的IO保存的模型,并且没有对模型进行后处理),用户使用RPC服务不需要额外的开发即可实现服务启动,但需要开发一些Client端的代码来使用服务。对于Web服务的开发,需要用户现在Paddle Serving提供的Web Service框架中进行前后处理的开发,从而实现整个HTTP服务。
+
+### 4.1 Web服务开发
+
+Web服务有很多开源的框架,Paddle Serving当前集成了Flask框架,但这部分对用户不可见,在未来可能会提供性能更好的Web框架作为底层HTTP服务集成引擎。用户需要继承WebService,从而实现对rpc服务的输入输出进行加工的目的。
+
+``` python
+from paddle_serving_server.web_service import WebService
+from imdb_reader import IMDBDataset
+import sys
+
+
+class IMDBService(WebService):
+    def prepare_dict(self, args={}):
+        if len(args) == 0:
+            exit(-1)
+        self.dataset = IMDBDataset()
+        self.dataset.load_resource(args["dict_file_path"])
+
+    def preprocess(self, feed={}, fetch=[]):
+        if "words" not in feed:
+            exit(-1)
+        res_feed = {}
+        res_feed["words"] = self.dataset.get_words_only(feed["words"])[0]
+        return res_feed, fetch
+
+
+imdb_service = IMDBService(name="imdb")
+imdb_service.load_model_config(sys.argv[1])
+imdb_service.prepare_server(
+    workdir=sys.argv[2], port=int(sys.argv[3]), device="cpu")
+imdb_service.prepare_dict({"dict_file_path": sys.argv[4]})
+imdb_service.run_server()
+```
+
+`WebService`作为基类,提供将用户接受的HTTP请求转化为RPC输入的接口`preprocess`,同时提供对RPC请求返回的结果进行后处理的接口`postprocess`,继承`WebService`的子类,可以定义各种类型的成员函数。`WebService`的启动命令和普通RPC服务提供的启动API一致。
+
+## 5. 未来计划
+
+### 5.1 有向无环图结构定义开放
+当前版本开放的python API仅支持用户定义Sequential类型的执行流,如果想要进行Server进程内复杂的计算,需要增加对应的用户API。
+
+### 5.2 云端自动部署能力
+为了方便用户更容易将Paddle的预测模型部署到线上,Paddle Serving在接下来的版本会提供Kubernetes生态下任务编排的工具。
+
+### 5.3 向量检索、树结构检索
+在推荐与广告场景的召回系统中,通常需要采用基于向量的快速检索或者基于树结构的快速检索,Paddle Serving会对这方面的检索引擎进行集成或扩展。
diff --git a/doc/DESIGN_DOC_EN.md b/doc/DESIGN_DOC_EN.md
deleted file mode 100644
index 2f8a36ea6686b5add2a7e4e407eabfd14167490d..0000000000000000000000000000000000000000
--- a/doc/DESIGN_DOC_EN.md
+++ /dev/null
@@ -1,227 +0,0 @@
-# Paddle Serving Design Doc
-
-## 1. Design Objectives
-
-- Long Term Vision: Online deployment of deep learning models will be a user-facing application in the future. Any AI developer will face the problem of deploying an online service for his or her trained model.
-Paddle Serving is the official open source online deployment framework. The long term goal of Paddle Serving is to provide professional, reliable and easy-to-use online service to the last mile of AI application.
-
-- Easy-To-Use: For algorithmic developers to quickly deploy their models online, Paddle Serving designs APIs that can be used with Paddle's training process seamlessly, most Paddle models can be deployed as a service with one line command.
-
-- Industrial Oriented: To meet industrial deployment requirements, Paddle Serving supports lots of large-scale deployment functions: 1) Distributed Sparse Embedding Indexing. 2) Highly concurrent underlying communications. 3) Model Management, online A/B test, model online loading.
-
-- Extensibility: Paddle Serving supports C++, Python and Golang client, and will support more clients with different languages. It is very easy to extend Paddle Serving to support other machine learning inference library, although currently Paddle inference library is the only official supported inference backend.
-
-
-## 2. Module design and implementation
-
-### 2.1 Python API interface design
-
-#### 2.1.1 save a servable model
-The inference phase of Paddle model focuses on 1) input variables of the model. 2) output variables of the model. 3) model structure and model parameters.
Paddle Serving Python API provides a `save_model` interface for trained model, and save necessary information for Paddle Serving to use during deployment phase. An example is as follows: - -``` python -import paddle_serving_client.io as serving_io -serving_io.save_model("serving_model", "client_conf", - {"words": data}, {"prediction": prediction}, - fluid.default_main_program()) -``` -In the example, `{"words": data}` and `{"prediction": prediction}` assign the inputs and outputs of a model. `"words"` and `"prediction"` are alias names of inputs and outputs. The design of alias name is to help developers to memorize model inputs and model outputs. `data` and `prediction` are Paddle `[Variable](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Variable_cn.html#variable)` in training phase that often represents ([Tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Tensor_cn.html#tensor)) or ([LodTensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/basic_concept/lod_tensor.html#lodtensor)). When the `save_model` API is called, two directories called `"serving_model"` and `"client_conf"` will be generated. The content of the saved model is as follows: - -``` shell -. -├── client_conf -│ ├── serving_client_conf.prototxt -│ └── serving_client_conf.stream.prototxt -└── serving_model - ├── embedding_0.w_0 - ├── fc_0.b_0 - ├── fc_0.w_0 - ├── fc_1.b_0 - ├── fc_1.w_0 - ├── fc_2.b_0 - ├── fc_2.w_0 - ├── lstm_0.b_0 - ├── lstm_0.w_0 - ├── __model__ - ├── serving_server_conf.prototxt - └── serving_server_conf.stream.prototxt -``` -`"serving_client_conf.prototxt"` and `"serving_server_conf.prototxt"` are the client side and the server side configurations of Paddle Serving, and `"serving_client_conf.stream.prototxt"` and `"serving_server_conf.stream.prototxt"` are the corresponding parts. Other contents saved in the directory are the same as Paddle saved inference model. We are considering to support `save_model` interface in Paddle training framework so that a user is not aware of the servable configurations. - -#### 2.1.2 Model loading on the server side - -Prediction logics on the server side can be defined through Paddle Serving Server API with a few lines of code, an example is as follows: -``` python -import paddle_serving_server as serving -op_maker = serving.OpMaker() -read_op = op_maker.create('general_reader') -dist_kv_op = op_maker.create('general_dist_kv') -general_infer_op = op_maker.create('general_infer') -general_response_op = op_maker.create('general_response') - -op_seq_maker = serving.OpSeqMaker() -op_seq_maker.add_op(read_op) -op_seq_maker.add_op(dist_kv_op) -op_seq_maker.add_op(general_infer_op) -op_seq_maker.add_op(general_response_op) -``` -Current Paddle Serving supports operator list on the server side as follows: - -
-
-
-
-
- -### 2.4 Micro service plugin -The underlying communication of Paddle Serving is implemented with C++ as well as the core framework, it is hard for users who do not familiar with C++ to implement new Paddle Serving Server Operators. Another approach is to use the light-weighted Web Service in Paddle Serving Server that can be viewed as a plugin. A user can implement complex data preprocessing and postprocessing logics to build a complex AI service. If access of the AI service has a large volumn, it is worth to implement the service with high performance Paddle Serving Server operators. The relationship between Web Service and RPC Service can be referenced in `User Type`. - -## 3. Industrial Features - -### 3.1 Distributed Sparse Parameter Indexing - -Distributed Sparse Parameter Indexing is commonly seen in advertising and recommendation scenarios, and is often used coupled with distributed training. The figure below explains a commonly seen architecture for online recommendation. When the recommendation service receives a request from a user, the system will automatically collects training log for the offline distributed online training. Mean while, the request is sent to Paddle Serving Server. For sparse features, distributed sparse parameter index service is called so that sparse parameters can be looked up. The dense input features together with the looked up sparse model parameters are fed into the Paddle Inference Node of the DAG in Paddle Serving Server. Then the score can be responsed through RPC to product service for item ranking. - -
-
-
-
-
-
-Why do we need to support distributed sparse parameter indexing in Paddle Serving? 1) In some recommendation scenarios, the number of features can be up to hundreds of billions that a single node can not hold the parameters within random access memory. 2) Paddle Serving supports distributed sparse parameter indexing that can couple with paddle inference. Users do not need to do extra work to have a low latency inference engine with hundreds of billions of parameters.
-
-### 3.2 Model Management, online A/B test, Model Online Reloading
-
-Paddle Serving's C++ engine supports model management, online A/B test and model online reloading. Currently, python API is not released yet, please wait for the next release.
-
-## 4. User Types
-Paddle Serving provides RPC and HTTP protocol for users. For HTTP service, we recommend users with median or small traffic services to use, and the latency is not a strict requirement. For RPC protocol, we recommend high traffic services and low latency required services to use. For users who use distributed sparse parameter indexing built-in service, it is not necessary to care about the underlying details of communication. The following figure gives out several scenarios that user may want to use Paddle Serving.
-
-
-
-
-
-
-
-
-For servable models saved from Paddle Serving IO API, users do not need to do extra coding work to startup a service, but may need some coding work on the client side. For development of Web Service plugin, a user needs to provide implementation of Web Service's preprocessing and postprocessing work if needed to get a HTTP service.
-
-### 4.1 Web Service Development
-
-Web Service has lots of open sourced framework. Currently Paddle Serving uses Flask as built-in service framework, and users are not aware of this. More efficient web service will be integrated in the furture if needed.
-
-``` python
-from paddle_serving_server.web_service import WebService
-from imdb_reader import IMDBDataset
-import sys
-
-
-class IMDBService(WebService):
- def prepare_dict(self, args={}):
- if len(args) == 0:
- exit(-1)
- self.dataset = IMDBDataset()
- self.dataset.load_resource(args["dict_file_path"])
-
- def preprocess(self, feed={}, fetch=[]):
- if "words" not in feed:
- exit(-1)
- res_feed = {}
- res_feed["words"] = self.dataset.get_words_only(feed["words"])[0]
- return res_feed, fetch
-
-
-imdb_service = IMDBService(name="imdb")
-imdb_service.load_model_config(sys.argv[1])
-imdb_service.prepare_server(
- workdir=sys.argv[2], port=int(sys.argv[3]), device="cpu")
-imdb_service.prepare_dict({"dict_file_path": sys.argv[4]})
-imdb_service.run_server()
-```
-
-`WebService` is a Base Class, providing inheritable interfaces such `preprocess` and `postprocess` for users to implement. In the inherited class of `WebService` class, users can define any functions they want and the startup function interface is the same as RPC service.
-
-## 5. Future Plan
-
-### 5.1 Open DAG definition API
-Current version of Paddle Serving Server supports sequential type of execution flow. DAG definition API can be more helpful to users on complex tasks.
-
-### 5.2 Auto Deployment on Cloud
-In order to make deployment more easily on public cloud, Paddle Serving considers to provides Operators on Kubernetes in submitting a service job.
-
-### 5.3 Vector Indexing and Tree based Indexing
-In recommendation and advertisement systems, it is commonly seen to use vector based index or tree based indexing service to do candidate retrievals. These retrieval tasks will be built-in services of Paddle Serving.
diff --git a/doc/DOCKER.md b/doc/DOCKER.md
index 325ec906c04c708d8e62ff2ae2900bc367e049b6..0e865c66e2da32a4e0ed15df9f2632b98ffbcedf 100644
--- a/doc/DOCKER.md
+++ b/doc/DOCKER.md
@@ -1,53 +1,55 @@
-# Docker编译环境准备
+# Docker compilation environment preparation
-## 环境要求
+([简体中文](./DOCKER_CN.md)|English)
-+ 开发机上已安装Docker。
-+ 编译GPU版本需要安装nvidia-docker。
+## Environment requirements
-## Dockerfile文件
++ Docker is installed on the development machine.
++ Compiling the GPU version requires nvidia-docker.
-[CPU版本Dockerfile](../Dockerfile)
+## Dockerfile
-[GPU版本Dockerfile](../Dockerfile.gpu)
+[CPU Version Dockerfile](../tools/Dockerfile)
-## 使用方法
+[GPU Version Dockerfile](../tools/Dockerfile.gpu)
-### 构建Docker镜像
+## Instructions
-建立新目录,复制Dockerfile内容到该目录下Dockerfile文件。
+### Building Docker Image
-执行
+Create a new directory and copy the Dockerfile to this directory.
+
+Run
```bash
docker build -t serving_compile:cpu .
```
-或者
+Or
```bash
docker build -t serving_compile:cuda9 .
```
-## 进入Docker
+## Enter Docker Container
-CPU版本请执行
+For the CPU version, run
```bash
docker run -it serving_compile:cpu bash
```
-GPU版本请执行
+For the GPU version, run
```bash
docker run -it --runtime=nvidia -it serving_compile:cuda9 bash
```
-## Docker编译出的可执行文件支持的环境列表
+## List of environments supported by executables compiled in Docker
-经过验证的环境列表如下:
+The list of verified environments is as follows:
-| CPU Docker编译出的可执行文件支持的系统环境 |
+| System Environment Supported by CPU Docker Compiled Executables |
| -------------------------- |
| Centos6 |
| Centos7 |
@@ -56,7 +58,7 @@ docker run -it --runtime=nvidia -it serving_compile:cuda9 bash
-| GPU Docker编译出的可执行文件支持的系统环境 |
+| System Environment Supported by GPU Docker Compiled Executables |
| ---------------------------------- |
| Centos6_cuda9_cudnn7 |
| Centos7_cuda9_cudnn7 |
@@ -65,6 +67,6 @@ docker run -it --runtime=nvidia -it serving_compile:cuda9 bash
-**备注:**
-+ 若执行预编译版本出现找不到libcrypto.so.10、libssl.so.10的情况,可以将Docker环境中的/usr/lib64/libssl.so.10与/usr/lib64/libcrypto.so.10复制到可执行文件所在目录。
-+ CPU预编译版本仅可在CPU机器上执行,GPU预编译版本仅可在GPU机器上执行。
+**Remarks:**
++ If you cannot find libcrypto.so.10 and libssl.so.10 when running the pre-compiled executable, copy /usr/lib64/libssl.so.10 and /usr/lib64/libcrypto.so.10 from the Docker environment into the directory where the executable is located.
++ The CPU pre-compiled version can only be executed on CPU machines, and the GPU pre-compiled version can only be executed on GPU machines.
diff --git a/doc/DOCKER_CN.md b/doc/DOCKER_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..92cc3ac6ea34d6399d6204ff7b9ec2d12b412601
--- /dev/null
+++ b/doc/DOCKER_CN.md
@@ -0,0 +1,72 @@
+# Docker编译环境准备
+
+(简体中文|[English](./DOCKER.md))
+
+## 环境要求
+
++ 开发机上已安装Docker。
++ 编译GPU版本需要安装nvidia-docker。
+
+## Dockerfile文件
+
+[CPU版本Dockerfile](../tools/Dockerfile)
+
+[GPU版本Dockerfile](../tools/Dockerfile.gpu)
+
+## 使用方法
+
+### 构建Docker镜像
+
+建立新目录,复制Dockerfile内容到该目录下Dockerfile文件。
+
+执行
+
+```bash
+docker build -t serving_compile:cpu .
+```
+
+或者
+
+```bash
+docker build -t serving_compile:cuda9 .
+```
+
+## 进入Docker
+
+CPU版本请执行
+
+```bash
+docker run -it serving_compile:cpu bash
+```
+
+GPU版本请执行
+
+```bash
+docker run -it --runtime=nvidia -it serving_compile:cuda9 bash
+```
+
+## Docker编译出的可执行文件支持的环境列表
+
+经过验证的环境列表如下:
+
+| CPU Docker编译出的可执行文件支持的系统环境 |
+| -------------------------- |
+| Centos6 |
+| Centos7 |
+| Ubuntu16.04 |
+| Ubuntu18.04 |
+
+
+
+| GPU Docker编译出的可执行文件支持的系统环境 |
+| ---------------------------------- |
+| Centos6_cuda9_cudnn7 |
+| Centos7_cuda9_cudnn7 |
+| Ubuntu16.04_cuda9_cudnn7 |
+| Ubuntu16.04_cuda10_cudnn7 |
+
+
+
+**备注:**
++ 若执行预编译版本出现找不到libcrypto.so.10、libssl.so.10的情况,可以将Docker环境中的/usr/lib64/libssl.so.10与/usr/lib64/libcrypto.so.10复制到可执行文件所在目录。
++ CPU预编译版本仅可在CPU机器上执行,GPU预编译版本仅可在GPU机器上执行。
diff --git a/doc/HOT_LOADING_IN_SERVING.md b/doc/HOT_LOADING_IN_SERVING.md
index 093b703786c228558739a87be948e46ee4575045..c4aae6ba884cc35654fbc4d472c8eaa4921a1e4f 100644
--- a/doc/HOT_LOADING_IN_SERVING.md
+++ b/doc/HOT_LOADING_IN_SERVING.md
@@ -1,32 +1,37 @@
# Hot Loading in Paddle Serving
+([简体中文](HOT_LOADING_IN_SERVING_CN.md)|English)
+
## Background
In the industrial scenario, it is usually the remote periodic output model, and the online server needs to pull down the new model to update the old model without service interruption.
+## Server Monitor
+
Paddle Serving provides an automatic monitoring script. After the remote address updates the model, the new model will be pulled to update the local model. At the same time, the `fluid_time_stamp` in the local model folder will be updated to realize model hot loading.
Currently, the following types of Monitors are supported:
| Monitor Type | Description | Specific options |
| :----------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
-| General | Without authentication, you can directly access the download file by `wget` (such as FTP and BOS which do not need authentication) | `general_host` General remote host. |
+| general | Without authentication, you can directly access the download file by `wget` (such as FTP and BOS which do not need authentication) | `general_host` General remote host. |
| HDFS | The remote is HDFS, and relevant commands are executed through HDFS binary | `hdfs_bin` Path of HDFS binary file. |
-| FTP | The remote is FTP, and relevant commands are executed through `ftplib`(Using this monitor, you need to install `ftplib` with command `pip install ftplib`) | `ftp_host` FTP remote host.<br>`ftp_port` FTP remote port.<br>`ftp_username` FTP username. Not used if anonymous access.<br>`ftp_password` FTP password. Not used if anonymous access. |
-| AFS | The remote is AFS, and relevant commands are executed through Hadoop-client | `hadoop_bin` Path of Hadoop binary file.<br>`hadoop_host` AFS host. Not used if set in Hadoop-client.<br>`hadoop_ugi` AFS ugi, Not used if set in Hadoop-client. |
-
-| Monitor Shared options | Description | Default |
-| :--------------------: | :----------------------------------------------------------: | :--------------------: |
-| `type` | Specify the type of monitor | / |
-| `remote_path` | Specify the base path for the remote | / |
-| `remote_model_name` | Specify the model name to be pulled from the remote | / |
-| `remote_donefile_name` | Specify the donefile name that marks the completion of the remote model update | / |
-| `local_path` | Specify local work path | / |
-| `local_model_name` | Specify local model name | / |
-| `local_timestamp_file` | Specify the timestamp file used locally for hot loading, The file is considered to be placed in the `local_path/local_model_name` folder. | `fluid_time_file` |
-| `local_tmp_path` | Specify the path of the folder where temporary files are stored locally. If it does not exist, it will be created automatically. | `_serving_monitor_tmp` |
-| `interval` | Specify the polling interval in seconds. | `10` |
-| `unpacked_filename` | Monitor supports the `tarfile` packaged remote model file. If the remote model is in a packaged format, you need to set this option to tell monitor the name of the extracted file. | `None` |
+| ftp | The remote is FTP, and relevant commands are executed through `ftplib`(Using this monitor, you need to install `ftplib` with command `pip install ftplib`) | `ftp_host` FTP remote host.<br>`ftp_port` FTP remote port.<br>`ftp_username` FTP username. Not used if anonymous access.<br>`ftp_password` FTP password. Not used if anonymous access. |
+| afs | The remote is AFS, and relevant commands are executed through Hadoop-client | `hadoop_bin` Path of Hadoop binary file.<br>`hadoop_host` AFS host. Not used if set in Hadoop-client.<br>`hadoop_ugi` AFS ugi, Not used if set in Hadoop-client. |
+
+| Monitor Shared options | Description | Default |
+| :--------------------: | :----------------------------------------------------------: | :----------------------------------: |
+| `type` | Specify the type of monitor | / |
+| `remote_path` | Specify the base path for the remote | / |
+| `remote_model_name` | Specify the model name to be pulled from the remote | / |
+| `remote_donefile_name` | Specify the donefile name that marks the completion of the remote model update | / |
+| `local_path` | Specify local work path | / |
+| `local_model_name` | Specify local model name | / |
+| `local_timestamp_file` | Specify the timestamp file used locally for hot loading. The file is assumed to be placed in the `local_path/local_model_name` folder. | `fluid_time_file` |
+| `local_tmp_path` | Specify the path of the folder where temporary files are stored locally. If it does not exist, it will be created automatically. | `_serving_monitor_tmp` |
+| `interval` | Specify the polling interval in seconds. | `10` |
+| `unpacked_filename` | The monitor supports remote model files packaged with `tarfile`. If the remote model is in a packaged format, set this option to tell the monitor the name of the file after extraction. | `None` |
+| `debug` | If the `--debug` option is added, more detailed intermediate information will be output. | This option is not added by default. |
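+
+To make these options concrete, the sketch below shows what a hypothetical producer on the remote side could do so that a monitor configured with `remote_model_name='uci_housing.tar.gz'`, `unpacked_filename='uci_housing_model'` and `remote_donefile_name='donefile'` can pick the model up. This is a minimal standard-library sketch; the `publish` helper and the `remote_root` path are illustrative assumptions, not part of Paddle Serving.
+
+```python
+# publish_model.py -- illustrative sketch only, not part of Paddle Serving.
+# Packs a trained model directory with tarfile and then writes a donefile,
+# matching the remote layout that the monitor options above describe.
+import os
+import tarfile
+
+
+def publish(model_dir="uci_housing_model",
+            packed_name="uci_housing.tar.gz",  # --remote_model_name
+            donefile_name="donefile",          # --remote_donefile_name
+            remote_root="remote_base_path"):   # --remote_path (assumed location)
+    os.makedirs(remote_root, exist_ok=True)
+    # Keep the directory name inside the archive so the monitor can be told
+    # --unpacked_filename='uci_housing_model' for the extracted result.
+    with tarfile.open(os.path.join(remote_root, packed_name), "w:gz") as tar:
+        tar.add(model_dir, arcname=os.path.basename(model_dir))
+    # Write the donefile last: its timestamp is what the monitor polls, so it
+    # should only appear once the packed model is completely in place.
+    with open(os.path.join(remote_root, donefile_name), "w") as f:
+        f.write("done\n")
+
+
+if __name__ == "__main__":
+    publish()
+```
+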
The following is an example of HDFSMonitor to show the model hot loading of Paddle Serving.
@@ -150,11 +155,44 @@ python -m paddle_serving_server.monitor \
--remote_model_name='uci_housing.tar.gz' --remote_donefile_name='donefile' \
--local_path='.' --local_model_name='uci_housing_model' \
--local_timestamp_file='fluid_time_file' --local_tmp_path='_tmp' \
- --unpacked_filename='uci_housing_model'
+ --unpacked_filename='uci_housing_model' --debug
```
The above code monitors the remote timestamp file `/donefile` of the remote HDFS address `/` every 10 seconds by polling. When the remote timestamp file changes, the remote model is considered to have been updated. Pull the remote packaging model `/uci_housing.tar.gz` to the local temporary path `./_tmp/uci_housing.tar.gz`. After unpacking to get the model file `./_tmp/uci_housing_model`, update the local model `./uci_housing_model` and the model timestamp file `./uci_housing_model/fluid_time_file` of Paddle Serving.
+The expected output is as follows:
+
+```shell
+2020-04-02 08:38 INFO [monitor.py:85] _hdfs_bin: /hadoop-3.1.2/bin/hdfs
+2020-04-02 08:38 INFO [monitor.py:244] HDFS prefix cmd: /hadoop-3.1.2/bin/hdfs dfs
+2020-04-02 08:38 INFO [monitor.py:85] _remote_path: /
+2020-04-02 08:38 INFO [monitor.py:85] _remote_model_name: uci_housing.tar.gz
+2020-04-02 08:38 INFO [monitor.py:85] _remote_donefile_name: donefile
+2020-04-02 08:38 INFO [monitor.py:85] _local_model_name: uci_housing_model
+2020-04-02 08:38 INFO [monitor.py:85] _local_path: .
+2020-04-02 08:38 INFO [monitor.py:85] _local_timestamp_file: fluid_time_file
+2020-04-02 08:38 INFO [monitor.py:85] _local_tmp_path: _tmp
+2020-04-02 08:38 INFO [monitor.py:85] _interval: 10
+2020-04-02 08:38 DEBUG [monitor.py:249] check cmd: /hadoop-3.1.2/bin/hdfs dfs -stat "%Y" /donefile
+2020-04-02 08:38 DEBUG [monitor.py:251] resp: 1585816693193
+2020-04-02 08:38 INFO [monitor.py:138] doneilfe(donefile) changed.
+2020-04-02 08:38 DEBUG [monitor.py:261] pull cmd: /hadoop-3.1.2/bin/hdfs dfs -get -f /uci_housing.tar.gz _tmp
+2020-04-02 08:38 INFO [monitor.py:144] pull remote model(uci_housing.tar.gz).
+2020-04-02 08:38 INFO [monitor.py:98] unpack remote file(uci_housing.tar.gz).
+2020-04-02 08:38 DEBUG [monitor.py:108] remove packed file(uci_housing.tar.gz).
+2020-04-02 08:38 INFO [monitor.py:110] using unpacked filename: uci_housing_model.
+2020-04-02 08:38 DEBUG [monitor.py:175] update model cmd: cp -r _tmp/uci_housing_model/* ./uci_housing_model
+2020-04-02 08:38 INFO [monitor.py:152] update local model(uci_housing_model).
+2020-04-02 08:38 DEBUG [monitor.py:184] update timestamp cmd: touch ./uci_housing_model/fluid_time_file
+2020-04-02 08:38 INFO [monitor.py:157] update model timestamp(fluid_time_file).
+2020-04-02 08:38 INFO [monitor.py:161] sleep 10s.
+2020-04-02 08:38 DEBUG [monitor.py:249] check cmd: /hadoop-3.1.2/bin/hdfs dfs -stat "%Y" /donefile
+2020-04-02 08:38 DEBUG [monitor.py:251] resp: 1585816693193
+2020-04-02 08:38 INFO [monitor.py:161] sleep 10s.
+```
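+
+The log above follows the loop described earlier: poll the donefile timestamp, pull and unpack the packed model when the timestamp changes, replace the local model, and finally touch the local timestamp file. The sketch below is a simplified, framework-free illustration of that loop, not the actual `paddle_serving_server.monitor` implementation; the `remote` object with `stat` and `pull` methods is an assumed placeholder for the `hdfs dfs -stat` / `hdfs dfs -get` commands shown in the log.
+
+```python
+# monitor_sketch.py -- simplified illustration of the hot-loading loop.
+# Requires Python 3.8+ for shutil.copytree(..., dirs_exist_ok=True).
+import os
+import shutil
+import tarfile
+import time
+
+
+def watch(remote, local_model="uci_housing_model",
+          packed_name="uci_housing.tar.gz", unpacked_name="uci_housing_model",
+          donefile_name="donefile", timestamp_file="fluid_time_file",
+          tmp_path="_tmp", interval=10):
+    last_stamp = None
+    while True:
+        stamp = remote.stat(donefile_name)  # e.g. `hdfs dfs -stat "%Y" /donefile`
+        if stamp != last_stamp:
+            last_stamp = stamp
+            os.makedirs(tmp_path, exist_ok=True)
+            packed = remote.pull(packed_name, tmp_path)  # e.g. `hdfs dfs -get -f ...`
+            with tarfile.open(packed, "r:gz") as tar:  # unpack remote file
+                tar.extractall(tmp_path)
+            # Update the local model, then touch the timestamp file so that
+            # the serving process reloads the new parameters.
+            shutil.copytree(os.path.join(tmp_path, unpacked_name),
+                            local_model, dirs_exist_ok=True)
+            stamp_path = os.path.join(local_model, timestamp_file)
+            with open(stamp_path, "a"):
+                os.utime(stamp_path, None)
+        time.sleep(interval)  # "sleep 10s."
+```
+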
+
+
+
#### View server logs
View the running log of the server with the following command:
diff --git a/doc/HOT_LOADING_IN_SERVING_CN.md b/doc/HOT_LOADING_IN_SERVING_CN.md
index c210bf90b4b982ef4b777ee44e85a1684eacc9cb..688bd6dccf368dad97f3423cfbe6ddaf111defa2 100644
--- a/doc/HOT_LOADING_IN_SERVING_CN.md
+++ b/doc/HOT_LOADING_IN_SERVING_CN.md
@@ -1,5 +1,7 @@
# Paddle Serving中的模型热加载
+(简体中文|[English](HOT_LOADING_IN_SERVING.md))
+
## 背景
在实际的工业场景下,通常是远端定期不间断产出模型,线上服务端需要在服务不中断的情况下拉取新模型对旧模型进行更新迭代。
@@ -29,6 +31,7 @@ Paddle Serving提供了一个自动监控脚本,远端地址更新模型后会
| `local_tmp_path` | 指定本地存放临时文件的文件夹路径,若不存在则自动创建。 | `_serving_monitor_tmp` |
| `interval` | 指定轮询间隔时间,单位为秒。 | `10` |
| `unpacked_filename` | Monitor支持tarfile打包的远程模型。如果远程模型是打包格式,则需要设置该选项来告知Monitor解压后的文件名。 | `None` |
+| `debug` | 如果添加`--debug`选项,则输出更详细的中间信息。 | 默认不添加该选项 |
下面通过HDFSMonitor示例来展示Paddle Serving的模型热加载功能。
@@ -148,15 +151,46 @@ python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --po
```shell
python -m paddle_serving_server.monitor \
---type='hdfs' --hdfs_bin='/hadoop-3.1.2/bin/hdfs' --remote_path='/' \
---remote_model_name='uci_housing.tar.gz' --remote_donefile_name='donefile' \
---local_path='.' --local_model_name='uci_housing_model' \
---local_timestamp_file='fluid_time_file' --local_tmp_path='_tmp' \
---unpacked_filename='uci_housing_model'
+ --type='hdfs' --hdfs_bin='/hadoop-3.1.2/bin/hdfs' --remote_path='/' \
+ --remote_model_name='uci_housing.tar.gz' --remote_donefile_name='donefile' \
+ --local_path='.' --local_model_name='uci_housing_model' \
+ --local_timestamp_file='fluid_time_file' --local_tmp_path='_tmp' \
+ --unpacked_filename='uci_housing_model' --debug
```
上面代码通过轮询方式监控远程HDFS地址`/`的时间戳文件`/donefile`,当时间戳变更则认为远程模型已经更新,将远程打包模型`/uci_housing.tar.gz`拉取到本地临时路径`./_tmp/uci_housing.tar.gz`下,解包出模型文件`./_tmp/uci_housing_model`后,更新本地模型`./uci_housing_model`以及Paddle Serving的时间戳文件`./uci_housing_model/fluid_time_file`。
+预计输出如下:
+
+```shell
+2020-04-02 08:38 INFO [monitor.py:85] _hdfs_bin: /hadoop-3.1.2/bin/hdfs
+2020-04-02 08:38 INFO [monitor.py:244] HDFS prefix cmd: /hadoop-3.1.2/bin/hdfs dfs
+2020-04-02 08:38 INFO [monitor.py:85] _remote_path: /
+2020-04-02 08:38 INFO [monitor.py:85] _remote_model_name: uci_housing.tar.gz
+2020-04-02 08:38 INFO [monitor.py:85] _remote_donefile_name: donefile
+2020-04-02 08:38 INFO [monitor.py:85] _local_model_name: uci_housing_model
+2020-04-02 08:38 INFO [monitor.py:85] _local_path: .
+2020-04-02 08:38 INFO [monitor.py:85] _local_timestamp_file: fluid_time_file
+2020-04-02 08:38 INFO [monitor.py:85] _local_tmp_path: _tmp
+2020-04-02 08:38 INFO [monitor.py:85] _interval: 10
+2020-04-02 08:38 DEBUG [monitor.py:249] check cmd: /hadoop-3.1.2/bin/hdfs dfs -stat "%Y" /donefile
+2020-04-02 08:38 DEBUG [monitor.py:251] resp: 1585816693193
+2020-04-02 08:38 INFO [monitor.py:138] doneilfe(donefile) changed.
+2020-04-02 08:38 DEBUG [monitor.py:261] pull cmd: /hadoop-3.1.2/bin/hdfs dfs -get -f /uci_housing.tar.gz _tmp
+2020-04-02 08:38 INFO [monitor.py:144] pull remote model(uci_housing.tar.gz).
+2020-04-02 08:38 INFO [monitor.py:98] unpack remote file(uci_housing.tar.gz).
+2020-04-02 08:38 DEBUG [monitor.py:108] remove packed file(uci_housing.tar.gz).
+2020-04-02 08:38 INFO [monitor.py:110] using unpacked filename: uci_housing_model.
+2020-04-02 08:38 DEBUG [monitor.py:175] update model cmd: cp -r _tmp/uci_housing_model/* ./uci_housing_model
+2020-04-02 08:38 INFO [monitor.py:152] update local model(uci_housing_model).
+2020-04-02 08:38 DEBUG [monitor.py:184] update timestamp cmd: touch ./uci_housing_model/fluid_time_file
+2020-04-02 08:38 INFO [monitor.py:157] update model timestamp(fluid_time_file).
+2020-04-02 08:38 INFO [monitor.py:161] sleep 10s.
+2020-04-02 08:38 DEBUG [monitor.py:249] check cmd: /hadoop-3.1.2/bin/hdfs dfs -stat "%Y" /donefile
+2020-04-02 08:38 DEBUG [monitor.py:251] resp: 1585816693193
+2020-04-02 08:38 INFO [monitor.py:161] sleep 10s.
+```
+
#### 查看Server日志
通过下面命令查看Server的运行日志:
diff --git a/doc/IMDB_GO_CLIENT.md b/doc/IMDB_GO_CLIENT.md
index 5b10192597f393d65f1387bfb39615e1777ec2d6..5befc0226235dd599b980d98594dba78e54bf530 100644
--- a/doc/IMDB_GO_CLIENT.md
+++ b/doc/IMDB_GO_CLIENT.md
@@ -1,5 +1,7 @@
# How to use Go Client of Paddle Serving
+([简体中文](./IMDB_GO_CLIENT_CN.md)|English)
+
This document shows how to use Go as your client language. For Go client in Paddle Serving, a simple client package is provided https://github.com/PaddlePaddle/Serving/tree/develop/go/serving_client, a user can import this package as needed. Here is a simple example of sentiment analysis task based on IMDB dataset.
### Install
@@ -15,7 +17,7 @@ pip install paddle-serving-server
### Download Text Classification Model
``` shell
-wget https://paddle-serving.bj.bcebos.com/data%2Ftext_classification%2Fimdb_serving_example.tar.gz
+wget https://paddle-serving.bj.bcebos.com/data/text_classification/imdb_serving_example.tar.gz
tar -xzf imdb_serving_example.tar.gz
```
diff --git a/doc/IMDB_GO_CLIENT_CN.md b/doc/IMDB_GO_CLIENT_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..d14abe647038846aeeeebf9484f1c02e151b4275
--- /dev/null
+++ b/doc/IMDB_GO_CLIENT_CN.md
@@ -0,0 +1,194 @@
+# 如何在Paddle Serving使用Go Client
+
+(简体中文|[English](./IMDB_GO_CLIENT.md))
+
+本文档说明了如何将Go用作客户端语言。对于Paddle Serving中的Go客户端,提供了一个简单的客户端程序包https://github.com/PaddlePaddle/Serving/tree/develop/go/serving_client, 用户可以根据需要引用该程序包。这是一个基于IMDB数据集的情感分析任务的简单示例。
+
+### 安装
+
+我们假设您已经安装了Go 1.9.2或更高版本,并且安装了python 2.7版本
+
+```shell
+go get github.com/PaddlePaddle/Serving/go/serving_client
+go get github.com/PaddlePaddle/Serving/go/proto
+pip install paddle-serving-server
+```
+### 下载文本分类模型
+
+```shell
+wget https://paddle-serving.bj.bcebos.com/data/text_classification/imdb_serving_example.tar.gz
+tar -xzf imdb_serving_example.tar.gz
+```
+
+### 服务器端代码
+
+```python
+# test_server_go.py
+import os
+import sys
+from paddle_serving_server import OpMaker
+from paddle_serving_server import OpSeqMaker
+from paddle_serving_server import Server
+
+op_maker = OpMaker()
+read_op = op_maker.create('general_text_reader')
+general_infer_op = op_maker.create('general_infer')
+general_response_op = op_maker.create('general_text_response')
+
+op_seq_maker = OpSeqMaker()
+op_seq_maker.add_op(read_op)
+op_seq_maker.add_op(general_infer_op)
+op_seq_maker.add_op(general_response_op)
+
+server = Server()
+server.set_op_sequence(op_seq_maker.get_op_sequence())
+server.load_model_config(sys.argv[1])
+server.prepare_server(workdir="work_dir1", port=9292, device="cpu")
+server.run_server()
+```
+
+### 启动服务器
+
+```shell
+python test_server_go.py ./serving_server_model/ 9292
+```
+
+### 客户端代码示例
+
+```go
+// imdb_client.go
+package main
+
+import (
+    "bufio"
+    "fmt"
+    "io"
+    "os"
+    "strconv"
+    "strings"
+
+    serving_client "github.com/PaddlePaddle/Serving/go/serving_client"
+)
+
+func main() {
+    var config_file_path string
+    config_file_path = os.Args[1]
+    handle := serving_client.LoadModelConfig(config_file_path)
+    handle = serving_client.Connect("127.0.0.1", "9292", handle)
+
+    test_file_path := os.Args[2]
+    fi, err := os.Open(test_file_path)
+    if err != nil {
+        fmt.Print(err)
+    }
+
+    defer fi.Close()
+    br := bufio.NewReader(fi)
+
+    fetch := []string{"cost", "acc", "prediction"}
+
+    var result map[string][]float32
+
+    for {
+        line, err := br.ReadString('\n')
+        if err == io.EOF {
+            break
+        }
+
+        line = strings.Trim(line, "\n")
+
+        var words = []int64{}
+
+        s := strings.Split(line, " ")
+        value, err := strconv.Atoi(s[0])
+        var feed_int_map map[string][]int64
+
+        for _, v := range s[1 : value+1] {
+            int_v, _ := strconv.Atoi(v)
+            words = append(words, int64(int_v))
+        }
+
+        label, err := strconv.Atoi(s[len(s)-1])
+
+        if err != nil {
+            panic(err)
+        }
+
+        feed_int_map = map[string][]int64{}
+        feed_int_map["words"] = words
+        feed_int_map["label"] = []int64{int64(label)}
+
+        result = serving_client.Predict(handle, feed_int_map, fetch)
+        fmt.Println(result["prediction"][1], "\t", int64(label))
+    }
+}
+```
+
+### 基于IMDB测试集的预测
+
+```shell
+go run imdb_client.go serving_client_conf/serving_client_conf.stream.prototxt test.data > result
+```
+
+### 计算精度
+
+```go
+// acc.go
+package main
+
+import (
+    "bufio"
+    "fmt"
+    "io"
+    "os"
+    "strconv"
+    "strings"
+)
+
+func main() {
+    score_file := os.Args[1]
+    fi, err := os.Open(score_file)
+    if err != nil {
+        fmt.Print(err)
+    }
+
+    defer fi.Close()
+    br := bufio.NewReader(fi)
+
+    total := int(0)
+    acc := int(0)
+    for {
+        line, err := br.ReadString('\n')
+        if err == io.EOF {
+            break
+        }
+
+        line = strings.Trim(line, "\n")
+        s := strings.Split(line, "\t")
+        prob_str := strings.Trim(s[0], " ")
+        label_str := strings.Trim(s[1], " ")
+        prob, err := strconv.ParseFloat(prob_str, 32)
+        if err != nil {
+            panic(err)
+        }
+        label, err := strconv.ParseFloat(label_str, 32)
+        if err != nil {
+            panic(err)
+        }
+        if (prob-0.5)*(label-0.5) > 0 {
+            acc++
+        }
+        total++
+    }
+    fmt.Println("total num:", total)
+    fmt.Println("acc num:", acc)
+    fmt.Println("acc:", float32(acc)/float32(total))
+}
+```
+
+```shell
+go run acc.go result
+total num: 25000
+acc num: 22014
+acc: 0.88056
+```
diff --git a/doc/NEW_OPERATOR.md b/doc/NEW_OPERATOR.md
index f839be94aaa2ae9993d935c0af69bcde33b9d66f..ab1ff42adea44eec26e84bd4356bc4313d420ce2 100644
--- a/doc/NEW_OPERATOR.md
+++ b/doc/NEW_OPERATOR.md
@@ -1,5 +1,7 @@
# How to write an general operator?
+([简体中文](./NEW_OPERATOR_CN.md)|English)
+
In this document, we mainly focus on how to develop a new server side operator for PaddleServing. Before we start to write a new operator, let's look at some sample code to get the basic idea of writing a new operator for server. We assume you have known the basic computation logic on server side of PaddleServing, please reference to []() if you do not know much about it. The following code can be visited at `core/general-server/op` of Serving repo.
``` c++
diff --git a/doc/NEW_OPERATOR_CN.md b/doc/NEW_OPERATOR_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..d659b5f328cfbfc48ec7f3016037b12f34139b73
--- /dev/null
+++ b/doc/NEW_OPERATOR_CN.md
@@ -0,0 +1,149 @@
+# 如何开发一个新的General Op?
+
+(简体中文|[English](./NEW_OPERATOR.md))
+
+在本文档中,我们主要集中于如何为Paddle Serving开发新的服务器端运算符。 在开始编写新运算符之前,让我们看一些示例代码以获得为服务器编写新运算符的基本思想。 我们假设您已经知道Paddle Serving服务器端的基本计算逻辑。 下面的代码您可以在 Serving代码库下的 `core/general-server/op` 目录查阅。
+
+
+``` c++
+// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+#include
+
+imdb_reader.py
@@ -102,17 +102,17 @@ class IMDBDataset(dg.MultiSlotDataGenerator):
```
nets.py
@@ -156,7 +156,7 @@ def cnn_net(data,
local_train.py
@@ -172,7 +172,7 @@ logging.basicConfig(format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger("fluid")
logger.setLevel(logging.INFO)
-# 加载词典文件
+# load dict file
def load_vocab(filename):
vocab = {}
with open(filename) as f:
@@ -190,11 +190,11 @@ if __name__ == "__main__":
vocab = load_vocab('imdb.vocab')
dict_dim = len(vocab)
- #定义模型输入
+ #define model input
data = fluid.layers.data(
name="words", shape=[1], dtype="int64", lod_level=1)
label = fluid.layers.data(name="label", shape=[1], dtype="int64")
- #定义dataset,train_data为训练数据目录
+ #define dataset,train_data is the dataset directory
dataset = fluid.DatasetFactory().create_dataset()
filelist = ["train_data/%s" % x for x in os.listdir("train_data")]
dataset.set_use_var([data, label])
@@ -203,11 +203,11 @@ if __name__ == "__main__":
dataset.set_batch_size(4)
dataset.set_filelist(filelist)
dataset.set_thread(10)
- #定义模型
+ #define model
avg_cost, acc, prediction = cnn_net(data, label, dict_dim)
optimizer = fluid.optimizer.SGD(learning_rate=0.001)
optimizer.minimize(avg_cost)
- #执行训练
+ #execute training
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
epochs = 100
@@ -219,7 +219,7 @@ if __name__ == "__main__":
program=fluid.default_main_program(), dataset=dataset, debug=False)
logger.info("TRAIN --> pass: {}".format(i))
if i == 64:
- #在训练结束时使用PaddleServing中的模型保存接口保存出Serving所需的模型和配置文件
+ #At the end of training, use the model save interface in PaddleServing to save the models and configuration files required by Serving
serving_io.save_model("{}_model".format(model_name),
"{}_client_conf".format(model_name),
{"words": data}, {"prediction": prediction},
@@ -228,32 +228,32 @@ if __name__ == "__main__":
test_client.py
@@ -267,7 +267,7 @@ client = Client()
client.load_client_config(sys.argv[1])
client.connect(["127.0.0.1:9292"])
-#在这里复用了数据预处理部分的代码将原始文本转换成数字id
+#The code of the data preprocessing part is reused here to convert the original text into a numeric id
imdb_dataset = IMDBDataset()
imdb_dataset.load_resource(sys.argv[2])
@@ -281,30 +281,29 @@ for line in sys.stdin:
text_clssify_service.py
@@ -313,7 +312,7 @@ from paddle_serving_server.web_service import WebService
from imdb_reader import IMDBDataset
import sys
-#继承框架中的WebService类
+#extend class WebService
class IMDBService(WebService):
def prepare_dict(self, args={}):
if len(args) == 0:
@@ -321,7 +320,7 @@ class IMDBService(WebService):
self.dataset = IMDBDataset()
self.dataset.load_resource(args["dict_file_path"])
- #重写preprocess方法来实现数据预处理,这里也复用了训练时使用的reader脚本
+    #override preprocess() to implement data preprocessing; here we reuse the reader script used in training
def preprocess(self, feed={}, fetch=[]):
if "words" not in feed:
exit(-1)
@@ -329,7 +328,7 @@ class IMDBService(WebService):
res_feed["words"] = self.dataset.get_words_only(feed["words"])[0]
return res_feed, fetch
-#这里需要使用name参数指定预测服务的名称,
+#Here you need to use the name parameter to specify the name of the prediction service.
imdb_service = IMDBService(name="imdb")
imdb_service.load_model_config(sys.argv[1])
imdb_service.prepare_server(
@@ -339,24 +338,24 @@ imdb_service.run_server()
```
imdb_reader.py
+
+```python
+import sys
+import os
+import paddle
+import re
+import paddle.fluid.incubate.data_generator as dg
+
+
+class IMDBDataset(dg.MultiSlotDataGenerator):
+ def load_resource(self, dictfile):
+ self._vocab = {}
+ wid = 0
+ with open(dictfile) as f:
+ for line in f:
+ self._vocab[line.strip()] = wid
+ wid += 1
+ self._unk_id = len(self._vocab)
+ self._pattern = re.compile(r'(;|,|\.|\?|!|\s|\(|\))')
+ self.return_value = ("words", [1, 2, 3, 4, 5, 6]), ("label", [0])
+
+ def get_words_only(self, line):
+        sent = line.lower().replace("<br />", " ").strip()
+ words = [x for x in self._pattern.split(sent) if x and x != " "]
+ feas = [
+ self._vocab[x] if x in self._vocab else self._unk_id for x in words
+ ]
+ return feas
+
+ def get_words_and_label(self, line):
+        send = '|'.join(line.split('|')[:-1]).lower().replace("<br />",
+                                                              " ").strip()
+ label = [int(line.split('|')[-1])]
+
+ words = [x for x in self._pattern.split(send) if x and x != " "]
+ feas = [
+ self._vocab[x] if x in self._vocab else self._unk_id for x in words
+ ]
+ return feas, label
+
+ def infer_reader(self, infer_filelist, batch, buf_size):
+ def local_iter():
+ for fname in infer_filelist:
+ with open(fname, "r") as fin:
+ for line in fin:
+ feas, label = self.get_words_and_label(line)
+ yield feas, label
+
+ import paddle
+ batch_iter = paddle.batch(
+ paddle.reader.shuffle(
+ local_iter, buf_size=buf_size),
+ batch_size=batch)
+ return batch_iter
+
+ def generate_sample(self, line):
+ def memory_iter():
+ for i in range(1000):
+ yield self.return_value
+
+ def data_iter():
+ feas, label = self.get_words_and_label(line)
+ yield ("words", feas), ("label", label)
+
+ return data_iter
+```
+nets.py
+
+```python
+import sys
+import time
+import numpy as np
+
+import paddle
+import paddle.fluid as fluid
+
+def cnn_net(data,
+ label,
+ dict_dim,
+ emb_dim=128,
+ hid_dim=128,
+ hid_dim2=96,
+ class_dim=2,
+ win_size=3):
+ """ conv net. """
+ emb = fluid.layers.embedding(
+ input=data, size=[dict_dim, emb_dim], is_sparse=True)
+
+ conv_3 = fluid.nets.sequence_conv_pool(
+ input=emb,
+ num_filters=hid_dim,
+ filter_size=win_size,
+ act="tanh",
+ pool_type="max")
+
+ fc_1 = fluid.layers.fc(input=[conv_3], size=hid_dim2)
+
+ prediction = fluid.layers.fc(input=[fc_1], size=class_dim, act="softmax")
+ cost = fluid.layers.cross_entropy(input=prediction, label=label)
+ avg_cost = fluid.layers.mean(x=cost)
+ acc = fluid.layers.accuracy(input=prediction, label=label)
+
+ return avg_cost, acc, prediction
+```
+
+local_train.py
+
+```python
+import os
+import sys
+import paddle
+import logging
+import paddle.fluid as fluid
+
+logging.basicConfig(format='%(asctime)s - %(levelname)s - %(message)s')
+logger = logging.getLogger("fluid")
+logger.setLevel(logging.INFO)
+
+# 加载词典文件
+def load_vocab(filename):
+ vocab = {}
+ with open(filename) as f:
+ wid = 0
+ for line in f:
+ vocab[line.strip()] = wid
+ wid += 1
+ vocab["test_client.py
+
+```python
+from paddle_serving_client import Client
+from imdb_reader import IMDBDataset
+import sys
+
+client = Client()
+client.load_client_config(sys.argv[1])
+client.connect(["127.0.0.1:9292"])
+
+#在这里复用了数据预处理部分的代码将原始文本转换成数字id
+imdb_dataset = IMDBDataset()
+imdb_dataset.load_resource(sys.argv[2])
+
+for line in sys.stdin:
+ word_ids, label = imdb_dataset.get_words_and_label(line)
+ feed = {"words": word_ids}
+ fetch = ["acc", "cost", "prediction"]
+ fetch_map = client.predict(feed=feed, fetch=fetch)
+ print("{} {}".format(fetch_map["prediction"][1], label[0]))
+```
+
+text_clssify_service.py
+
+```python
+from paddle_serving_server.web_service import WebService
+from imdb_reader import IMDBDataset
+import sys
+
+#继承框架中的WebService类
+class IMDBService(WebService):
+ def prepare_dict(self, args={}):
+ if len(args) == 0:
+ exit(-1)
+ self.dataset = IMDBDataset()
+ self.dataset.load_resource(args["dict_file_path"])
+
+ #重写preprocess方法来实现数据预处理,这里也复用了训练时使用的reader脚本
+ def preprocess(self, feed={}, fetch=[]):
+ if "words" not in feed:
+ exit(-1)
+ res_feed = {}
+ res_feed["words"] = self.dataset.get_words_only(feed["words"])[0]
+ return res_feed, fetch
+
+#这里需要使用name参数指定预测服务的名称,
+imdb_service = IMDBService(name="imdb")
+imdb_service.load_model_config(sys.argv[1])
+imdb_service.prepare_server(
+ workdir=sys.argv[2], port=int(sys.argv[3]), device="cpu")
+imdb_service.prepare_dict({"dict_file_path": sys.argv[4]})
+imdb_service.run_server()
+```
+