diff --git a/README.md b/README.md
index 363213e94375cfa3f9ab0fbd48a10428a08ccf16..adc2e8d4454d1584907d23dbda24087a603c68fc 100644
--- a/README.md
+++ b/README.md
@@ -18,19 +18,19 @@
Motivation
-We consider deploying deep learning inference service online to be a user-facing application in the future. **The goal of this project**: When you have trained a deep neural net with [Paddle](https://github.com/PaddlePaddle/Paddle), you can put the model online without much effort. A demo of serving is as follows:
+We consider deploying deep learning inference service online to be a user-facing application in the future. **The goal of this project**: When you have trained a deep neural net with [Paddle](https://github.com/PaddlePaddle/Paddle), you can also easily deploy the model online. A demo of Paddle Serving is as follows:
Some Key Features
-- Integrate with Paddle training pipeline seemlessly, most paddle models can be deployed **with one line command**.
+- Integrate with Paddle training pipeline seamlessly; most Paddle models can be deployed **with one line command**.
- **Industrial serving features** supported, such as models management, online loading, online A/B testing etc.
-- **Distributed Key-Value indexing** supported that is especially useful for large scale sparse features as model inputs.
-- **Highly concurrent and efficient communication** between clients and servers.
-- **Multiple programming languages** supported on client side, such as Golang, C++ and python
-- **Extensible framework design** that can support model serving beyond Paddle.
+- **Distributed Key-Value indexing** supported, which is especially useful for large-scale sparse features as model inputs.
+- **Highly concurrent and efficient communication** between clients and servers supported.
+- **Multiple programming languages** supported on client side, such as Golang, C++ and python.
+- **Extensible framework design** that can support model serving beyond Paddle.
Installation
@@ -53,7 +53,7 @@ Paddle Serving provides HTTP and RPC based service for users to access
### HTTP service
-Paddle Serving provides a built-in python module called `paddle_serving_server.serve` that can start a rpc service or a http service with one-line command. If we specify the argument `--name uci`, it means that we will have a HTTP service with a url of `$IP:$PORT/uci/prediction`
+Paddle Serving provides a built-in python module called `paddle_serving_server.serve` that can start an RPC service or an HTTP service with a one-line command. If we specify the argument `--name uci`, it means that we will have an HTTP service with a URL of `$IP:$PORT/uci/prediction`
``` shell
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292 --name uci
```
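+
+As a sketch, the same prediction request can also be sent from Python with the `requests` library (a hypothetical client-side snippet, equivalent to the curl command used in this section):
+
+``` python
+# Hypothetical example: send the UCI housing sample to the HTTP service
+# started above and print the predicted price.
+import requests
+
+data = {
+    "x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583,
+          -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332],
+    "fetch": ["price"]
+}
+r = requests.post("http://127.0.0.1:9292/uci/prediction", json=data)
+print(r.json())
+```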
@@ -76,7 +76,7 @@ curl -H "Content-Type:application/json" -X POST -d '{"x": [0.0137, -0.1136, 0.25
### RPC service
-A user can also start a rpc service with `paddle_serving_server.serve`. RPC service is usually faster than HTTP service, although a user needs to do some coding based on Paddle Serving's python client API. Note that we do not specify `--name` here.
+A user can also start an RPC service with `paddle_serving_server.serve`. RPC service is usually faster than HTTP service, although a user needs to do some coding based on Paddle Serving's Python client API. Note that we do not specify `--name` here.
``` shell
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292
```
@@ -232,14 +232,14 @@ curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serv
### New to Paddle Serving
- [How to save a servable model?](doc/SAVE.md)
-- [An end-to-end tutorial from training to serving(Chinese)](doc/TRAIN_TO_SERVICE.md)
-- [Write Bert-as-Service in 10 minutes(Chinese)](doc/BERT_10_MINS.md)
+- [An end-to-end tutorial from training to inference service deployment](doc/TRAIN_TO_SERVICE.md)
+- [Write Bert-as-Service in 10 minutes](doc/BERT_10_MINS.md)
### Developers
- [How to config Serving native operators on server side?](doc/SERVER_DAG.md)
-- [How to develop a new Serving operator](doc/NEW_OPERATOR.md)
+- [How to develop a new Serving operator?](doc/NEW_OPERATOR.md)
- [Golang client](doc/IMDB_GO_CLIENT.md)
-- [Compile from source code(Chinese)](doc/COMPILE.md)
+- [Compile from source code](doc/COMPILE.md)
### About Efficiency
- [How to profile Paddle Serving latency?](python/examples/util)
@@ -251,8 +251,7 @@ curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serv
### Design
-- [Design Doc(Chinese)](doc/DESIGN_DOC.md)
-- [Design Doc(English)](doc/DESIGN_DOC_EN.md)
+- [Design Doc](doc/DESIGN_DOC.md)
Community
diff --git a/README_CN.md b/README_CN.md
index 5912405250d91cffe12a2d427218900435737e0a..ddb06309a36337a1230bbbc7d9612ce7a407b2d2 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -1,18 +1,31 @@
-
-[![Build Status](https://img.shields.io/travis/com/PaddlePaddle/Serving/develop)](https://travis-ci.com/PaddlePaddle/Serving)
-[![Release](https://img.shields.io/badge/Release-0.0.3-yellowgreen)](Release)
-[![Issues](https://img.shields.io/github/issues/PaddlePaddle/Serving)](Issues)
-[![License](https://img.shields.io/github/license/PaddlePaddle/Serving)](LICENSE)
-[![Slack](https://img.shields.io/badge/Join-Slack-green)](https://paddleserving.slack.com/archives/CU0PB4K35)
+
动机
+
+Paddle Serving 旨在帮助深度学习开发者轻易部署在线预测服务。 **本项目目标**: 当用户使用 [Paddle](https://github.com/PaddlePaddle/Paddle) 训练了一个深度神经网络,就同时拥有了该模型的预测服务。
-## 动机
-Paddle Serving 帮助深度学习开发者轻易部署在线预测服务。 **本项目目标**: 只要你使用 [Paddle](https://github.com/PaddlePaddle/Paddle) 训练了一个深度神经网络,你就同时拥有了该模型的预测服务。
-## 核心功能
+核心功能
+
- 与Paddle训练紧密连接,绝大部分Paddle模型可以 **一键部署**.
- 支持 **工业级的服务能力** 例如模型管理,在线加载,在线A/B测试等.
- 支持 **分布式键值对索引** 助力于大规模稀疏特征作为模型输入.
@@ -20,7 +33,7 @@ Paddle Serving 帮助深度学习开发者轻易部署在线预测服务。 **
- 支持 **多种编程语言** 开发客户端,例如Golang,C++和Python.
- **可伸缩框架设计** 可支持不限于Paddle的模型服务.
-## 安装
+安装
强烈建议您在Docker内构建Paddle Serving,请查看[如何在Docker中运行PaddleServing](doc/RUN_IN_DOCKER_CN.md)
@@ -29,7 +42,9 @@ pip install paddle-serving-client
pip install paddle-serving-server
```
-## 快速启动示例
+快速启动示例
+
+波士顿房价预测
``` shell
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/uci_housing.tar.gz
@@ -71,9 +86,42 @@ curl -H "Content-Type:application/json" -X POST -d '{"x": [0.0137, -0.1136, 0.25
python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292
```
-Python客户端请求
+Paddle Serving 为用户提供了基于 HTTP 和 RPC 的服务
+
+
+HTTP服务
+
+Paddle Serving提供了一个名为`paddle_serving_server.serve`的内置python模块,可以使用单行命令启动RPC服务或HTTP服务。如果我们指定参数`--name uci`,则意味着我们将拥有一个HTTP服务,其URL为`$IP:$PORT/uci/prediction`。
+
+``` shell
+python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292 --name uci
+```
+
+
+| Argument | Type | Default | Description |
+|--------------|------|-----------|--------------------------------|
+| `thread` | int | `4` | Concurrency of current service |
+| `port` | int | `9292` | Exposed port of current service to users|
+| `name` | str | `""` | Service name, can be used to generate HTTP request url |
+| `model` | str | `""` | Path of paddle model directory to be served |
+
+我们使用 `curl` 命令来发送HTTP POST请求给刚刚启动的服务。用户也可以调用python库来发送HTTP POST请求,请参考英文文档 [requests](https://requests.readthedocs.io/en/master/)。
+
+
+``` shell
+curl -H "Content-Type:application/json" -X POST -d '{"x": [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727, -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332], "fetch":["price"]}' http://127.0.0.1:9292/uci/prediction
+```
+
+RPC服务
+
+用户还可以使用`paddle_serving_server.serve`启动RPC服务。 尽管用户需要基于Paddle Serving的python客户端API进行一些开发,但是RPC服务通常比HTTP服务更快。需要指出的是这里我们没有指定`--name`。
+
+``` shell
+python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292
+```
``` python
+# A user can visit rpc service through paddle_serving_client API
from paddle_serving_client import Client
client = Client()
@@ -241,23 +289,23 @@ curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serv
### FAQ
- [常见问答](doc/deprecated/FAQ.md)
-## 文档
+### 设计文档
+- [Paddle Serving设计文档](doc/DESIGN_DOC_CN.md)
-[开发文档](doc/DESIGN.md)
+社区
-[如何在服务器端配置本地Op?](doc/SERVER_DAG.md)
+### Slack
-[如何开发一个新的Op?](doc/NEW_OPERATOR.md)
+想要同开发者和其他用户沟通吗?欢迎加入我们的 [Slack channel](https://paddleserving.slack.com/archives/CUBPKHKMJ)
-[Golang 客户端](doc/IMDB_GO_CLIENT.md)
+### 贡献代码
-[从源码编译](doc/COMPILE.md)
+如果您想为Paddle Serving贡献代码,请参考 [Contribution Guidelines](doc/CONTRIBUTE.md)
-[常见问答](doc/FAQ.md)
+### 反馈
-## 加入社区
-如果您想要联系其他用户和开发者,欢迎加入我们的 [Slack channel](https://paddleserving.slack.com/archives/CUBPKHKMJ)
+如有任何反馈或是bug,请在 [GitHub Issue](https://github.com/PaddlePaddle/Serving/issues)提交
-## 如何贡献代码
+### License
-如果您想要贡献代码给Paddle Serving,请参考[Contribution Guidelines](doc/CONTRIBUTE.md)
+[Apache 2.0 License](https://github.com/PaddlePaddle/Serving/blob/develop/LICENSE)
diff --git a/doc/BERT_10_MINS.md b/doc/BERT_10_MINS.md
index 706c14bb116beca04eca94c27cc4f15f424ef0de..e668b3207c5228309d131e2353e815d26c8d4625 100644
--- a/doc/BERT_10_MINS.md
+++ b/doc/BERT_10_MINS.md
@@ -1,11 +1,14 @@
-## 十分钟构建Bert-As-Service
+## Build Bert-As-Service in 10 minutes
-Bert-As-Service的目标是给定一个句子,服务可以将句子表示成一个语义向量返回给用户。[Bert模型](https://arxiv.org/abs/1810.04805)是目前NLP领域的热门模型,在多种公开的NLP任务上都取得了很好的效果,使用Bert模型计算出的语义向量来做其他NLP模型的输入对提升模型的表现也有很大的帮助。Bert-As-Service可以让用户很方便地获取文本的语义向量表示并应用到自己的任务中。为了实现这个目标,我们通过四个步骤说明使用Paddle Serving在十分钟内就可以搭建一个这样的服务。示例中所有的代码和文件均可以在Paddle Serving的[示例](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert)中找到。
+([简体中文](./BERT_10_MINS_CN.md)|English)
-#### Step1:保存可服务模型
+The goal of Bert-As-Service is that, given a sentence, the service can represent the sentence as a semantic vector and return it to the user. The [Bert model](https://arxiv.org/abs/1810.04805) is a popular model in the current NLP field and has achieved good results on a variety of public NLP tasks. Using the semantic vectors calculated by the Bert model as input to other NLP models can also greatly improve their performance. Bert-As-Service allows users to easily obtain the semantic vector representation of text and apply it to their own tasks. To achieve this goal, we show in four steps how such a service can be built with Paddle Serving in ten minutes. All the code and files in the example can be found in the [example](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert) of Paddle Serving.
-Paddle Serving支持基于Paddle进行训练的各种模型,并通过指定模型的输入和输出变量来保存可服务模型。为了方便,我们可以从paddlehub加载一个已经训练好的bert中文模型,并利用两行代码保存一个可部署的服务,服务端和客户端的配置分别放在`bert_seq20_model`和`bert_seq20_client`文件夹。
+#### Step1: Save the serviceable model
+Paddle Serving supports various models trained with Paddle, and saves the serviceable model by specifying the input and output variables of the model. For convenience, we can load a trained Bert Chinese model from PaddleHub and save a deployable service with two lines of code. The server and client configurations are placed in the `bert_seq20_model` and `bert_seq20_client` folders, respectively.
+
+[//file]:#bert_10.py
``` python
import paddlehub as hub
model_name = "bert_chinese_L-12_H-768_A-12"
@@ -23,33 +26,35 @@ serving_io.save_model("bert_seq20_model", "bert_seq20_client",
feed_dict, fetch_dict, program)
```
-#### Step2:启动服务
+#### Step2: Launch Service
+[//file]:#server.sh
``` shell
python -m paddle_serving_server_gpu.serve --model bert_seq20_model --thread 10 --port 9292 --gpu_ids 0
```
+| Parameters | Meaning |
+| ---------- | ---------------------------------------- |
+| model | server configuration and model file path |
+| thread | server-side threads |
+| port | server port number |
+| gpu_ids | GPU index number |
-| 参数 | 含义 |
-| ------- | -------------------------- |
-| model | server端配置与模型文件路径 |
-| thread | server端线程数 |
-| port | server端端口号 |
-| gpu_ids | GPU索引号 |
-
-#### Step3:客户端数据预处理逻辑
+#### Step3: Data Preprocessing Logic on Client Side
-Paddle Serving内建了很多经典典型对应的数据预处理逻辑,对于中文Bert语义表示的计算,我们采用paddle_serving_app下的ChineseBertReader类进行数据预处理,开发者可以很容易获得一个原始的中文句子对应的多个模型输入字段。
+Paddle Serving has built-in data preprocessing logic for many classical tasks. For the calculation of the Chinese Bert semantic representation, we use the ChineseBertReader class under paddle_serving_app for data preprocessing, so that developers can easily obtain the multiple model input fields corresponding to a raw Chinese sentence.
-安装paddle_serving_app
+Install paddle_serving_app
+[//file]:#pip_app.sh
```shell
pip install paddle_serving_app
```
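+
+As a quick sketch of what the reader produces (the sample sentence below is only an illustration), each raw Chinese sentence is converted into the model input fields expected by the Bert model saved in Step1:
+
+``` python
+# Illustrative snippet: preprocess one sentence and inspect the generated
+# model input fields (e.g. input_ids, position_ids, segment_ids, input_mask).
+from paddle_serving_app import ChineseBertReader
+
+reader = ChineseBertReader()
+feed_dict = reader.process("你好,欢迎使用Paddle Serving")  # hypothetical sample sentence
+print(feed_dict.keys())
+```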
-#### Step4:客户端访问
+#### Step4: Client Access to the Service
-客户端脚本 bert_client.py内容如下
+The content of the client-side script bert_client.py is as follows:
+[//file]:#bert_client.py
``` python
import os
import sys
@@ -68,16 +73,33 @@ for line in sys.stdin:
result = client.predict(feed=feed_dict, fetch=fetch)
```
-执行
+Run
+[//file]:#bert_10_cli.sh
```shell
cat data.txt | python bert_client.py
```
-从data.txt文件中读取样例,并将结果打印到标准输出。
+This reads samples from data.txt and prints the results to the standard output.
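+
+To make the printed result explicit, the loop in `bert_client.py` can be extended as in this sketch (assuming the `pooled_output` fetch key defined above):
+
+``` python
+# Sketch of extending the loop in bert_client.py:
+for line in sys.stdin:
+    feed_dict = reader.process(line)
+    result = client.predict(feed=feed_dict, fetch=fetch)
+    # "pooled_output" holds the sentence-level semantic vector for this line
+    print(result["pooled_output"])
+```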
-### 性能测试
+### Benchmark
-我们基于V100对基于Padde Serving研发的Bert-As-Service的性能进行测试并与基于Tensorflow实现的Bert-As-Service进行对比,从用户配置的角度,采用相同的batch size和并发数进行压力测试,得到4块V100下的整体吞吐性能数据如下。
+We tested the performance of Bert-As-Service built on Paddle Serving on V100 GPUs and compared it with a Bert-As-Service implementation based on Tensorflow. From the perspective of user configuration, we used the same batch size and concurrency for the stress test. The overall throughput performance obtained under 4 V100s is as follows.
![4v100_bert_as_service_benchmark](4v100_bert_as_service_benchmark.png)
+
+
diff --git a/doc/BERT_10_MINS_CN.md b/doc/BERT_10_MINS_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..e7bca9d2885b421f46dcf7af74b8967fb6d8f2ec
--- /dev/null
+++ b/doc/BERT_10_MINS_CN.md
@@ -0,0 +1,85 @@
+## 十分钟构建Bert-As-Service
+
+(简体中文|[English](./BERT_10_MINS.md))
+
+Bert-As-Service的目标是给定一个句子,服务可以将句子表示成一个语义向量返回给用户。[Bert模型](https://arxiv.org/abs/1810.04805)是目前NLP领域的热门模型,在多种公开的NLP任务上都取得了很好的效果,使用Bert模型计算出的语义向量来做其他NLP模型的输入对提升模型的表现也有很大的帮助。Bert-As-Service可以让用户很方便地获取文本的语义向量表示并应用到自己的任务中。为了实现这个目标,我们通过四个步骤说明使用Paddle Serving在十分钟内就可以搭建一个这样的服务。示例中所有的代码和文件均可以在Paddle Serving的[示例](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert)中找到。
+
+#### Step1:保存可服务模型
+
+Paddle Serving支持基于Paddle进行训练的各种模型,并通过指定模型的输入和输出变量来保存可服务模型。为了方便,我们可以从paddlehub加载一个已经训练好的bert中文模型,并利用两行代码保存一个可部署的服务,服务端和客户端的配置分别放在`bert_seq20_model`和`bert_seq20_client`文件夹。
+
+``` python
+import paddlehub as hub
+model_name = "bert_chinese_L-12_H-768_A-12"
+module = hub.Module(model_name)
+inputs, outputs, program = module.context(
+ trainable=True, max_seq_len=20)
+feed_keys = ["input_ids", "position_ids", "segment_ids",
+ "input_mask", "pooled_output", "sequence_output"]
+fetch_keys = ["pooled_output", "sequence_output"]
+feed_dict = dict(zip(feed_keys, [inputs[x] for x in feed_keys]))
+fetch_dict = dict(zip(fetch_keys, [outputs[x] for x in fetch_keys]))
+
+import paddle_serving_client.io as serving_io
+serving_io.save_model("bert_seq20_model", "bert_seq20_client",
+ feed_dict, fetch_dict, program)
+```
+
+#### Step2:启动服务
+
+``` shell
+python -m paddle_serving_server_gpu.serve --model bert_seq20_model --thread 10 --port 9292 --gpu_ids 0
+```
+
+| 参数 | 含义 |
+| ------- | -------------------------- |
+| model | server端配置与模型文件路径 |
+| thread | server端线程数 |
+| port | server端端口号 |
+| gpu_ids | GPU索引号 |
+
+#### Step3:客户端数据预处理逻辑
+
+Paddle Serving内建了很多经典典型对应的数据预处理逻辑,对于中文Bert语义表示的计算,我们采用paddle_serving_app下的ChineseBertReader类进行数据预处理,开发者可以很容易获得一个原始的中文句子对应的多个模型输入字段。
+
+安装paddle_serving_app
+
+```shell
+pip install paddle_serving_app
+```
+
+#### Step4:客户端访问
+
+客户端脚本 bert_client.py内容如下
+
+``` python
+import os
+import sys
+from paddle_serving_client import Client
+from paddle_serving_app import ChineseBertReader
+
+reader = ChineseBertReader()
+fetch = ["pooled_output"]
+endpoint_list = ["127.0.0.1:9292"]
+client = Client()
+client.load_client_config("bert_seq20_client/serving_client_conf.prototxt")
+client.connect(endpoint_list)
+
+for line in sys.stdin:
+ feed_dict = reader.process(line)
+ result = client.predict(feed=feed_dict, fetch=fetch)
+```
+
+执行
+
+```shell
+cat data.txt | python bert_client.py
+```
+
+从data.txt文件中读取样例,并将结果打印到标准输出。
+
+### 性能测试
+
+我们基于V100对基于Padde Serving研发的Bert-As-Service的性能进行测试并与基于Tensorflow实现的Bert-As-Service进行对比,从用户配置的角度,采用相同的batch size和并发数进行压力测试,得到4块V100下的整体吞吐性能数据如下。
+
+![4v100_bert_as_service_benchmark](4v100_bert_as_service_benchmark.png)
diff --git a/doc/COMPILE.md b/doc/COMPILE.md
index 986a136a8102f5fc154d04461f289b08bb0415ae..e21599c15b19cddcc50230721dcd7baca692b8ab 100644
--- a/doc/COMPILE.md
+++ b/doc/COMPILE.md
@@ -1,33 +1,35 @@
-# 如何编译PaddleServing
+# How to compile PaddleServing
-## 编译环境设置
+([简体中文](./COMPILE_CN.md)|English)
+
+## Compilation environment requirements
- os: CentOS 6u3
-- gcc: 4.8.2及以上
-- go: 1.9.2及以上
-- git:2.17.1及以上
-- cmake:3.2.2及以上
-- python:2.7.2及以上
+- gcc: 4.8.2 and later
+- go: 1.9.2 and later
+- git: 2.17.1 and later
+- cmake: 3.2.2 and later
+- python: 2.7.2 and later
-推荐使用Docker准备Paddle Serving编译环境:[CPU Dockerfile.devel](../tools/Dockerfile.devel),[GPU Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
+It is recommended to use Docker to prepare the compilation environment for Paddle Serving: [CPU Dockerfile.devel](../tools/Dockerfile.devel), [GPU Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
-## 获取代码
+## Get Code
``` python
git clone https://github.com/PaddlePaddle/Serving
cd Serving && git submodule update --init --recursive
```
-## PYTHONROOT设置
+## PYTHONROOT Setting
```shell
-# 例如python的路径为/usr/bin/python,可以设置PYTHONROOT
+# For example, if the path of python is /usr/bin/python, you can set /usr as PYTHONROOT
export PYTHONROOT=/usr/
```
-## 编译Server部分
+## Compile Server
-### 集成CPU版本Paddle Inference Library
+### Integrate the CPU version of the Paddle Inference Library
``` shell
mkdir build && cd build
@@ -35,9 +37,9 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PY
make -j10
```
-可以执行`make install`把目标产出放在`./output`目录下,cmake阶段需添加`-DCMAKE_INSTALL_PREFIX=./output`选项来指定存放路径。
+You can execute `make install` to put the targets under the `./output` directory. To do so, you need to add `-DCMAKE_INSTALL_PREFIX=./output` to the cmake command shown above to specify the output path.
-### 集成GPU版本Paddle Inference Library
+### Integrate the GPU version of the Paddle Inference Library
``` shell
mkdir build && cd build
@@ -45,9 +47,9 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PY
make -j10
```
-执行`make install`可以把目标产出放在`./output`目录下。
+Execute `make install` to put the targets under the `./output` directory.
-## 编译Client部分
+## Compile Client
``` shell
mkdir build && cd build
@@ -55,9 +57,9 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PY
make -j10
```
-执行`make install`可以把目标产出放在`./output`目录下。
+Execute `make install` to put the targets under the `./output` directory.
-## 编译App部分
+## Compile the App
```bash
mkdir build && cd build
@@ -65,17 +67,18 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PY
make
```
-## 安装wheel包
+## Install wheel package
+
+For the Client, Server and App parts alike, install the whl package under `python/dist/` after compilation.
-无论是Client端,Server端还是App部分,编译完成后,安装`python/dist/`下的whl包即可。
+## Note
-## 注意事项
+When running the Python server, the `SERVING_BIN` environment variable is checked. If you want to use your own compiled binary file, set this environment variable to the path of the corresponding binary file, usually `export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`.
-运行python端Server时,会检查`SERVING_BIN`环境变量,如果想使用自己编译的二进制文件,请将设置该环境变量为对应二进制文件的路径,通常是`export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`。
-## CMake选项说明
+## CMake Option Description
-| 编译选项 | 说明 | 默认 |
+| Compile Options | Description | Default |
| :--------------: | :----------------------------------------: | :--: |
| WITH_AVX | Compile Paddle Serving with AVX intrinsics | OFF |
| WITH_MKL | Compile Paddle Serving with MKL support | OFF |
@@ -87,35 +90,36 @@ make
| WITH_ELASTIC_CTR | Compile ELASITC-CTR solution | OFF |
| PACK | Compile for whl | OFF |
-### WITH_GPU选项
+### WITH_GPU Option
-Paddle Serving通过PaddlePaddle预测库支持在GPU上做预测。WITH_GPU选项用于检测系统上CUDA/CUDNN等基础库,如检测到合适版本,在编译PaddlePaddle时就会编译出GPU版本的OP Kernel。
+Paddle Serving supports prediction on the GPU through the PaddlePaddle inference library. The WITH_GPU option is used to detect basic libraries such as CUDA/CUDNN on the system. If an appropriate version is detected, the GPU version of the OP kernels will be compiled when PaddlePaddle is compiled.
-在裸机上编译Paddle Serving GPU版本,需要安装这些基础库:
+To compile the Paddle Serving GPU version on bare metal, you need to install these basic libraries:
- CUDA
- CuDNN
- NCCL2
-这里要注意的是:
+Note here:
+
+1. The versions of basic libraries such as CUDA/CUDNN installed on the system where Serving is compiled need to be compatible with the actual GPU device. For example, the Tesla V100 card requires at least CUDA 9.0. If the versions of basic libraries such as CUDA used during compilation are too low, the generated GPU code will not be compatible with the actual hardware device, which will cause the Serving process to fail to start or lead to serious problems such as a coredump.
+2. On the system running Paddle Serving, install a CUDA driver compatible with the actual GPU device, and install basic libraries compatible with the CUDA/CuDNN versions used during compilation. If the CUDA/CuDNN versions installed on the system running Paddle Serving are lower than the versions used at compile time, it may cause strange problems such as failures of cuda function calls.
-1. 编译Serving所在的系统上所安装的CUDA/CUDNN等基础库版本,需要兼容实际的GPU设备。例如,Tesla V100卡至少要CUDA 9.0。如果编译时所用CUDA等基础库版本过低,由于生成的GPU代码和实际硬件设备不兼容,会导致Serving进程无法启动,或出现coredump等严重问题。
-2. 运行Paddle Serving的系统上安装与实际GPU设备兼容的CUDA driver,并安装与编译期所用的CUDA/CuDNN等版本兼容的基础库。如运行Paddle Serving的系统上安装的CUDA/CuDNN的版本低于编译时所用版本,可能会导致奇怪的cuda函数调用失败等问题。
-以下是PaddlePaddle发布版本所使用的基础库版本匹配关系,供参考:
+The following is the base library version matching relationship used by the PaddlePaddle release version for reference:
| | CUDA | CuDNN | NCCL2 |
| :----: | :-----: | :----------------------: | :----: |
| CUDA 8 | 8.0.61 | CuDNN 7.1.2 for CUDA 8.0 | 2.1.4 |
| CUDA 9 | 9.0.176 | CuDNN 7.3.1 for CUDA 9.0 | 2.2.12 |
-### 如何让Paddle Serving编译系统探测到CuDNN库
+### How to make the Paddle Serving compilation system detect the CuDNN library
-从NVIDIA developer官网下载对应版本CuDNN并在本地解压后,在cmake编译命令中增加`-DCUDNN_ROOT`参数,指定CuDNN库所在路径。
+Download the corresponding version of CuDNN from the NVIDIA developer official website and decompress it locally, then add the `-DCUDNN_ROOT` parameter to the cmake command to specify the path of the CuDNN library.
-### 如何让Paddle Serving编译系统探测到nccl库
+### How to make the Paddle Serving compilation system detect the nccl library
-从NVIDIA developer官网下载对应版本nccl2库并解压后,增加如下环境变量 (以nccl2.1.4为例):
+After downloading the corresponding version of the nccl2 library from the NVIDIA developer official website and decompressing it, add the following environment variables (take nccl2.1.4 as an example):
```shell
export C_INCLUDE_PATH=/path/to/nccl2/cuda8/nccl_2.1.4-1+cuda8.0_x86_64/include:$C_INCLUDE_PATH
diff --git a/doc/COMPILE_CN.md b/doc/COMPILE_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..bbe509f7c09e9e9082f1e7a2bfa6b823af7c2cc0
--- /dev/null
+++ b/doc/COMPILE_CN.md
@@ -0,0 +1,126 @@
+# 如何编译PaddleServing
+
+(简体中文|[English](./COMPILE.md))
+
+## 编译环境设置
+
+- os: CentOS 6u3
+- gcc: 4.8.2及以上
+- go: 1.9.2及以上
+- git:2.17.1及以上
+- cmake:3.2.2及以上
+- python:2.7.2及以上
+
+推荐使用Docker准备Paddle Serving编译环境:[CPU Dockerfile.devel](../tools/Dockerfile.devel),[GPU Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
+
+## 获取代码
+
+``` python
+git clone https://github.com/PaddlePaddle/Serving
+cd Serving && git submodule update --init --recursive
+```
+
+## PYTHONROOT设置
+
+```shell
+# 例如python的路径为/usr/bin/python,可以设置PYTHONROOT
+export PYTHONROOT=/usr/
+```
+
+## 编译Server部分
+
+### 集成CPU版本Paddle Inference Library
+
+``` shell
+mkdir build && cd build
+cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DSERVER=ON ..
+make -j10
+```
+
+可以执行`make install`把目标产出放在`./output`目录下,cmake阶段需添加`-DCMAKE_INSTALL_PREFIX=./output`选项来指定存放路径。
+
+### 集成GPU版本Paddle Inference Library
+
+``` shell
+mkdir build && cd build
+cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DSERVER=ON -DWITH_GPU=ON ..
+make -j10
+```
+
+执行`make install`可以把目标产出放在`./output`目录下。
+
+## 编译Client部分
+
+``` shell
+mkdir build && cd build
+cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DCLIENT=ON ..
+make -j10
+```
+
+执行`make install`可以把目标产出放在`./output`目录下。
+
+## 编译App部分
+
+```bash
+mkdir build && cd build
+cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PYTHONROOT/lib/libpython2.7.so -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python -DCMAKE_INSTALL_PREFIX=./output -DAPP=ON ..
+make
+```
+
+## 安装wheel包
+
+无论是Client端,Server端还是App部分,编译完成后,安装`python/dist/`下的whl包即可。
+
+## 注意事项
+
+运行python端Server时,会检查`SERVING_BIN`环境变量,如果想使用自己编译的二进制文件,请将设置该环境变量为对应二进制文件的路径,通常是`export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`。
+
+## CMake选项说明
+
+| 编译选项 | 说明 | 默认 |
+| :--------------: | :----------------------------------------: | :--: |
+| WITH_AVX | Compile Paddle Serving with AVX intrinsics | OFF |
+| WITH_MKL | Compile Paddle Serving with MKL support | OFF |
+| WITH_GPU | Compile Paddle Serving with NVIDIA GPU | OFF |
+| CUDNN_ROOT | Define CuDNN library and header path | |
+| CLIENT | Compile Paddle Serving Client | OFF |
+| SERVER | Compile Paddle Serving Server | OFF |
+| APP | Compile Paddle Serving App package | OFF |
+| WITH_ELASTIC_CTR | Compile ELASITC-CTR solution | OFF |
+| PACK | Compile for whl | OFF |
+
+### WITH_GPU选项
+
+Paddle Serving通过PaddlePaddle预测库支持在GPU上做预测。WITH_GPU选项用于检测系统上CUDA/CUDNN等基础库,如检测到合适版本,在编译PaddlePaddle时就会编译出GPU版本的OP Kernel。
+
+在裸机上编译Paddle Serving GPU版本,需要安装这些基础库:
+
+- CUDA
+- CuDNN
+- NCCL2
+
+这里要注意的是:
+
+1. 编译Serving所在的系统上所安装的CUDA/CUDNN等基础库版本,需要兼容实际的GPU设备。例如,Tesla V100卡至少要CUDA 9.0。如果编译时所用CUDA等基础库版本过低,由于生成的GPU代码和实际硬件设备不兼容,会导致Serving进程无法启动,或出现coredump等严重问题。
+2. 运行Paddle Serving的系统上安装与实际GPU设备兼容的CUDA driver,并安装与编译期所用的CUDA/CuDNN等版本兼容的基础库。如运行Paddle Serving的系统上安装的CUDA/CuDNN的版本低于编译时所用版本,可能会导致奇怪的cuda函数调用失败等问题。
+
+以下是PaddlePaddle发布版本所使用的基础库版本匹配关系,供参考:
+
+| | CUDA | CuDNN | NCCL2 |
+| :----: | :-----: | :----------------------: | :----: |
+| CUDA 8 | 8.0.61 | CuDNN 7.1.2 for CUDA 8.0 | 2.1.4 |
+| CUDA 9 | 9.0.176 | CuDNN 7.3.1 for CUDA 9.0 | 2.2.12 |
+
+### 如何让Paddle Serving编译系统探测到CuDNN库
+
+从NVIDIA developer官网下载对应版本CuDNN并在本地解压后,在cmake编译命令中增加`-DCUDNN_ROOT`参数,指定CuDNN库所在路径。
+
+### 如何让Paddle Serving编译系统探测到nccl库
+
+从NVIDIA developer官网下载对应版本nccl2库并解压后,增加如下环境变量 (以nccl2.1.4为例):
+
+```shell
+export C_INCLUDE_PATH=/path/to/nccl2/cuda8/nccl_2.1.4-1+cuda8.0_x86_64/include:$C_INCLUDE_PATH
+export CPLUS_INCLUDE_PATH=/path/to/nccl2/cuda8/nccl_2.1.4-1+cuda8.0_x86_64/include:$CPLUS_INCLUDE_PATH
+export LD_LIBRARY_PATH=/path/to/nccl2/cuda8/nccl_2.1.4-1+cuda8.0_x86_64/lib/:$LD_LIBRARY_PATH
+```
diff --git a/doc/DESIGN_DOC_EN.md b/doc/DESIGN_DOC_EN.md
deleted file mode 100644
index 2f8a36ea6686b5add2a7e4e407eabfd14167490d..0000000000000000000000000000000000000000
--- a/doc/DESIGN_DOC_EN.md
+++ /dev/null
@@ -1,227 +0,0 @@
-# Paddle Serving Design Doc
-
-## 1. Design Objectives
-
-- Long Term Vision: Online deployment of deep learning models will be a user-facing application in the future. Any AI developer will face the problem of deploying an online service for his or her trained model.
-Paddle Serving is the official open source online deployment framework. The long term goal of Paddle Serving is to provide professional, reliable and easy-to-use online service to the last mile of AI application.
-
-- Easy-To-Use: For algorithmic developers to quickly deploy their models online, Paddle Serving designs APIs that can be used with Paddle's training process seamlessly, most Paddle models can be deployed as a service with one line command.
-
-- Industrial Oriented: To meet industrial deployment requirements, Paddle Serving supports lots of large-scale deployment functions: 1) Distributed Sparse Embedding Indexing. 2) Highly concurrent underlying communications. 3) Model Management, online A/B test, model online loading.
-
-- Extensibility: Paddle Serving supports C++, Python and Golang client, and will support more clients with different languages. It is very easy to extend Paddle Serving to support other machine learning inference library, although currently Paddle inference library is the only official supported inference backend.
-
-
-## 2. Module design and implementation
-
-### 2.1 Python API interface design
-
-#### 2.1.1 save a servable model
-The inference phase of Paddle model focuses on 1) input variables of the model. 2) output variables of the model. 3) model structure and model parameters. Paddle Serving Python API provides a `save_model` interface for trained model, and save necessary information for Paddle Serving to use during deployment phase. An example is as follows:
-
-``` python
-import paddle_serving_client.io as serving_io
-serving_io.save_model("serving_model", "client_conf",
- {"words": data}, {"prediction": prediction},
- fluid.default_main_program())
-```
-In the example, `{"words": data}` and `{"prediction": prediction}` assign the inputs and outputs of a model. `"words"` and `"prediction"` are alias names of inputs and outputs. The design of alias name is to help developers to memorize model inputs and model outputs. `data` and `prediction` are Paddle `[Variable](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Variable_cn.html#variable)` in training phase that often represents ([Tensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/fluid_cn/Tensor_cn.html#tensor)) or ([LodTensor](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/basic_concept/lod_tensor.html#lodtensor)). When the `save_model` API is called, two directories called `"serving_model"` and `"client_conf"` will be generated. The content of the saved model is as follows:
-
-``` shell
-.
-├── client_conf
-│ ├── serving_client_conf.prototxt
-│ └── serving_client_conf.stream.prototxt
-└── serving_model
- ├── embedding_0.w_0
- ├── fc_0.b_0
- ├── fc_0.w_0
- ├── fc_1.b_0
- ├── fc_1.w_0
- ├── fc_2.b_0
- ├── fc_2.w_0
- ├── lstm_0.b_0
- ├── lstm_0.w_0
- ├── __model__
- ├── serving_server_conf.prototxt
- └── serving_server_conf.stream.prototxt
-```
-`"serving_client_conf.prototxt"` and `"serving_server_conf.prototxt"` are the client side and the server side configurations of Paddle Serving, and `"serving_client_conf.stream.prototxt"` and `"serving_server_conf.stream.prototxt"` are the corresponding parts. Other contents saved in the directory are the same as Paddle saved inference model. We are considering to support `save_model` interface in Paddle training framework so that a user is not aware of the servable configurations.
-
-#### 2.1.2 Model loading on the server side
-
-Prediction logics on the server side can be defined through Paddle Serving Server API with a few lines of code, an example is as follows:
-``` python
-import paddle_serving_server as serving
-op_maker = serving.OpMaker()
-read_op = op_maker.create('general_reader')
-dist_kv_op = op_maker.create('general_dist_kv')
-general_infer_op = op_maker.create('general_infer')
-general_response_op = op_maker.create('general_response')
-
-op_seq_maker = serving.OpSeqMaker()
-op_seq_maker.add_op(read_op)
-op_seq_maker.add_op(dist_kv_op)
-op_seq_maker.add_op(general_infer_op)
-op_seq_maker.add_op(general_response_op)
-```
-Current Paddle Serving supports operator list on the server side as follows:
-
-
-
-| Op Name | Description |
-|--------------|------|
-| `general_reader` | General Data Reading Operator |
-| `genreal_infer` | General Data Inference with Paddle Operator |
-| `general_response` | General Data Response Operator |
-| `general_dist_kv` | Distributed Sparse Embedding Indexing |
-
-
-
-Paddle Serving supports inference engine on multiple devices. Current supports are CPU and GPU engine. Docker Images of CPU and GPU are provided officially. User can use one line command to start an inference service either on CPU or on GPU.
-
-``` shell
-python -m paddle_serving_server.serve --model your_servable_model --thread 10 --port 9292
-```
-``` shell
-python -m paddle_serving_server_gpu.serve --model your_servable_model --thread 10 --port 9292
-```
-
-Options of startup command are listed below:
-
-
-| Arguments | Types | Defaults | Descriptions |
-|--------------|------|-----------|--------------------------------|
-| `thread` | int | `4` | Concurrency on server side, usually equal to the number of CPU core |
-| `port` | int | `9292` | Port exposed to users |
-| `name` | str | `""` | Service name that if a user specifies, the name of HTTP service is allocated |
-| `model` | str | `""` | Servable models for Paddle Serving |
-| `gpu_ids` | str | `""` | Supported only in paddle_serving_server_gpu, similar to the usage of CUDA_VISIBLE_DEVICES |
-
-
-
-For example, `python -m paddle_serving_server.serve --model your_servable_model --thread 10 --port 9292` is the same as the following code as user can define:
-``` python
-from paddle_serving_server import OpMaker, OpSeqMaker, Server
-
-op_maker = OpMaker()
-read_op = op_maker.create('general_reader')
-general_infer_op = op_maker.create('general_infer')
-general_response_op = op_maker.create('general_response')
-op_seq_maker = OpSeqMaker()
-op_seq_maker.add_op(read_op)
-op_seq_maker.add_op(general_infer_op)
-op_seq_maker.add_op(general_response_op)
-server = Server()
-server.set_op_sequence(op_seq_maker.get_op_sequence())
-server.set_num_threads(10)
-server.load_model_config(”your_servable_model“)
-server.prepare_server(port=9292, device="cpu")
-server.run_server()
-```
-
-#### 2.1.3 Paddle Serving Client API
-Paddle Serving supports remote service access through RPC(remote procedure call) and HTTP. RPC access of remote service can be called through Client API of Paddle Serving. A user can define data preprocess function before calling Paddle Serving's client API. The example below explains how to define the input data of Paddle Serving Client. The servable model has two inputs with alias name of `sparse` and `dense`. `sparse` corresponds to sparse sequence ids such as `[1, 1001, 100001]` and `dense` corresponds to dense vector such as `[0.2, 0.5, 0.1, 0.4, 0.11, 0.22]`. For sparse sequence data, current design supports `lod_level=0` and `lod_level=1` of Paddle, that corresponds to `Tensor` and `LodTensor`. For dense vector, current design supports any `N-D Tensor`. Users do not need to assign the shape of inference model input. The Paddle Serving Client API will check the input data's shape with servable configurations.
-
-``` python
-feed_dict["sparse"] = [1, 1001, 100001]
-feed_dict["dense"] = [0.2, 0.5, 0.1, 0.4, 0.11, 0.22]
-fetch_map = client.predict(feed=feed_dict, fetch=["prob"])
-```
-
-The following code sample shows that Paddle Serving Client API connects to Server API with endpoint of the servers. To use the data parallelism ability during prediction, Paddle Serving Client allows users to define multiple server endpoints.
-``` python
-client = Client()
-client.load_client_config('servable_client_configs')
-client.connect(["127.0.0.1:9292"])
-```
-
-### 2.2 Underlying Communication Mechanism
-Paddle Serving adopts [baidu-rpc](https://github.com/apache/incubator-brpc) as underlying communication layer. baidu-rpc is an open-source RPC communication library with high concurrency and low latency advantages compared with other open source RPC library. Millions of instances and thousands of services are using baidu-rpc within Baidu.
-
-### 2.3 Core Execution Engine
-The core execution engine of Paddle Serving is a Directed acyclic graph(DAG). In the DAG, each node represents a phase of inference service, such as paddle inference prediction, data preprocessing and data postprocessing. DAG can fully parallelize the computation efficiency and can fully utilize the computation resources. For example, when a user has input data that needs to be feed into two models, and combine the scores of the two models, the computation of model scoring is parallelized through DAG.
-
-
-
-
-
-
-
-### 2.4 Micro service plugin
-The underlying communication of Paddle Serving is implemented with C++ as well as the core framework, it is hard for users who do not familiar with C++ to implement new Paddle Serving Server Operators. Another approach is to use the light-weighted Web Service in Paddle Serving Server that can be viewed as a plugin. A user can implement complex data preprocessing and postprocessing logics to build a complex AI service. If access of the AI service has a large volumn, it is worth to implement the service with high performance Paddle Serving Server operators. The relationship between Web Service and RPC Service can be referenced in `User Type`.
-
-## 3. Industrial Features
-
-### 3.1 Distributed Sparse Parameter Indexing
-
-Distributed Sparse Parameter Indexing is commonly seen in advertising and recommendation scenarios, and is often used coupled with distributed training. The figure below explains a commonly seen architecture for online recommendation. When the recommendation service receives a request from a user, the system will automatically collects training log for the offline distributed online training. Mean while, the request is sent to Paddle Serving Server. For sparse features, distributed sparse parameter index service is called so that sparse parameters can be looked up. The dense input features together with the looked up sparse model parameters are fed into the Paddle Inference Node of the DAG in Paddle Serving Server. Then the score can be responsed through RPC to product service for item ranking.
-
-
-
-
-
-
-
-Why do we need to support distributed sparse parameter indexing in Paddle Serving? 1) In some recommendation scenarios, the number of features can be up to hundreds of billions that a single node can not hold the parameters within random access memory. 2) Paddle Serving supports distributed sparse parameter indexing that can couple with paddle inference. Users do not need to do extra work to have a low latency inference engine with hundreds of billions of parameters.
-
-### 3.2 Model Management, online A/B test, Model Online Reloading
-
-Paddle Serving's C++ engine supports model management, online A/B test and model online reloading. Currently, python API is not released yet, please wait for the next release.
-
-## 4. User Types
-Paddle Serving provides RPC and HTTP protocol for users. For HTTP service, we recommend users with median or small traffic services to use, and the latency is not a strict requirement. For RPC protocol, we recommend high traffic services and low latency required services to use. For users who use distributed sparse parameter indexing built-in service, it is not necessary to care about the underlying details of communication. The following figure gives out several scenarios that user may want to use Paddle Serving.
-
-
-
-
-
-
-
-For servable models saved from Paddle Serving IO API, users do not need to do extra coding work to startup a service, but may need some coding work on the client side. For development of Web Service plugin, a user needs to provide implementation of Web Service's preprocessing and postprocessing work if needed to get a HTTP service.
-
-### 4.1 Web Service Development
-
-Web Service has lots of open sourced framework. Currently Paddle Serving uses Flask as built-in service framework, and users are not aware of this. More efficient web service will be integrated in the furture if needed.
-
-``` python
-from paddle_serving_server.web_service import WebService
-from imdb_reader import IMDBDataset
-import sys
-
-
-class IMDBService(WebService):
- def prepare_dict(self, args={}):
- if len(args) == 0:
- exit(-1)
- self.dataset = IMDBDataset()
- self.dataset.load_resource(args["dict_file_path"])
-
- def preprocess(self, feed={}, fetch=[]):
- if "words" not in feed:
- exit(-1)
- res_feed = {}
- res_feed["words"] = self.dataset.get_words_only(feed["words"])[0]
- return res_feed, fetch
-
-
-imdb_service = IMDBService(name="imdb")
-imdb_service.load_model_config(sys.argv[1])
-imdb_service.prepare_server(
- workdir=sys.argv[2], port=int(sys.argv[3]), device="cpu")
-imdb_service.prepare_dict({"dict_file_path": sys.argv[4]})
-imdb_service.run_server()
-```
-
-`WebService` is a Base Class, providing inheritable interfaces such `preprocess` and `postprocess` for users to implement. In the inherited class of `WebService` class, users can define any functions they want and the startup function interface is the same as RPC service.
-
-## 5. Future Plan
-
-### 5.1 Open DAG definition API
-Current version of Paddle Serving Server supports sequential type of execution flow. DAG definition API can be more helpful to users on complex tasks.
-
-### 5.2 Auto Deployment on Cloud
-In order to make deployment more easily on public cloud, Paddle Serving considers to provides Operators on Kubernetes in submitting a service job.
-
-### 5.3 Vector Indexing and Tree based Indexing
-In recommendation and advertisement systems, it is commonly seen to use vector based index or tree based indexing service to do candidate retrievals. These retrieval tasks will be built-in services of Paddle Serving.
diff --git a/doc/DOCKER.md b/doc/DOCKER.md
index 325ec906c04c708d8e62ff2ae2900bc367e049b6..0e865c66e2da32a4e0ed15df9f2632b98ffbcedf 100644
--- a/doc/DOCKER.md
+++ b/doc/DOCKER.md
@@ -1,53 +1,55 @@
-# Docker编译环境准备
+# Docker compilation environment preparation
-## 环境要求
+([简体中文](./DOCKER_CN.md)|English)
-+ 开发机上已安装Docker。
-+ 编译GPU版本需要安装nvidia-docker。
+## Environment requirements
-## Dockerfile文件
++ Docker is installed on the development machine.
++ Compiling the GPU version requires nvidia-docker.
-[CPU版本Dockerfile](../Dockerfile)
+## Dockerfile
-[GPU版本Dockerfile](../Dockerfile.gpu)
+[CPU Version Dockerfile](../tools/Dockerfile)
-## 使用方法
+[GPU Version Dockerfile](../tools/Dockerfile.gpu)
-### 构建Docker镜像
+## Instructions
-建立新目录,复制Dockerfile内容到该目录下Dockerfile文件。
+### Building Docker Image
-执行
+Create a new directory and copy the Dockerfile to this directory.
+
+Run
```bash
docker build -t serving_compile:cpu .
```
-或者
+Or
```bash
docker build -t serving_compile:cuda9 .
```
-## 进入Docker
+## Enter Docker Container
-CPU版本请执行
+For the CPU version, please run
```bash
docker run -it serving_compile:cpu bash
```
-GPU版本请执行
+For the GPU version, please run
```bash
docker run -it --runtime=nvidia -it serving_compile:cuda9 bash
```
-## Docker编译出的可执行文件支持的环境列表
+## List of environments supported by the executables compiled in Docker
-经过验证的环境列表如下:
+The list of verified environments is as follows:
-| CPU Docker编译出的可执行文件支持的系统环境 |
+| System Environment Supported by CPU Docker Compiled Executables |
| -------------------------- |
| Centos6 |
| Centos7 |
@@ -56,7 +58,7 @@ docker run -it --runtime=nvidia -it serving_compile:cuda9 bash
-| GPU Docker编译出的可执行文件支持的系统环境 |
+| System Environment Supported by GPU Docker Compiled Executables |
| ---------------------------------- |
| Centos6_cuda9_cudnn7 |
| Centos7_cuda9_cudnn7 |
@@ -65,6 +67,6 @@ docker run -it --runtime=nvidia -it serving_compile:cuda9 bash
-**备注:**
-+ 若执行预编译版本出现找不到libcrypto.so.10、libssl.so.10的情况,可以将Docker环境中的/usr/lib64/libssl.so.10与/usr/lib64/libcrypto.so.10复制到可执行文件所在目录。
-+ CPU预编译版本仅可在CPU机器上执行,GPU预编译版本仅可在GPU机器上执行。
+**Remarks:**
++ If you cannot find libcrypto.so.10 and libssl.so.10 when executing the pre-compiled version, you can copy /usr/lib64/libssl.so.10 and /usr/lib64/libcrypto.so.10 from the Docker environment to the directory where the executable is located.
++ CPU pre-compiled version can only be executed on CPU machines, GPU pre-compiled version can only be executed on GPU machines.
diff --git a/doc/DOCKER_CN.md b/doc/DOCKER_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..92cc3ac6ea34d6399d6204ff7b9ec2d12b412601
--- /dev/null
+++ b/doc/DOCKER_CN.md
@@ -0,0 +1,72 @@
+# Docker编译环境准备
+
+(简体中文|[English](./DOCKER.md))
+
+## 环境要求
+
++ 开发机上已安装Docker。
++ 编译GPU版本需要安装nvidia-docker。
+
+## Dockerfile文件
+
+[CPU版本Dockerfile](../tools/Dockerfile)
+
+[GPU版本Dockerfile](../tools/Dockerfile.gpu)
+
+## 使用方法
+
+### 构建Docker镜像
+
+建立新目录,复制Dockerfile内容到该目录下Dockerfile文件。
+
+执行
+
+```bash
+docker build -t serving_compile:cpu .
+```
+
+或者
+
+```bash
+docker build -t serving_compile:cuda9 .
+```
+
+## 进入Docker
+
+CPU版本请执行
+
+```bash
+docker run -it serving_compile:cpu bash
+```
+
+GPU版本请执行
+
+```bash
+docker run -it --runtime=nvidia -it serving_compile:cuda9 bash
+```
+
+## Docker编译出的可执行文件支持的环境列表
+
+经过验证的环境列表如下:
+
+| CPU Docker编译出的可执行文件支持的系统环境 |
+| -------------------------- |
+| Centos6 |
+| Centos7 |
+| Ubuntu16.04 |
+| Ubuntu18.04 |
+
+
+
+| GPU Docker编译出的可执行文件支持的系统环境 |
+| ---------------------------------- |
+| Centos6_cuda9_cudnn7 |
+| Centos7_cuda9_cudnn7 |
+| Ubuntu16.04_cuda9_cudnn7 |
+| Ubuntu16.04_cuda10_cudnn7 |
+
+
+
+**备注:**
++ 若执行预编译版本出现找不到libcrypto.so.10、libssl.so.10的情况,可以将Docker环境中的/usr/lib64/libssl.so.10与/usr/lib64/libcrypto.so.10复制到可执行文件所在目录。
++ CPU预编译版本仅可在CPU机器上执行,GPU预编译版本仅可在GPU机器上执行。
diff --git a/doc/IMDB_GO_CLIENT.md b/doc/IMDB_GO_CLIENT.md
index bf21ff64a4f4c4fbb5c1994affe51dd918d14123..a9f610cfb154548ffe6f89820c1f61b114303351 100644
--- a/doc/IMDB_GO_CLIENT.md
+++ b/doc/IMDB_GO_CLIENT.md
@@ -1,5 +1,7 @@
# How to use Go Client of Paddle Serving
+([简体中文](./IMDB_GO_CLIENT_CN.md)|English)
+
This document shows how to use Go as your client language. For Go client in Paddle Serving, a simple client package is provided https://github.com/PaddlePaddle/Serving/tree/develop/go/serving_client, a user can import this package as needed. Here is a simple example of sentiment analysis task based on IMDB dataset.
### Install
@@ -15,7 +17,7 @@ pip install paddle-serving-server
### Download Text Classification Model
``` shell
-wget https://paddle-serving.bj.bcebos.com/data%2Ftext_classification%2Fimdb_serving_example.tar.gz
+wget https://paddle-serving.bj.bcebos.com/data/text_classification/imdb_serving_example.tar.gz
tar -xzf imdb_serving_example.tar.gz
```
diff --git a/doc/NEW_OPERATOR.md b/doc/NEW_OPERATOR.md
index f839be94aaa2ae9993d935c0af69bcde33b9d66f..ab1ff42adea44eec26e84bd4356bc4313d420ce2 100644
--- a/doc/NEW_OPERATOR.md
+++ b/doc/NEW_OPERATOR.md
@@ -1,5 +1,7 @@
# How to write an general operator?
+([简体中文](./NEW_OPERATOR_CN.md)|English)
+
In this document, we mainly focus on how to develop a new server side operator for PaddleServing. Before we start to write a new operator, let's look at some sample code to get the basic idea of writing a new operator for server. We assume you have known the basic computation logic on server side of PaddleServing, please reference to []() if you do not know much about it. The following code can be visited at `core/general-server/op` of Serving repo.
``` c++
diff --git a/doc/NEW_OPERATOR_CN.md b/doc/NEW_OPERATOR_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..d659b5f328cfbfc48ec7f3016037b12f34139b73
--- /dev/null
+++ b/doc/NEW_OPERATOR_CN.md
@@ -0,0 +1,149 @@
+# 如何开发一个新的General Op?
+
+(简体中文|[English](./NEW_OPERATOR.md))
+
+在本文档中,我们主要集中于如何为Paddle Serving开发新的服务器端运算符。 在开始编写新运算符之前,让我们看一些示例代码以获得为服务器编写新运算符的基本思想。 我们假设您已经知道Paddle Serving服务器端的基本计算逻辑。 下面的代码您可以在 Serving代码库下的 `core/general-server/op` 目录查阅。
+
+
+``` c++
+// Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+#include <string>
+#include <vector>
+#ifdef BCLOUD
+#ifdef WITH_GPU
+#include "paddle/paddle_inference_api.h"
+#else
+#include "paddle/fluid/inference/api/paddle_inference_api.h"
+#endif
+#else
+#include "paddle_inference_api.h" // NOLINT
+#endif
+#include "core/general-server/general_model_service.pb.h"
+#include "core/general-server/op/general_infer_helper.h"
+
+namespace baidu {
+namespace paddle_serving {
+namespace serving {
+
+class GeneralInferOp
+    : public baidu::paddle_serving::predictor::OpWithChannel<GeneralBlob> {
+ public:
+  typedef std::vector<paddle::PaddleTensor> TensorVector;
+
+ DECLARE_OP(GeneralInferOp);
+
+ int inference();
+
+};
+
+} // namespace serving
+} // namespace paddle_serving
+} // namespace baidu
+```
+
+## 定义一个Op
+
+上面的头文件声明了一个名为`GeneralInferOp`的PaddleServing运算符。 在运行时,将调用函数 `int inference()`。 通常,我们将服务器端运算符定义为baidu::paddle_serving::predictor::OpWithChannel的子类,并使用 `GeneralBlob` 数据结构。
+
+## 在Op之间使用 `GeneralBlob`
+
+`GeneralBlob` 是一种可以在服务器端运算符之间使用的数据结构。 `tensor_vector`是`GeneralBlob`中最重要的数据结构。 服务器端的操作员可以将多个`paddle::PaddleTensor`作为输入,并可以将多个`paddle::PaddleTensor`作为输出。 特别是,`tensor_vector`可以在没有内存拷贝的操作下输入到Paddle推理引擎中。
+
+``` c++
+struct GeneralBlob {
+  std::vector<paddle::PaddleTensor> tensor_vector;
+ int64_t time_stamp[20];
+ int p_size = 0;
+
+ int _batch_size;
+
+ void Clear() {
+ size_t tensor_count = tensor_vector.size();
+ for (size_t ti = 0; ti < tensor_count; ++ti) {
+ tensor_vector[ti].shape.clear();
+ }
+ tensor_vector.clear();
+ }
+
+ int SetBatchSize(int batch_size) { _batch_size = batch_size; }
+
+ int GetBatchSize() const { return _batch_size; }
+ std::string ShortDebugString() const { return "Not implemented!"; }
+};
+```
+
+### 实现 `int Inference()`
+
+``` c++
+int GeneralInferOp::inference() {
+ VLOG(2) << "Going to run inference";
+  const GeneralBlob *input_blob = get_depend_argument<GeneralBlob>(pre_name());
+ VLOG(2) << "Get precedent op name: " << pre_name();
+  GeneralBlob *output_blob = mutable_data<GeneralBlob>();
+
+ if (!input_blob) {
+ LOG(ERROR) << "Failed mutable depended argument, op:" << pre_name();
+ return -1;
+ }
+
+ const TensorVector *in = &input_blob->tensor_vector;
+ TensorVector *out = &output_blob->tensor_vector;
+ int batch_size = input_blob->GetBatchSize();
+ VLOG(2) << "input batch size: " << batch_size;
+
+ output_blob->SetBatchSize(batch_size);
+
+ VLOG(2) << "infer batch size: " << batch_size;
+
+ Timer timeline;
+ int64_t start = timeline.TimeStampUS();
+ timeline.Start();
+
+ if (InferManager::instance().infer(GENERAL_MODEL_NAME, in, out, batch_size)) {
+ LOG(ERROR) << "Failed do infer in fluid model: " << GENERAL_MODEL_NAME;
+ return -1;
+ }
+
+ int64_t end = timeline.TimeStampUS();
+ CopyBlobInfo(input_blob, output_blob);
+ AddBlobInfo(output_blob, start);
+ AddBlobInfo(output_blob, end);
+ return 0;
+}
+DEFINE_OP(GeneralInferOp);
+```
+
+`input_blob` 和 `output_blob` 都有很多的 `paddle::PaddleTensor`, 且Paddle预测库会被 `InferManager::instance().infer(GENERAL_MODEL_NAME, in, out, batch_size)`调用。此函数中的其他大多数代码都与性能分析有关,将来我们也可能会删除多余的代码。
+
+
+基本上,以上代码可以实现一个新的运算符。如果您想访问字典资源,可以参考`core/predictor/framework/resource.cpp`来添加全局可见资源。资源的初始化在启动服务器的运行时执行。
+
+## 定义 Python API
+
+在服务器端为Paddle Serving定义C++运算符后,最后一步是在Python API中为Paddle Serving服务器API添加注册, `python/paddle_serving_server/__init__.py`文件里有关于API注册的代码如下
+
+``` python
+self.op_dict = {
+ "general_infer": "GeneralInferOp",
+ "general_reader": "GeneralReaderOp",
+ "general_response": "GeneralResponseOp",
+ "general_text_reader": "GeneralTextReaderOp",
+ "general_text_response": "GeneralTextResponseOp",
+ "general_single_kv": "GeneralSingleKVOp",
+ "general_dist_kv": "GeneralDistKVOp"
+ }
+```
diff --git a/doc/SAVE.md b/doc/SAVE.md
index 59464a4e7c1931291d4a21b8d9d802a07dd22ec6..c1e6b19a45c75a64207802984f52c734d44f8fc8 100644
--- a/doc/SAVE.md
+++ b/doc/SAVE.md
@@ -1,4 +1,7 @@
## How to save a servable model of Paddle Serving?
+
+([简体中文](./SAVE_CN.md)|English)
+
- Currently, paddle serving provides a save_model interface for users to access, the interface is similar with `save_inference_model` of Paddle.
``` python
import paddle_serving_client.io as serving_io
diff --git a/doc/SAVE_CN.md b/doc/SAVE_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..0e2ecd5b71b860e887027564940e9e64522e097f
--- /dev/null
+++ b/doc/SAVE_CN.md
@@ -0,0 +1,31 @@
+## 怎样保存用于Paddle Serving的模型?
+
+(简体中文|[English](./SAVE.md))
+
+- 目前,Paddle服务提供了一个save_model接口供用户访问,该接口与Paddle的`save_inference_model`类似。
+
+``` python
+import paddle_serving_client.io as serving_io
+serving_io.save_model("imdb_model", "imdb_client_conf",
+ {"words": data}, {"prediction": prediction},
+ fluid.default_main_program())
+```
+imdb_model是具有服务配置的服务器端模型,imdb_client_conf是客户端rpc配置。Serving有一个提供给用户存放Feed和Fetch变量信息的字典。在示例中,`{"words": data}` 是用于指定已保存推理模型输入的feed字典,`{"prediction": prediction}` 是指定已保存推理模型输出的fetch字典。可以为feed和fetch变量定义别名,使用别名的示例如下:
+
+``` python
+from paddle_serving_client import Client
+import sys
+
+client = Client()
+client.load_client_config(sys.argv[1])
+client.connect(["127.0.0.1:9393"])
+
+for line in sys.stdin:
+ group = line.strip().split()
+ words = [int(x) for x in group[1:int(group[0]) + 1]]
+ label = [int(group[-1])]
+ feed = {"words": words, "label": label}
+ fetch = ["acc", "cost", "prediction"]
+ fetch_map = client.predict(feed=feed, fetch=fetch)
+ print("{} {}".format(fetch_map["prediction"][1], label[0]))
+```
diff --git a/doc/SERVER_DAG.md b/doc/SERVER_DAG.md
index 836444ea86ee03c56e32404fe9baadd4bf283443..5a5c851efacc28e5419d262ca671c83ec61e2015 100644
--- a/doc/SERVER_DAG.md
+++ b/doc/SERVER_DAG.md
@@ -1,5 +1,7 @@
# Computation Graph On Server
+([简体中文](./SERVER_DAG_CN.md)|English)
+
This document shows the concept of computation graph on server. How to define computation graph with PaddleServing built-in operators. Examples for some sequential execution logics are shown as well.
## Computation Graph on Server
diff --git a/doc/SERVER_DAG_CN.md b/doc/SERVER_DAG_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..3bf42ef8e3fbcb8c509a69bfe6aea12f78dc4567
--- /dev/null
+++ b/doc/SERVER_DAG_CN.md
@@ -0,0 +1,58 @@
+# Server端的计算图
+
+(简体中文|[English](./SERVER_DAG.md))
+
+本文档显示了Server端上计算图的概念。 如何使用PaddleServing内置运算符定义计算图。 还显示了一些顺序执行逻辑的示例。
+
+## Server端的计算图
+
+深度神经网络通常在输入数据上有一些预处理步骤,而在模型推断分数上有一些后处理步骤。 由于深度学习框架现在非常灵活,因此可以在训练计算图之外进行预处理和后处理。 如果要在服务器端进行输入数据预处理和推理结果后处理,则必须在服务器上添加相应的计算逻辑。 此外,如果用户想在多个模型上使用相同的输入进行推理,则最好的方法是在仅提供一个客户端请求的情况下在服务器端同时进行推理,这样我们可以节省一些网络计算开销。 由于以上两个原因,自然而然地将有向无环图(DAG)视为服务器推理的主要计算方法。 DAG的一个示例如下:
+
+
+
+
+
+## 如何定义节点
+
+PaddleServing在框架中具有一些预定义的计算节点。 一种非常常用的计算图是简单的reader-infer-response模式,可以涵盖大多数单一模型推理方案。 示例图和相应的DAG定义代码如下。
+
+
+
+
+``` python
+import paddle_serving_server as serving
+op_maker = serving.OpMaker()
+read_op = op_maker.create('general_reader')
+general_infer_op = op_maker.create('general_infer')
+general_response_op = op_maker.create('general_response')
+
+op_seq_maker = serving.OpSeqMaker()
+op_seq_maker.add_op(read_op)
+op_seq_maker.add_op(general_infer_op)
+op_seq_maker.add_op(general_response_op)
+```
+
+由于该代码在大多数情况下都会被使用,并且用户不必更改代码,因此PaddleServing会发布一个易于使用的启动命令来启动服务。 示例如下:
+
+``` shell
+python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292
+```
+
+## 更多示例
+
+如果用户将稀疏特征作为输入,并且模型将对每个特征进行嵌入查找,则我们可以进行分布式嵌入查找操作,该操作不在Paddle训练计算图中。 示例如下:
+
+``` python
+import paddle_serving_server as serving
+op_maker = serving.OpMaker()
+read_op = op_maker.create('general_reader')
+dist_kv_op = op_maker.create('general_dist_kv')
+general_infer_op = op_maker.create('general_infer')
+general_response_op = op_maker.create('general_response')
+
+op_seq_maker = serving.OpSeqMaker()
+op_seq_maker.add_op(read_op)
+op_seq_maker.add_op(dist_kv_op)
+op_seq_maker.add_op(general_infer_op)
+op_seq_maker.add_op(general_response_op)
+```
diff --git a/doc/TRAIN_TO_SERVICE_CN.md b/doc/TRAIN_TO_SERVICE_CN.md
index ad2a43c30b1cd0d4701ebb3c8b3a46a4b07c1bda..14a3ee9ce12a19f0274357e525efb27f758b3c97 100644
--- a/doc/TRAIN_TO_SERVICE_CN.md
+++ b/doc/TRAIN_TO_SERVICE_CN.md
@@ -6,10 +6,12 @@ Paddle Serving是Paddle的高性能在线预测服务框架,可以灵活支持
## Step1:准备环境
+
Paddle Serving可以部署在Linux环境上,目前server端支持在Centos7上部署,推荐使用[Docker部署](RUN_IN_DOCKER_CN.md)。rpc client端可以在Centos7和Ubuntu18上部署,在其他系统上或者不希望安装serving模块的环境中仍然可以通过http服务来访问server端的预测服务。
可以根据需求和机器环境来选择安装cpu或gpu版本的server模块,在client端机器上安装client模块。使用http请求的方式来访问server时,client端机器不需要安装client模块。
+
```shell
pip install paddle_serving_server #cpu版本server端
pip install paddle_serving_server_gpu #gpu版本server端
diff --git a/doc/CLIENT_CONFIGURE.md b/doc/deprecated/CLIENT_CONFIGURE.md
similarity index 100%
rename from doc/CLIENT_CONFIGURE.md
rename to doc/deprecated/CLIENT_CONFIGURE.md
diff --git a/doc/CLUSTERING.md b/doc/deprecated/CLUSTERING.md
similarity index 100%
rename from doc/CLUSTERING.md
rename to doc/deprecated/CLUSTERING.md
diff --git a/doc/CREATING.md b/doc/deprecated/CREATING.md
similarity index 100%
rename from doc/CREATING.md
rename to doc/deprecated/CREATING.md
diff --git a/doc/CTR_PREDICTION.md b/doc/deprecated/CTR_PREDICTION.md
similarity index 100%
rename from doc/CTR_PREDICTION.md
rename to doc/deprecated/CTR_PREDICTION.md
diff --git a/doc/FAQ.md b/doc/deprecated/FAQ.md
similarity index 100%
rename from doc/FAQ.md
rename to doc/deprecated/FAQ.md
diff --git a/doc/GETTING_STARTED.md b/doc/deprecated/GETTING_STARTED.md
similarity index 100%
rename from doc/GETTING_STARTED.md
rename to doc/deprecated/GETTING_STARTED.md
diff --git a/doc/HTTP_INTERFACE.md b/doc/deprecated/HTTP_INTERFACE.md
similarity index 100%
rename from doc/HTTP_INTERFACE.md
rename to doc/deprecated/HTTP_INTERFACE.md
diff --git a/doc/INDEX.md b/doc/deprecated/INDEX.md
similarity index 100%
rename from doc/INDEX.md
rename to doc/deprecated/INDEX.md
diff --git a/doc/MULTI_SERVING_OVER_SINGLE_GPU_CARD.md b/doc/deprecated/MULTI_SERVING_OVER_SINGLE_GPU_CARD.md
similarity index 100%
rename from doc/MULTI_SERVING_OVER_SINGLE_GPU_CARD.md
rename to doc/deprecated/MULTI_SERVING_OVER_SINGLE_GPU_CARD.md
diff --git a/doc/SERVING_CONFIGURE.md b/doc/deprecated/SERVING_CONFIGURE.md
similarity index 100%
rename from doc/SERVING_CONFIGURE.md
rename to doc/deprecated/SERVING_CONFIGURE.md
diff --git a/doc/doc_test_list b/doc/doc_test_list
new file mode 100644
index 0000000000000000000000000000000000000000..ef019de05d6075801434bae91de8cbdceb1fea91
--- /dev/null
+++ b/doc/doc_test_list
@@ -0,0 +1 @@
+BERT_10_MINS.md
diff --git a/tools/doc_test.sh b/tools/doc_test.sh
new file mode 100644
index 0000000000000000000000000000000000000000..c76656bc691fbd4c191f234c9cec13f6bb272527
--- /dev/null
+++ b/tools/doc_test.sh
@@ -0,0 +1,10 @@
+#!/usr/bin/env bash
+
+
+function main() {
+ cat Serving/doc/doc_test_list | xargs python Serving/tools/doc_tester_reader.py Serving/doc/
+
+}
+
+
+main "$@"
diff --git a/tools/doc_tester_reader.py b/tools/doc_tester_reader.py
new file mode 100644
index 0000000000000000000000000000000000000000..b981e9f2e8d98f31743174c4121fda4baa9a1d63
--- /dev/null
+++ b/tools/doc_tester_reader.py
@@ -0,0 +1,70 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import re
+import sys
+
+
+def ReadMarkDown(file):
+ folder = 'test'
+ os.system('rm -rf ' + folder + ' && mkdir -p ' + folder)
+ with open(file, 'r') as f:
+ lines = f.readlines()
+ for i, line in enumerate(lines):
+ if '[//file]:#' in line:
+ filename = line[10:].strip()
+ GetCodeFile(lines, i, os.path.join(folder, filename))
+ if '' in lines[i]:
+ break
+ code += lines[i]
+ i += 1
+ with open(filename, 'w+') as f:
+ f.write(code)
+
+
+def RunTest():
+ folder = 'test'
+ os.system('cd ' + folder + ' && sh start.sh')
+ os.system('cd .. && rm -rf ' + folder)
+
+
+if __name__ == '__main__':
+ ReadMarkDown(os.path.join(sys.argv[1], sys.argv[2]))
+ RunTest()