Commit 55fc1bda authored by M MRXLT

add English readme for app

Parent 09420b59
...@@ -32,3 +32,138 @@ The model files can be directly used for deployment, and the `--tutorial` argume
| ObjectDetection | 'faster_rcnn', 'yolov3' |
| ImageSegmentation | 'unet', 'deeplabv3' |
| ImageClassification | 'resnet_v2_50_imagenet', 'mobilenet_v2_imagenet' |
## Data preprocessing API
paddle_serving_app provides a variety of data preprocessing methods for prediction tasks in the fields of CV and NLP. Usage sketches for these classes follow the list below.
- class ChineseBertReader
  Preprocessing for Chinese semantic representation tasks.
  - `__init__(vocab_file, max_seq_len=20)`
    - vocab_file(str): Path of the dictionary file.
    - max_seq_len(int, optional): Length of each sample after processing. Longer inputs are truncated, and shorter ones are padded with 0. Default: 20.
  - `process(line)`
    - line(str): Text input.
  [example](../examples/bert/bert_client.py)
- class LACReader
  Preprocessing for the Chinese word segmentation task.
  - `__init__(dict_folder)`
    - dict_folder(str): Path of the dictionary folder.
  - `process(sent)`
    - sent(str): Text input.
  - `parse_result(words, crf_decode)`
    - words(str): Original text input.
    - crf_decode(np.array): CRF decoding predicted by the model.
  [example](../examples/lac/lac_web_service.py)
- class SentaReader
  Preprocessing for the sentiment analysis task.
  - `__init__(vocab_path)`
    - vocab_path(str): Path of the dictionary file.
  - `process(cols)`
    - cols(str): Word segmentation result.
  [example](../examples/senta/senta_web_service.py)
- The image preprocessing methods are more flexible than the methods above and can be composed by combining the following classes ([example](../examples/imagenet/image_rpc_client.py)); see also the sketch after this list.
- class Sequential
  - `__init__(transforms)`
    - transforms(list): List of image preprocessing class instances.
  - `__call__(img)`
    - img: Input of the image preprocessing. Its data type is determined by the first preprocessing method in transforms.
- class File2Image
  - `__call__(img_path)`
    - img_path(str): Path of the image file.
- class URL2Image
  - `__call__(img_url)`
    - img_url(str): URL of the image file.
- class Normalize
  - `__init__(mean, std)`
    - mean(float): Mean.
    - std(float): Standard deviation.
  - `__call__(img)`
    - img(np.array): Image data in (C, H, W) layout.
- class CenterCrop
  - `__init__(size)`
    - size(list/int): Expected size of the center crop. A list should contain the target height and width; an int produces a square crop of that size.
  - `__call__(img)`
    - img(np.array): Image data.
- class Resize
  - `__init__(size, max_size=2147483647, interpolation=None)`
    - size(list/int): Expected image size. A list should contain the expected height and width; an int sets the short side to size, and the long side is scaled proportionally.
  - `__call__(img)`
    - img(np.array): Image data.
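
A minimal usage sketch of the three NLP readers above. The dictionary paths and sample sentences are placeholder assumptions, and the import paths may differ across versions; the linked examples show the full feed/fetch flow.

```python
from paddle_serving_app import ChineseBertReader, LACReader, SentaReader

# ChineseBertReader: turn raw text into a feed for a BERT-style model.
bert_reader = ChineseBertReader(vocab_file="vocab.txt", max_seq_len=20)
bert_feed = bert_reader.process("送晚了,饿得吃得很香")

# LACReader: preprocess text for word segmentation; parse_result (not shown)
# recovers the segmented words from the model's CRF decoding.
lac_reader = LACReader("lac_dict")
lac_feed = lac_reader.process("我爱北京天安门")

# SentaReader: preprocess an already word-segmented sentence.
senta_reader = SentaReader(vocab_path="senta_vocab.txt")
senta_feed = senta_reader.process("我 爱 北京 天安门")
```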
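And a sketch composing the image classes, assuming a local file `daisy.jpg` and scalar mean/std values; since Normalize is described above as taking (C, H, W) data, the channel transpose is done manually here.

```python
from paddle_serving_app import Sequential, File2Image, Resize, CenterCrop, Normalize

# Read the file, resize the short side to 256, then crop the central 224x224 region.
preprocess = Sequential([File2Image(), Resize(256), CenterCrop(224)])
img = preprocess("daisy.jpg")  # assumed local image; result is an H x W x C np.array

# Normalize expects (C, H, W) data per the description above, so transpose
# and scale to [0, 1] before normalizing.
img = img.transpose((2, 0, 1)) / 255.0
img = Normalize(0.5, 0.5)(img)
```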
## Timeline tools
The Timeline tool can be used to visualize the start and end times of the various stages of the prediction service, such as data preparation, client waiting, and server-side op execution.
This makes it convenient to analyze the proportion of time each stage occupies, so that prediction services can be optimized in a targeted manner.
### How to use
1. Before making predictions on the client side, turn on the timeline function of each stage of the Paddle Serving framework via environment variables; timeline information will then be printed in the logs.
```shell
export FLAGS_profile_client=1 # Turn on timeline function of client
export FLAGS_profile_server=1 # Turn on timeline function of server
```
2. Perform predictions and redirect the client-side logs to a file, for example one named profile, as shown below.
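   A sketch of this step, assuming a hypothetical client script named `bert_client.py`:
   ```shell
   python bert_client.py > profile 2>&1
   ```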
3. Export the information in the log file into a trace file.
```shell
python -m paddle_serving_app.utils.log_to_trace --profile_file profile --trace_file trace
```
4. Open the `chrome://tracing/` URL in the Chrome browser.
Load the trace file generated in the previous step through the load button to visualize the time information of each stage of the prediction service.
The next figure shows the timeline of the GPU prediction service of the [bert example](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert).
The server starts the service with 4 GPU cards, the client starts 4 processes to send requests, and the batch size is 1.
In the figure, bert_pre represents the data preprocessing stage of the client, and client_infer represents the stage from the client sending the prediction request to receiving the result.
process represents the process number of the client, and the second line of each process shows the timeline of each server op.
![timeline](../../doc/timeline-example.png)
## Debug tools
The inference op of Paddle Serving is implemented based on the Paddle inference library.
Before deploying a prediction service, you may need to check its input and output or inspect resource consumption.
Therefore, a local prediction tool is built into paddle_serving_app; it is used in the same way as sending a request to the server through a client.
Taking the [fit_a_line prediction service](../examples/fit_a_line) as an example, the following code runs a local prediction.
```python
from paddle_serving_app import Debugger

# Load the saved model configuration; set gpu=True to predict on GPU.
debugger = Debugger()
debugger.load_model_config("./uci_housing_model", gpu=False)
# One sample of the 13 normalized features from the UCI housing dataset.
data = [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0727,
        -0.1583, -0.0584, 0.6283, 0.4919, 0.1856, 0.0795, -0.0332]
fetch_map = debugger.predict(feed={"x": data}, fetch=["price"])
```
...@@ -33,9 +33,11 @@ paddle_serving_app has 11 built-in pre-trained models, covering 6 kinds of prediction tasks
## Data preprocessing API
paddle_serving_app provides a variety of common data preprocessing methods for model tasks in the CV and NLP fields.
- class ChineseBertReader
  Preprocessing for Chinese semantic representation tasks.
- `__init__(vocab_file, max_seq_len=20)`
...@@ -58,7 +60,7 @@ paddle_serving_app provides a variety of common data preprocessing methods for model tasks in the CV and NLP fields
- words(str): Original text input.
- crf_decode(np.array): CRF decoding in the model prediction result.
[example](../examples/lac/lac_web_service.py)
- class SentaReader
...@@ -112,7 +114,7 @@ paddle_serving_app provides a variety of common data preprocessing methods for model tasks in the CV and NLP fields
## Timeline tools
The Timeline tool visualizes the start and end times of stages such as data preparation, client waiting, and server-side ops in the prediction service, making it convenient to analyze the proportion of time each stage occupies and to optimize the service accordingly.
### How to use
...@@ -133,13 +135,14 @@ paddle_serving_app provides a variety of common data preprocessing methods for model tasks in the CV and NLP fields
4. Open the `chrome://tracing/` URL in the Chrome browser and load the trace file generated in the previous step through the load button to visualize the time information of each stage of the prediction service.
The figure below shows the timeline of the GPU prediction service of the [bert example](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/bert), with the server predicting on 4 GPU cards, the client starting 4 processes, and a batch size of 1.
bert_pre represents the client-side data preprocessing stage, client_infer represents the stage from the client sending the prediction request to receiving the result, process is the client process number, and the second line of each process shows the timeline of each server op.
![timeline](../../doc/timeline-example.png)
## Debug tools
The server-side inference op of Paddle Serving uses the Paddle inference framework. Before deploying a prediction service, you may need to check its input and output or inspect resource usage, so a local prediction tool is built into paddle_serving_app; it is used in the same way as sending requests to the server through a client.
Taking the [fit_a_line prediction service](../examples/fit_a_line) as an example, the following code runs a local prediction.
......