Unverified commit 2fdc0888 authored by T TeslaZhao, committed by GitHub

Merge pull request #1239 from TeslaZhao/v0.6.0

cherry-pick #1238
...@@ -726,6 +726,42 @@ There are two kinds of IDs in the pipeline for concatenating requests, `data_id`
The log printed by the Pipeline framework will carry both data_id and log_id. After auto-batching is turned on, the first `data_id` in the batch will be used to mark the whole batch, and the framework will print all data_ids in the batch in a single log line.
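As a rough illustration of the batching behavior described above, a batch log prefix might be built as follows (a minimal sketch with hypothetical names; this is not the framework's actual code):

```python
# Sketch: mark a batch of requests with the first data_id, but list all of them,
# mirroring the auto-batching log behavior described above. Names are illustrative.
def batch_log_prefix(batch):
    """batch: list of (data_id, log_id) tuples collected by auto-batching."""
    if not batch:
        return "(data_id=?, log_id=?)"
    first_data_id, first_log_id = batch[0]  # the whole batch is marked by the first data_id
    all_data_ids = ",".join(str(d) for d, _ in batch)
    return "(data_id={}, log_id={}) batch data_ids=[{}]".format(
        first_data_id, first_log_id, all_data_ids)

print(batch_log_prefix([(101, "a"), (102, "b"), (103, "c")]))
```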
### 5.2 Log Rotating
The log module of Pipeline Serving is defined in `logger.py`. `logging.handlers.RotatingFileHandler` is used to rotate disk log files. `maxBytes` and `backupCount` are set according to the file level and expected log volume: when a file is about to exceed the predetermined size, the old file is closed and a new file is opened for output.
```python
"handlers": {
"f_pipeline.log": {
"class": "logging.handlers.RotatingFileHandler",
"level": "INFO",
"formatter": "normal_fmt",
"filename": os.path.join(log_dir, "pipeline.log"),
"maxBytes": 512000000,
"backupCount": 20,
},
"f_pipeline.log.wf": {
"class": "logging.handlers.RotatingFileHandler",
"level": "WARNING",
"formatter": "normal_fmt",
"filename": os.path.join(log_dir, "pipeline.log.wf"),
"maxBytes": 512000000,
"backupCount": 10,
},
"f_tracer.log": {
"class": "logging.handlers.RotatingFileHandler",
"level": "INFO",
"formatter": "tracer_fmt",
"filename": os.path.join(log_dir, "pipeline.tracer"),
"maxBytes": 512000000,
"backupCount": 5,
},
},
```
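A handler dictionary of this shape is consumed by Python's standard `logging.config.dictConfig`. The following self-contained sketch demonstrates the same rotation mechanism with a tiny `maxBytes` so rotation is easy to observe (the logger name, file names, and sizes here are illustrative, not Pipeline Serving's):

```python
import logging
import logging.config
import os
import tempfile

log_dir = tempfile.mkdtemp()
logging.config.dictConfig({
    "version": 1,
    "formatters": {"normal_fmt": {"format": "%(levelname)s %(message)s"}},
    "handlers": {
        "f_demo.log": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "INFO",
            "formatter": "normal_fmt",
            "filename": os.path.join(log_dir, "demo.log"),
            "maxBytes": 200,   # tiny size so rotation triggers quickly
            "backupCount": 2,  # keep at most demo.log.1 and demo.log.2
        },
    },
    "loggers": {"demo": {"level": "INFO", "handlers": ["f_demo.log"]}},
})

logger = logging.getLogger("demo")
for i in range(20):
    logger.info("message %03d, padded to exceed maxBytes quickly", i)

# After enough output, the current file is closed and renamed demo.log.1,
# older backups shift to demo.log.2, and anything beyond backupCount is dropped.
print(sorted(os.listdir(log_dir)))
```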
***
## 6. Performance analysis and optimization
......
...@@ -705,9 +705,9 @@ Pipeline Serving supports low-precision inference; the precision types supported by CPU, GPU, and TensorRT
## 5. Log Tracing
Pipeline service logs are stored in the PipelineServingLogs directory under the current directory. There are three types of logs: pipeline.log, pipeline.log.wf, and pipeline.tracer.
- `pipeline.log` : records debug & info messages
- `pipeline.log.wf` : records warning & error messages
- `pipeline.tracer` : records per-stage latency and channel backlog statistics
When the service encounters an exception, the error information is recorded in pipeline.log.wf. Printing tracer logs requires adding the tracer configuration to the DAG attribute in config.yml.
...@@ -718,6 +718,39 @@ There are two kinds of IDs in the Pipeline for concatenating requests, data_id and log_id; the difference between them
Typically, logs printed by the Pipeline framework carry both data_id and log_id. After auto-batching is turned on, the first data_id in a batch is used to mark the whole batch, and the framework prints all data_ids of the batch in a single log line.
### 5.2 Log Rotating
The log module of the Pipeline is defined in `logger.py` and uses `logging.handlers.RotatingFileHandler` to rotate disk log files. `maxBytes` and `backupCount` are set according to the file level and expected log volume: when a file is about to exceed the predetermined size, the old file is closed and a new file is opened for output.
```python
"handlers": {
"f_pipeline.log": {
"class": "logging.handlers.RotatingFileHandler",
"level": "INFO",
"formatter": "normal_fmt",
"filename": os.path.join(log_dir, "pipeline.log"),
"maxBytes": 512000000,
"backupCount": 20,
},
"f_pipeline.log.wf": {
"class": "logging.handlers.RotatingFileHandler",
"level": "WARNING",
"formatter": "normal_fmt",
"filename": os.path.join(log_dir, "pipeline.log.wf"),
"maxBytes": 512000000,
"backupCount": 10,
},
"f_tracer.log": {
"class": "logging.handlers.RotatingFileHandler",
"level": "INFO",
"formatter": "tracer_fmt",
"filename": os.path.join(log_dir, "pipeline.tracer"),
"maxBytes": 512000000,
"backupCount": 5,
},
},
```
***
## 6. Performance analysis and optimization
......
# Imagenet Pipeline WebService
This document takes the Imagenet service as an example to introduce how to use Pipeline WebService.
## Get model
```
sh get_model.sh
```
## Start server
```
python3 web_service.py &>log.txt &
```
## RPC test
```
python3 pipeline_rpc_client.py
```
# Imagenet Pipeline WebService
This document takes the Imagenet service as an example to introduce how to use Pipeline WebService.
## Get model
```
sh get_model.sh
```
## Start server
```
python3 web_service.py &>log.txt &
```
## Test
```
python3 pipeline_rpc_client.py
```
...@@ -10,10 +10,10 @@ sh get_model.sh
## Start server
```
python3 resnet50_web_service.py &>log.txt &
```
## RPC test
```
python3 pipeline_rpc_client.py
```
...@@ -10,11 +10,10 @@ sh get_model.sh
## Start server
```
python3 resnet50_web_service.py &>log.txt &
```
## Test
```
python3 pipeline_rpc_client.py
```
...@@ -8,12 +8,12 @@ sh get_data.sh
## Start servers
```
python3 -m paddle_serving_server.serve --model imdb_cnn_model --port 9292 &> cnn.log &
python3 -m paddle_serving_server.serve --model imdb_bow_model --port 9393 &> bow.log &
python3 test_pipeline_server.py &>pipeline.log &
```
## Start clients
```
python3 test_pipeline_client.py
```
...@@ -8,12 +8,12 @@ sh get_data.sh
## Start servers
```
python3 -m paddle_serving_server.serve --model imdb_cnn_model --port 9292 &> cnn.log &
python3 -m paddle_serving_server.serve --model imdb_bow_model --port 9393 &> bow.log &
python3 test_pipeline_server.py &>pipeline.log &
```
## Start clients
```
python3 test_pipeline_client.py
```
...@@ -4,11 +4,13 @@
This document will take OCR as an example to show how to use Pipeline WebService to start multi-model tandem services.
This OCR example only supports Process OP.
## Get Model
```
python3 -m paddle_serving_app.package --get_model ocr_rec
tar -xzvf ocr_rec.tar.gz
python3 -m paddle_serving_app.package --get_model ocr_det
tar -xzvf ocr_det.tar.gz
```
...@@ -18,14 +20,16 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/ocr/test_imgs.t
tar xf test_imgs.tar
```
## Run services
### 1. Start a single server and client
```
python3 web_service.py &>log.txt &
```
Test
```
python3 pipeline_http_client.py
```
<!--
...@@ -35,11 +39,22 @@ python pipeline_http_client.py
### RPC
```
python3 pipeline_rpc_client.py
```
### HTTP
```
python3 pipeline_http_client.py
```
-->
### 2. Run benchmark
```
python3 web_service.py &>log.txt &
```
Test
```
sh benchmark.sh
```
...@@ -3,12 +3,13 @@
([English](./README.md)|Simplified Chinese)
This document takes OCR as an example to show how to use Pipeline WebService to start a multi-model pipeline service.
This example only supports the process OP mode.
## Get Model
```
python3 -m paddle_serving_app.package --get_model ocr_rec
tar -xzvf ocr_rec.tar.gz
python3 -m paddle_serving_app.package --get_model ocr_det
tar -xzvf ocr_det.tar.gz
```
...@@ -19,13 +20,15 @@ tar xf test_imgs.tar
```
## Start the WebService
### 1. Start a single server and client
```
python3 web_service.py &>log.txt &
```
## Test
```
python3 pipeline_http_client.py
```
<!--
...@@ -36,12 +39,22 @@ python pipeline_http_client.py
### RPC
```
python3 pipeline_rpc_client.py
```
### HTTP
```
python3 pipeline_http_client.py
```
-->
### 2. Run benchmark
```
python3 web_service.py &>log.txt &
```
Test
```
sh benchmark.sh
```
...@@ -42,22 +42,28 @@ logger_config = {
},
"handlers": {
"f_pipeline.log": {
"class": "logging.handlers.RotatingFileHandler",
"level": "INFO",
"formatter": "normal_fmt",
"filename": os.path.join(log_dir, "pipeline.log"),
"maxBytes": 512000000,
"backupCount": 20,
},
"f_pipeline.log.wf": {
"class": "logging.handlers.RotatingFileHandler",
"level": "WARNING",
"formatter": "normal_fmt",
"filename": os.path.join(log_dir, "pipeline.log.wf"),
"maxBytes": 512000000,
"backupCount": 10,
},
"f_tracer.log": {
"class": "logging.handlers.RotatingFileHandler",
"level": "INFO",
"formatter": "tracer_fmt",
"filename": os.path.join(log_dir, "pipeline.tracer"),
"maxBytes": 512000000,
"backupCount": 5,
},
},
"loggers": {
......