diff --git a/doc/PIPELINE_SERVING.md b/doc/PIPELINE_SERVING.md
index 7bfd2f51e2243583080d79d6d5ead249163dd629..aeda8d04f21f0cbc6fdf6c0fec9c3b163e57435d 100644
--- a/doc/PIPELINE_SERVING.md
+++ b/doc/PIPELINE_SERVING.md
@@ -726,6 +726,42 @@ There are two kinds of IDs in the pipeline for concatenating requests, `data_id`
 The log printed by the Pipeline framework will carry both data_id and log_id. After auto-batching is turned on, the first `data_id` in the batch will be used to mark the whole batch, and the framework will print all data_ids in the batch in a log.
+
+### 5.2 Log Rotation
+The logging module of Pipeline Serving is defined in `logger.py` and uses `logging.handlers.RotatingFileHandler` to rotate the log files on disk. `maxBytes` and `backupCount` are set per handler according to each file's log level and expected log volume: when a write would push a file past the predetermined size, the old file is closed and a new one is opened for output.
+
+
+```python
+"handlers": {
+    "f_pipeline.log": {
+        "class": "logging.handlers.RotatingFileHandler",
+        "level": "INFO",
+        "formatter": "normal_fmt",
+        "filename": os.path.join(log_dir, "pipeline.log"),
+        "maxBytes": 512000000,
+        "backupCount": 20,
+    },
+    "f_pipeline.log.wf": {
+        "class": "logging.handlers.RotatingFileHandler",
+        "level": "WARNING",
+        "formatter": "normal_fmt",
+        "filename": os.path.join(log_dir, "pipeline.log.wf"),
+        "maxBytes": 512000000,
+        "backupCount": 10,
+    },
+    "f_tracer.log": {
+        "class": "logging.handlers.RotatingFileHandler",
+        "level": "INFO",
+        "formatter": "tracer_fmt",
+        "filename": os.path.join(log_dir, "pipeline.tracer"),
+        "maxBytes": 512000000,
+        "backupCount": 5,
+    },
+},
+```
+
+***
+
 ## 6.Performance analysis and optimization
diff --git a/doc/PIPELINE_SERVING_CN.md b/doc/PIPELINE_SERVING_CN.md
index da7d7d2c9ebaf7049a23814af6b0c1ca4b7ce602..340b66a22a658efd50a8fc92a481b4febcc0c698 100644
--- a/doc/PIPELINE_SERVING_CN.md
+++ b/doc/PIPELINE_SERVING_CN.md
@@ -705,9 +705,9 @@ Pipeline Serving支持低精度推理,CPU、GPU和TensoRT支持的精度类型
 ## 5.日志追踪
 Pipeline服务日志在当前目录的PipelineServingLogs目录下,有3种类型日志,分别是pipeline.log日志、pipeline.log.wf日志、pipeline.tracer日志。
-- pipeline.log日志 : 记录 debug & info日志信息
-- pipeline.log.wf日志 : 记录 warning & error日志
-- pipeline.tracer日志 : 统计各个阶段耗时、channel堆积信息
+- `pipeline.log` : 记录 debug & info日志信息
+- `pipeline.log.wf` : 记录 warning & error日志
+- `pipeline.tracer` : 统计各个阶段耗时、channel堆积信息
 
 在服务发生异常时,错误信息会记录在pipeline.log.wf日志中。打印tracer日志要求在config.yml的DAG属性中添加tracer配置。
@@ -718,6 +718,39 @@ Pipeline中有2种id用以串联请求,分别时data_id和log_id,二者区
 通常,Pipeline框架打印的日志会同时带上data_id和log_id。开启auto-batching后,会使用批量中的第一个data_id标记batch整体,同时框架会在一条日志中打印批量中所有data_id。
+
+### 5.2 日志滚动
+Pipeline的日志模块在`logger.py`中定义,使用了`logging.handlers.RotatingFileHandler`支持磁盘日志文件的轮换。根据不同文件级别和日志量分别设置了`maxBytes`和`backupCount`,当即将超出预定大小时,将关闭旧文件并打开一个新文件用于输出。
+
+```python
+"handlers": {
+    "f_pipeline.log": {
+        "class": "logging.handlers.RotatingFileHandler",
+        "level": "INFO",
+        "formatter": "normal_fmt",
+        "filename": os.path.join(log_dir, "pipeline.log"),
+        "maxBytes": 512000000,
+        "backupCount": 20,
+    },
+    "f_pipeline.log.wf": {
+        "class": "logging.handlers.RotatingFileHandler",
+        "level": "WARNING",
+        "formatter": "normal_fmt",
+        "filename": os.path.join(log_dir, "pipeline.log.wf"),
+        "maxBytes": 512000000,
+        "backupCount": 10,
+    },
+    "f_tracer.log": {
+        "class": "logging.handlers.RotatingFileHandler",
+        "level": "INFO",
+        "formatter": "tracer_fmt",
+        "filename": os.path.join(log_dir, "pipeline.tracer"),
+        "maxBytes": 512000000,
+        "backupCount": 5,
+    },
+},
+```
+
+***
 
 ## 6.性能分析与优化
diff --git a/python/examples/pipeline/bert/README.md b/python/examples/pipeline/bert/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..6074aa2b80dbe96c69726b7b8049e28db853445a
--- /dev/null
+++ b/python/examples/pipeline/bert/README.md
@@ -0,0 +1,19 @@
+# Bert Pipeline WebService
+
+This document takes the Bert service as an example to introduce how to use Pipeline WebService.
+
+## Get model
+```
+sh get_model.sh
+```
+
+## Start server
+
+```
+python3 web_service.py &>log.txt &
+```
+
+## RPC test
+```
+python3 pipeline_rpc_client.py
+```
diff --git a/python/examples/pipeline/bert/README_CN.md b/python/examples/pipeline/bert/README_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..ace7b76fe717c8a0922bf41aa5615b3b5da945a1
--- /dev/null
+++ b/python/examples/pipeline/bert/README_CN.md
@@ -0,0 +1,19 @@
+# Bert Pipeline WebService
+
+这里以 Bert 服务为例来介绍 Pipeline WebService 的使用。
+
+## 获取模型
+```
+sh get_model.sh
+```
+
+## 启动服务
+
+```
+python3 web_service.py &>log.txt &
+```
+
+## 测试
+```
+python3 pipeline_rpc_client.py
+```
diff --git a/python/examples/pipeline/imagenet/README.md b/python/examples/pipeline/imagenet/README.md
index d0fa99e6d72f10d3d2b5907285528b68685128e0..6fbe0c4cf3a635670341d5aee4cee8bcbdc59a88 100644
--- a/python/examples/pipeline/imagenet/README.md
+++ b/python/examples/pipeline/imagenet/README.md
@@ -10,10 +10,10 @@ sh get_model.sh
 
 ## Start server
 ```
-python resnet50_web_service.py &>log.txt &
+python3 resnet50_web_service.py &>log.txt &
 ```
 
 ## RPC test
 ```
-python pipeline_rpc_client.py
+python3 pipeline_rpc_client.py
 ```
diff --git a/python/examples/pipeline/imagenet/README_CN.md b/python/examples/pipeline/imagenet/README_CN.md
index 335c96b2144b17e20d6007f376dec4416fb10aa5..c204c3c662825ed26001cf6d444d94f0bab508f7 100644
--- a/python/examples/pipeline/imagenet/README_CN.md
+++ b/python/examples/pipeline/imagenet/README_CN.md
@@ -10,11 +10,10 @@ sh get_model.sh
 
 ## 启动服务
 ```
-python resnet50_web_service.py &>log.txt &
+python3 resnet50_web_service.py &>log.txt &
 ```
 
 ## 测试
 ```
-python pipeline_rpc_client.py
+python3 pipeline_rpc_client.py
 ```
-
diff --git a/python/examples/pipeline/imdb_model_ensemble/README.md b/python/examples/pipeline/imdb_model_ensemble/README.md
index f90024f6af04b7d50fe90eb6e04b71dd703300e3..c72eab665619dd29cfe467cb36ad6b1ff31f2259 100644
--- a/python/examples/pipeline/imdb_model_ensemble/README.md
+++ b/python/examples/pipeline/imdb_model_ensemble/README.md
@@ -8,12 +8,12 @@ sh get_data.sh
 
 ## Start servers
 ```
-python -m paddle_serving_server.serve --model imdb_cnn_model --port 9292 &> cnn.log &
-python -m paddle_serving_server.serve --model imdb_bow_model --port 9393 &> bow.log &
-python test_pipeline_server.py &>pipeline.log &
+python3 -m paddle_serving_server.serve --model imdb_cnn_model --port 9292 &> cnn.log &
+python3 -m paddle_serving_server.serve --model imdb_bow_model --port 9393 &> bow.log &
+python3 test_pipeline_server.py &>pipeline.log &
 ```
 
 ## Start clients
 ```
-python test_pipeline_client.py
+python3 test_pipeline_client.py
 ```
diff --git a/python/examples/pipeline/imdb_model_ensemble/README_CN.md b/python/examples/pipeline/imdb_model_ensemble/README_CN.md
index fd4785292c3bfa731f76666b7d4e12e4e285fbda..79ed5c0c89bf702860c0e64cfc7bf9efdb2dd76b 100644
--- a/python/examples/pipeline/imdb_model_ensemble/README_CN.md
+++ b/python/examples/pipeline/imdb_model_ensemble/README_CN.md
@@ -8,12 +8,12 @@ sh get_data.sh
 
 ## 启动服务
 ```
-python -m paddle_serving_server.serve --model imdb_cnn_model --port 9292 &> cnn.log &
-python -m paddle_serving_server.serve --model imdb_bow_model --port 9393 &> bow.log &
-python test_pipeline_server.py &>pipeline.log &
+python3 -m paddle_serving_server.serve --model imdb_cnn_model --port 9292 &> cnn.log &
+python3 -m paddle_serving_server.serve --model imdb_bow_model --port 9393 &> bow.log &
+python3 test_pipeline_server.py &>pipeline.log &
 ```
 
 ## 启动客户端
 ```
-python test_pipeline_client.py
+python3 test_pipeline_client.py
 ```
diff --git a/python/examples/pipeline/ocr/README.md b/python/examples/pipeline/ocr/README.md
index de7bcaa2ece7f9fa7ba56de533e8e4dd023ad1f3..4c669b3cc19e6afb7a0c5178a7b251bd4e47ff31 100644
--- a/python/examples/pipeline/ocr/README.md
+++ b/python/examples/pipeline/ocr/README.md
@@ -4,11 +4,13 @@
 This document will take OCR as an example to show how to use Pipeline WebService to start multi-model tandem services.
 
+This OCR example only supports Process OP.
+
 ## Get Model
 ```
-python -m paddle_serving_app.package --get_model ocr_rec
+python3 -m paddle_serving_app.package --get_model ocr_rec
 tar -xzvf ocr_rec.tar.gz
-python -m paddle_serving_app.package --get_model ocr_det
+python3 -m paddle_serving_app.package --get_model ocr_det
 tar -xzvf ocr_det.tar.gz
 ```
@@ -18,14 +20,16 @@ wget --no-check-certificate https://paddle-serving.bj.bcebos.com/ocr/test_imgs.t
 tar xf test_imgs.tar
 ```
 
-## Start Service
+## Run services
+
+### 1.Start a single server and client
 ```
-python web_service.py &>log.txt &
+python3 web_service.py &>log.txt &
 ```
 
-## Test
+Test
 ```
-python pipeline_http_client.py
+python3 pipeline_http_client.py
 ```
+
+
+### 2.Run benchmark
+```
+python3 web_service.py &>log.txt &
+```
+
+Test
+```
+sh benchmark.sh
+```
diff --git a/python/examples/pipeline/ocr/README_CN.md b/python/examples/pipeline/ocr/README_CN.md
index c7058e026d45a3971a9064a5b34078c63fe5d5de..c6c2064a684c175b816887cf65e19e7a90f927fa 100644
--- a/python/examples/pipeline/ocr/README_CN.md
+++ b/python/examples/pipeline/ocr/README_CN.md
@@ -3,12 +3,13 @@
 ([English](./README.md)|简体中文)
 
 本文档将以 OCR 为例,介绍如何使用 Pipeline WebService 启动多模型串联的服务。
+本示例仅支持进程OP模式。
 
 ## 获取模型
 ```
-python -m paddle_serving_app.package --get_model ocr_rec
+python3 -m paddle_serving_app.package --get_model ocr_rec
 tar -xzvf ocr_rec.tar.gz
-python -m paddle_serving_app.package --get_model ocr_det
+python3 -m paddle_serving_app.package --get_model ocr_det
 tar -xzvf ocr_det.tar.gz
 ```
@@ -19,13 +20,15 @@ tar xf test_imgs.tar
 ```
 
 ## 启动 WebService
+
+### 1.启动单server、单client
 ```
-python web_service.py &>log.txt &
+python3 web_service.py &>log.txt &
 ```
 
 ## 测试
 ```
-python pipeline_http_client.py
+python3 pipeline_http_client.py
 ```
+
+### 2.启动 benchmark
+```
+python3 web_service.py &>log.txt &
+```
+
+测试
+```
+sh benchmark.sh
+```
diff --git a/python/pipeline/logger.py b/python/pipeline/logger.py
index b566c012d3ced8f4f1bddd9b1622abc4beb9c8a5..ebf31564bb362f11ab16c2a5656f50b38ffacd75 100644
--- a/python/pipeline/logger.py
+++ b/python/pipeline/logger.py
@@ -42,22 +42,28 @@ logger_config = {
     },
     "handlers": {
         "f_pipeline.log": {
-            "class": "logging.FileHandler",
+            "class": "logging.handlers.RotatingFileHandler",
             "level": "INFO",
             "formatter": "normal_fmt",
             "filename": os.path.join(log_dir, "pipeline.log"),
+            "maxBytes": 512000000,
+            "backupCount": 20,
         },
         "f_pipeline.log.wf": {
-            "class": "logging.FileHandler",
+            "class": "logging.handlers.RotatingFileHandler",
             "level": "WARNING",
             "formatter": "normal_fmt",
             "filename": os.path.join(log_dir, "pipeline.log.wf"),
+            "maxBytes": 512000000,
+            "backupCount": 10,
         },
         "f_tracer.log": {
-            "class": "logging.FileHandler",
+            "class": "logging.handlers.RotatingFileHandler",
             "level": "INFO",
             "formatter": "tracer_fmt",
             "filename": os.path.join(log_dir, "pipeline.tracer"),
+            "maxBytes": 512000000,
+            "backupCount": 5,
         },
     },
     "loggers": {
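The rotation behavior this patch configures can be exercised in isolation with a minimal sketch. This is not part of the patch: the logger name `demo`, handler name `f_demo.log`, and the tiny `maxBytes` threshold are illustrative only, chosen so rotation triggers quickly; the real config uses 512000000-byte files.

```python
# Minimal sketch of RotatingFileHandler via dictConfig, mirroring the
# structure of the patched logger_config. Once a write would exceed
# maxBytes, the current file is closed and renamed to <name>.1 (older
# backups shift up), and a fresh file is opened; backupCount caps how
# many rotated files are kept. All names here are hypothetical.
import logging
import logging.config
import os
import tempfile

log_dir = tempfile.mkdtemp()

logging.config.dictConfig({
    "version": 1,
    "formatters": {
        "normal_fmt": {"format": "%(levelname)s %(message)s"},
    },
    "handlers": {
        "f_demo.log": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "INFO",
            "formatter": "normal_fmt",
            "filename": os.path.join(log_dir, "demo.log"),
            "maxBytes": 200,   # tiny threshold so rotation is easy to observe
            "backupCount": 3,  # keep at most demo.log.1 .. demo.log.3
        },
    },
    "loggers": {
        "demo": {
            "handlers": ["f_demo.log"],
            "level": "INFO",
            "propagate": False,
        },
    },
})

logger = logging.getLogger("demo")
for i in range(50):
    logger.info("message %d", i)

# The live file plus up to backupCount rotated backups now exist.
print(sorted(os.listdir(log_dir)))
```

With these settings the oldest backup is deleted once the count is exceeded, so total disk usage per log stays bounded by roughly `maxBytes * (backupCount + 1)` — the same property the large production values above rely on.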