diff --git a/README.md b/README.md
index 911b4a7c056f9f487f80f977a6ad3fe351828d04..4bc9328a55a43cac1e0dc8c299cb153cf5bbff79 100755
--- a/README.md
+++ b/README.md
@@ -56,8 +56,8 @@ This chapter guides you through the installation and deployment steps. It is str

- [Install Paddle Serving using docker](doc/Install_EN.md)
- [Build Paddle Serving from Source with Docker](doc/Compile_EN.md)
-- [Deploy Paddle Serving on Kubernetes](doc/Run_On_Kubernetes_EN.md)
-- [Deploy Paddle Serving with Security gateway](doc/Serving_Auth_Docker.md)
+- [Deploy Paddle Serving on Kubernetes](doc/Run_On_Kubernetes_CN.md)
+- [Deploy Paddle Serving with Security gateway](doc/Serving_Auth_Docker_CN.md)
- [Deploy Paddle Serving on more hardware](doc/Run_On_XPU_EN.md)
- [Latest Wheel packages](doc/Latest_Packages_CN.md) (updated every day on the develop branch)

@@ -68,10 +68,10 @@ The first step is to call the model save interface to generate a model parameter

- [Quick Start](doc/Quick_Start_EN.md)
- [Save a servable model](doc/Save_EN.md)
- [Description of configuration and startup parameters](doc/Serving_Configure_EN.md)
-- [Guide for RESTful/gRPC/bRPC APIs](doc/C++_Serving/Http_Service_EN.md)
+- [Guide for RESTful/gRPC/bRPC APIs](doc/C++_Serving/Introduction_CN.md)
- [Infer on quantized models](doc/Low_Precision_CN.md)
-- [Data format of classic models](doc/Process_Data_CN.md)
-- [C++ Serving](doc/C++_Serving/Introduction_EN.md)
+- [Data format of classic models](doc/Process_data_CN.md)
+- [C++ Serving](doc/C++_Serving/Introduction_CN.md)
- [protocols](doc/C++_Serving/Inference_Protocols_CN.md)
- [Hot loading models](doc/C++_Serving/Hot_Loading_EN.md)
- [A/B Test](doc/C++_Serving/ABTest_EN.md)

@@ -101,20 +101,20 @@ For Paddle Serving developers, we provide extended documents such as custom OP,

Paddle Serving works closely with the Paddle model suite and implements a large number of service deployment examples, including image classification, object detection, language and text recognition, Chinese part of speech, sentiment analysis, content recommendation and other types of examples, for a total of 42 models.

| PaddleOCR | PaddleDetection | PaddleClas | PaddleSeg | PaddleRec | Paddle NLP |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 8 | 12 | 13 | 2 | 3 | 4 |

For more model examples, read [Model zoo](doc/Model_Zoo_EN.md)

Community

@@ -122,10 +122,19 @@ For more model examples, read [Model zoo](doc/Model_Zoo_EN.md)

If you want to communicate with developers and other users, you are welcome to join the community through any of the methods below.

### Wechat
-- 微信用户请扫码
+- WeChat users, please scan the QR code to join the group
### QQ
-- 飞桨推理部署交流群(Group No.:696965088)
+- 飞桨推理部署交流群(Group No.:697765514)
### Slack

- [Slack channel](https://paddleserving.slack.com/archives/CUBPKHKMJ)

@@ -134,11 +143,12 @@ If you want to communicate with developers and other users, you are welcome to

> Contribution

If you want to contribute code to Paddle Serving, please refer to the [Contribution Guidelines](doc/Contribute_EN.md)
-
-- Special Thanks to [@BeyondYourself](https://github.com/BeyondYourself) in complementing the gRPC tutorial, updating the FAQ doc and modifying the mdkir command
-- Special Thanks to [@mcl-stone](https://github.com/mcl-stone) in updating faster_rcnn benchmark
-- Special Thanks to [@cg82616424](https://github.com/cg82616424) in updating the unet benchmark and modifying resize comment error
-- Special Thanks to [@cuicheng01](https://github.com/cuicheng01) for providing 11 PaddleClas models
+- Thanks to [@loveululu](https://github.com/loveululu) for providing the Cube Python API.
+- Thanks to [@EtachGu](https://github.com/EtachGu) for updating the docker run commands.
+- Thanks to [@BeyondYourself](https://github.com/BeyondYourself) for complementing the gRPC tutorial, updating the FAQ doc and fixing the mkdir command
+- Thanks to [@mcl-stone](https://github.com/mcl-stone) for updating the faster_rcnn benchmark
+- Thanks to [@cg82616424](https://github.com/cg82616424) for updating the unet benchmark and fixing the resize comment error
+- Thanks to [@cuicheng01](https://github.com/cuicheng01) for providing 11 PaddleClas models

> Feedback

diff --git a/README_CN.md b/README_CN.md
index f80e62436b1faea245580a3a7f7b244ef60f195f..011be5b9cea7a47f54e8ed2e289061c95d86b4c3 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -55,8 +55,8 @@ Paddle Serving依托深度学习框架PaddlePaddle旨在帮助深度学习开发

此章节引导您完成安装和部署步骤,强烈推荐使用Docker部署Paddle Serving,如您不使用docker,省略docker相关步骤。在云服务器上可以使用Kubernetes部署Paddle Serving。在异构硬件如ARM CPU、昆仑XPU上编译或使用Paddle Serving可以参考以下文档。每天编译生成develop分支的最新开发包供开发者使用。
- [使用docker安装Paddle Serving](doc/Install_CN.md)
- [源码编译安装Paddle Serving](doc/Compile_CN.md)
-- [在Kuberntes集群上部署Paddle Serving](doc/Run_On_Kubernetes.md)
-- [部署Paddle Serving安全网关](doc/Serving_Auth_Docker.md)
+- [在Kubernetes集群上部署Paddle Serving](doc/Run_On_Kubernetes_CN.md)
+- [部署Paddle Serving安全网关](doc/Serving_Auth_Docker_CN.md)
- [在异构硬件部署Paddle Serving](doc/Run_On_XPU_CN.md)
- [最新Wheel开发包](doc/Latest_Packages_CN.md)(develop分支每日更新)

@@ -64,9 +64,9 @@ Paddle Serving依托深度学习框架PaddlePaddle旨在帮助深度学习开发

安装Paddle Serving后,使用快速开始将引导您运行Serving。第一步,调用模型保存接口,生成模型参数配置文件(.prototxt)用以在客户端和服务端使用;第二步,阅读配置和启动参数并启动服务;第三步,根据API和您的使用场景,基于SDK编写客户端请求,并测试推理服务。您想了解更多特性的使用场景和方法,请详细阅读以下文档。

- [快速开始](doc/Quick_Start_CN.md)
-- [保存用于Paddle Serving的模型和配置](doc/SAVE_CN.md)
+- [保存用于Paddle Serving的模型和配置](doc/Save_CN.md)
- [配置和启动参数的说明](doc/Serving_Configure_CN.md)
-- [RESTful/gRPC/bRPC API指南](doc/C++_Serving/Http_Service_CN.md)
+- [RESTful/gRPC/bRPC API指南](doc/C++_Serving/Introduction_CN.md#4.Client端特性)
- [低精度推理](doc/Low_Precision_CN.md)
- [常见模型数据处理](doc/Process_data_CN.md)
- [C++ Serving简介](doc/C++_Serving/Introduction_CN.md)
@@ -95,21 +95,21 @@ Paddle Serving依托深度学习框架PaddlePaddle旨在帮助深度学习开发

模型库

Paddle Serving与Paddle模型套件紧密配合,实现大量服务化部署,包括图像分类、物体检测、语言文本识别、中文词性、情感分析、内容推荐等多种类型示例,以及Paddle全链条项目,共计42个模型。

| PaddleOCR | PaddleDetection | PaddleClas | PaddleSeg | PaddleRec | Paddle NLP |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 8 | 12 | 13 | 2 | 3 | 4 |

更多模型示例参考Repo,可进入[模型库](doc/Model_Zoo_CN.md)

社区

@@ -119,8 +119,16 @@ Paddle Serving与Paddle模型套件紧密配合,实现大量服务化部署,

### 微信
- 微信用户请扫码

### QQ
-- 飞桨推理部署交流群(群号:696965088)
+- 飞桨推理部署交流群(群号:697765514)
### Slack

- [Slack channel](https://paddleserving.slack.com/archives/CUBPKHKMJ)

@@ -128,12 +136,13 @@ Paddle Serving与Paddle模型套件紧密配合,实现大量服务化部署,

> 贡献代码

-如果您想为Paddle Serving贡献代码,请参考 [Contribution Guidelines](doc/Contribute.md)
-
-- 特别感谢 [@BeyondYourself](https://github.com/BeyondYourself) 提供grpc教程,更新FAQ教程,整理文件目录。
-- 特别感谢 [@mcl-stone](https://github.com/mcl-stone) 提供faster rcnn benchmark脚本
-- 特别感谢 [@cg82616424](https://github.com/cg82616424) 提供unet benchmark脚本和修改部分注释错误
-- 特别感谢 [@cuicheng01](https://github.com/cuicheng01) 提供PaddleClas的11个模型
+如果您想为Paddle Serving贡献代码,请参考 [Contribution Guidelines](doc/Contribute_EN.md)
+- 感谢 [@loveululu](https://github.com/loveululu) 提供 Cube python API
+- 感谢 [@EtachGu](https://github.com/EtachGu) 更新 docker 使用命令
+- 感谢 [@BeyondYourself](https://github.com/BeyondYourself) 提供grpc教程,更新FAQ教程,整理文件目录。
+- 感谢 [@mcl-stone](https://github.com/mcl-stone) 提供faster rcnn benchmark脚本
+- 感谢 [@cg82616424](https://github.com/cg82616424) 提供unet benchmark脚本和修改部分注释错误
+- 感谢 [@cuicheng01](https://github.com/cuicheng01) 提供PaddleClas的11个模型

> 反馈

diff --git a/core/preprocess/hwvideoframe/README.md b/core/preprocess/hwvideoframe/README.md
index 5d561c75fc6b891f9752a303630b3914dce7998b..0a7355a43ccef033c0fef993ba1cc50f5a4bd46a 100644
--- a/core/preprocess/hwvideoframe/README.md
+++ b/core/preprocess/hwvideoframe/README.md
@@ -54,7 +54,7 @@ Hwvideoframe provides a variety of data preprocessing methods for photo preproce

## Quick start

-[After compiling from code](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE.md),this project will be stored in reader。
+[After compiling from code](../../../doc/Compile_EN.md), this project will be stored in reader.

## How to Test

diff --git a/doc/Compile_CN.md b/doc/Compile_CN.md
index fca4627cc40f08227ce2628841dc6da9b3eddebd..aadf9fa5124c8876301c83fd043e686f78a1820c 100644
--- a/doc/Compile_CN.md
+++ b/doc/Compile_CN.md
@@ -23,7 +23,7 @@

| libSM | 1.2.2 |
| libXrender | 0.9.10 |

-推荐使用Docker编译,我们已经为您准备好了Paddle Serving编译环境并配置好了上述编译依赖,详见[该文档](DOCKER_IMAGES_CN.md)。
+推荐使用Docker编译,我们已经为您准备好了Paddle Serving编译环境并配置好了上述编译依赖,详见[该文档](Docker_Images_CN.md)。

## 获取代码

@@ -158,7 +158,7 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
make -j10
```

-**注意:** 编译成功后,需要设置`SERVING_BIN`路径,详见后面的[注意事项](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE_CN.md#注意事项)。
+**注意:** 编译成功后,需要设置`SERVING_BIN`路径,详见后面的[注意事项](#注意事项)。

## 编译Client部分

diff --git a/doc/Docker_Images_EN.md b/doc/Docker_Images_EN.md
index b7f2a83eaecf284e4249e55330b0dd74199d6cd8..a495856afae6ead575390f5ea83345ea6a21bb48 100644
--- a/doc/Docker_Images_EN.md
+++ b/doc/Docker_Images_EN.md
@@ -78,6 +78,7 @@ Running Images:

Running Images are lighter than Develop Images: they are made up of the serving whl and bin, without develop tools like cmake, to keep the image size down. If you want to know more about it, please check the document [Paddle Serving on Kubernetes](./Run_On_Kubernetes_CN.md).

+
| ENV | Python Version | Tag |
|------------------------------------------|----------------|-----------------------------|
| cpu | 3.6 | 0.6.2-py36-runtime |
diff --git a/doc/Latest_Packages_CN.md b/doc/Latest_Packages_CN.md
index fbeb00d7c2160afa8d52f55abf1de2d2f24a830e..022ae75ab824ed8462f876f8d57b9097720cc18d 100644
--- a/doc/Latest_Packages_CN.md
+++ b/doc/Latest_Packages_CN.md
@@ -41,7 +41,7 @@ https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.0.0-py3-n
```

## Baidu Kunlun user
-for kunlun user who uses arm-xpu or x86-xpu can download the wheel packages as follows. Users should use the xpu-beta docker [DOCKER IMAGES](./DOCKER_IMAGES.md)
+Kunlun users who use arm-xpu or x86-xpu can download the wheel packages as follows. Users should use the xpu-beta docker [DOCKER IMAGES](./Docker_Images_CN.md)
**We only support Python 3.6 for Kunlun Users.**

### Wheel Package Links

diff --git a/doc/Serving_Configure_CN.md b/doc/Serving_Configure_CN.md
index 1f1420ced3cdc500f3e41b9bfdfc0792d0d9fcef..9b8fd1c5a5bdad3ffc73bf6ac2de655ec97e3de2 100644
--- a/doc/Serving_Configure_CN.md
+++ b/doc/Serving_Configure_CN.md
@@ -13,6 +13,7 @@

## 模型配置文件

在开始介绍Server配置之前,先来介绍一下模型配置文件。我们在将模型转换为PaddleServing模型时,会生成对应的serving_client_conf.prototxt以及serving_server_conf.prototxt,两者内容一致,为模型输入输出的参数信息,方便用户拼装参数。该配置文件用于Server以及Client,并不需要用户自行修改。转换方法参考文档《[怎样保存用于Paddle Serving的模型](./Save_CN.md)》。protobuf格式可参考`core/configure/proto/general_model_config.proto`。

+
样例如下:

```
@@ -58,7 +59,7 @@ fetch_var {

## C++ Serving

-### 1.快速启动与关闭
+### 1.快速启动

可以通过配置模型及端口号快速启动服务,启动命令如下:

@@ -106,11 +107,6 @@ python3 -m paddle_serving_server.serve --model serving_model --thread 10 --port

```BASH
python3 -m paddle_serving_server.serve --model serving_model_1 serving_model_2 --thread 10 --port 9292
```
-#### 当您想要关闭Serving服务时.
-```BASH
-python3 -m paddle_serving_server.serve stop
-```
-stop参数发送SIGINT至C++ Serving,若改成kill则发送SIGKILL信号至C++ Serving

### 2.自定义配置启动

@@ -317,20 +313,6 @@ fetch_var {

## Python Pipeline

-### 快速启动与关闭
-Python Pipeline启动命令如下:
-
-```BASH
-python3 web_service.py
-```
-
-当您想要关闭Serving服务时.
-```BASH
-python3 -m paddle_serving_server.serve stop
-```
-stop参数发送SIGINT至Pipeline Serving,若改成kill则发送SIGKILL信号至Pipeline Serving
-
-### 配置文件
Python Pipeline提供了用户友好的多模型组合服务编程框架,适用于多模型组合应用的场景。
其配置文件为YAML格式,一般默认为config.yaml。示例如下:
```YAML
@@ -472,4 +454,4 @@ Python Pipeline支持低精度推理,CPU、GPU和TensoRT支持的精度类型

#GPU 支持: "fp32"(default), "fp16(TensorRT)", "int8";
#CPU 支持: "fp32"(default), "fp16", "bf16"(mkldnn); 不支持: "int8"
precision: "fp32"
-```
+```
\ No newline at end of file
diff --git a/doc/Serving_Design_EN.md b/doc/Serving_Design_EN.md
index 895e55c5bf21c26dd55e5f509a392d0ec152195d..d7cd9ac8b5b88a8550f5a74c897bc15dc218b6bc 100644
--- a/doc/Serving_Design_EN.md
+++ b/doc/Serving_Design_EN.md
@@ -128,7 +128,7 @@ Paddle Serving uses a symmetric encryption algorithm to encrypt the model, and d

### 3.5 A/B Test

-After sufficient offline evaluation of the model, online A/B test is usually needed to decide whether to enable the service on a large scale. The following figure shows the basic structure of A/B test with Paddle Serving. After the client is configured with the corresponding configuration, the traffic will be automatically distributed to different servers to achieve A/B test. Please refer to [ABTEST in Paddle Serving](C++_Serving/ABTEST_EN.md) for specific examples.
+After sufficient offline evaluation of the model, online A/B test is usually needed to decide whether to enable the service on a large scale. The following figure shows the basic structure of A/B test with Paddle Serving. After the client is configured with the corresponding configuration, the traffic will be automatically distributed to different servers to achieve A/B test. Please refer to [ABTEST in Paddle Serving](C++_Serving/ABTest_EN.md) for specific examples.
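For reference, the start and stop commands covered by the doc/Serving_Configure_CN.md hunk above (which trims the start/stop sections) combine into the following minimal sketch; `serving_model`, `serving_model_1`, and `serving_model_2` are the doc's example model directories:

```BASH
# start C++ Serving with one model, 10 worker threads, listening on port 9292
python3 -m paddle_serving_server.serve --model serving_model --thread 10 --port 9292

# start a single service that composes two models
python3 -m paddle_serving_server.serve --model serving_model_1 serving_model_2 --thread 10 --port 9292

# stop the service: `stop` sends SIGINT to Serving; `kill` would send SIGKILL instead
python3 -m paddle_serving_server.serve stop
```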


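Similarly, the doc/Latest_Packages_CN.md hunk above points at the nightly develop-branch wheels. Installing one is a single pip call; in this sketch `<WHEEL_URL>` is a placeholder, since the wheel URL visible in that hunk's header is truncated and is not reproduced here:

```BASH
# a sketch: install a nightly develop-branch wheel listed in Latest_Packages_CN.md
# replace <WHEEL_URL> with the exact link for your Python version and hardware
pip3 install <WHEEL_URL>
```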
diff --git a/doc/Windows_Tutorial_CN.md b/doc/Windows_Tutorial_CN.md
index 76f87bae2c532a114031d6b15facfae217525b93..bb419fc7e314950815d4d5a5c902f0788403e7b1 100644
--- a/doc/Windows_Tutorial_CN.md
+++ b/doc/Windows_Tutorial_CN.md
@@ -1,6 +1,6 @@
## Windows平台使用Paddle Serving指导

-([English](./Windows_Turtial_EN.md)|简体中文)
+([English](./Windows_Tutorial_EN.md)|简体中文)

### 综述

diff --git a/doc/images/qq_group_1.png b/doc/images/qq_group_1.png
new file mode 100644
index 0000000000000000000000000000000000000000..7e1fff13b8ef3b81cec84fe3721dcc6ce01bc316
Binary files /dev/null and b/doc/images/qq_group_1.png differ
diff --git a/doc/images/wechat_group_1.jpeg b/doc/images/wechat_group_1.jpeg
new file mode 100644
index 0000000000000000000000000000000000000000..dd5c55e04d60f271c0d9d7e3bc9ee12ae92ea149
Binary files /dev/null and b/doc/images/wechat_group_1.jpeg differ
diff --git a/java/README.md b/java/README.md
index 9cf65ae627d4491952889a90999c6619ab851833..3a6da098bba55be4d87b666810ca8415107cb40e 100644
--- a/java/README.md
+++ b/java/README.md
@@ -73,7 +73,7 @@ java -cp paddle-serving-sdk-java-examples-0.0.1-jar-with-dependencies.jar Pipeli

1. In the example, all models (not pipeline) need to use `--use_multilang` to enable gRPC multi-programming-language support, and the port number is 9393. If you need another port, modify it in the java file.

-2.Currently Serving has launched the Pipeline mode (see [Pipeline Serving](../doc/PIPELINE_SERVING.md) for details). Pipeline Serving Client for Java is released.
+2. Currently Serving has launched the Pipeline mode (see [Pipeline Serving](../doc/Python_Pipeline/Pipeline_Design_EN.md) for details). Pipeline Serving Client for Java is released.

3. The parameters `ip` and `port` in PipelineClientExample.java (path: java/examples/src/main/java/[PipelineClientExample.java](./examples/src/main/java/PipelineClientExample.java)) need to match the corresponding pipeline server parameters `ip` and `port` defined in the config.yaml (taking the IMDB model ensemble as an example, path: python/examples/pipeline/imdb_model_ensemble/[config.yaml](../python/examples/pipeline/imdb_model_ensemble/config.yml))

diff --git a/java/README_CN.md b/java/README_CN.md
index ef53ac9b1b020940679db9fecbfe1d33111b79f1..6c2465ecceaea135795cb75ce87afb6e78b8f90e 100755
--- a/java/README_CN.md
+++ b/java/README_CN.md
@@ -100,7 +100,7 @@ java -cp paddle-serving-sdk-java-examples-0.0.1-jar-with-dependencies.jar Pipeli

1.在示例中,端口号都是9393,ip默认设置为了127.0.0.1表示本机,注意ip和port需要与Server端对应。

-2.目前Serving已推出Pipeline模式(原理详见[Pipeline Serving](../doc/PIPELINE_SERVING_CN.md)),面向Java的Pipeline Serving Client已发布。
+2.目前Serving已推出Pipeline模式(原理详见[Pipeline Serving](../doc/Python_Pipeline/Pipeline_Design_CN.md)),面向Java的Pipeline Serving Client已发布。

3.注意PipelineClientExample.java中的ip和port(位于java/examples/src/main/java/[PipelineClientExample.java](./examples/src/main/java/PipelineClientExample.java)),需要与对应Pipeline server的config.yaml文件中配置的ip和port相对应。(以IMDB model ensemble模型为例,位于python/examples/pipeline/imdb_model_ensemble/[config.yaml](../python/examples/pipeline/imdb_model_ensemble/config.yml))

diff --git a/python/examples/ocr/README.md b/python/examples/ocr/README.md
index 95cc210a7e68d5582e68460f2eec89419bf7fd7c..cf67fe81060100cb4e773fc93a1cb14a1180d2b8 100755
--- a/python/examples/ocr/README.md
+++ b/python/examples/ocr/README.md
@@ -101,7 +101,7 @@ python3 rec_web_client.py

## C++ OCR Service

-**Notice:** If you need to concatenate det model and rec model, and do pre-processing and post-processing in Paddle Serving C++ framework, you need to use the C++ server compiled with WITH_OPENCV option,see the [COMPILE.md](../../../doc/COMPILE.md)
+**Notice:** If you need to concatenate the det and rec models and do pre-processing and post-processing in the Paddle Serving C++ framework, you need to use a C++ server compiled with the WITH_OPENCV option; see [COMPILE.md](../../../doc/Compile_EN.md)

### Start Service
Select a startup mode according to your CPU / GPU device

diff --git a/python/examples/ocr/README_CN.md b/python/examples/ocr/README_CN.md
index 5c0734c94aa6d61e1fdb9e8f87d5ee187c805ff0..472b9423e8809ba06b15c3ced7a4e00fe2655b78 100755
--- a/python/examples/ocr/README_CN.md
+++ b/python/examples/ocr/README_CN.md
@@ -100,7 +100,7 @@ python3 rec_web_client.py
```

## C++ OCR Service服务
-**注意:** 若您需要使用Paddle Serving C++框架串联det模型和rec模型,并进行前后处理,您需要使用开启WITH_OPENCV选项编译的C++ Server,详见[COMPILE.md](../../../doc/COMPILE.md)
+**注意:** 若您需要使用Paddle Serving C++框架串联det模型和rec模型,并进行前后处理,您需要使用开启WITH_OPENCV选项编译的C++ Server,详见[COMPILE.md](../../../doc/Compile_CN.md)

### 启动服务
根据CPU/GPU设备选择一种启动方式
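Both OCR notices above call for a C++ server compiled with WITH_OPENCV. A hedged sketch of that build step, following the cmake/make pattern shown in the doc/Compile_CN.md hunk earlier (`-DSERVER=ON` and the out-of-source `build` directory are assumptions based on the compile docs; the full flag list is in Compile_EN.md / Compile_CN.md):

```BASH
# sketch: configure the C++ server build with OpenCV support, then compile
mkdir build && cd build
cmake -DSERVER=ON -DWITH_OPENCV=ON ..
make -j10
```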