diff --git a/README.md b/README.md
index f0c86f081fe0dedc493f744c3dcf09640afbe999..0973e9c11ca28b4680a7deb9adeee4abf27e0d25 100755
--- a/README.md
+++ b/README.md
@@ -70,7 +70,7 @@ The first step is to call the model save interface to generate a model parameter
 - [Description of configuration and startup parameters](doc/Serving_Configure_EN.md)
 - [Guide for RESTful/gRPC/bRPC APIs](doc/C++_Serving/Introduction_CN.md)
 - [Infer on quantizative models](doc/Low_Precision_CN.md)
-- [Data format of classic models](doc/Process_Data_CN.md)
+- [Data format of classic models](doc/Process_data_CN.md)
 - [C++ Serving](doc/C++_Serving/Introduction_CN.md)
 - [protocols](doc/C++_Serving/Inference_Protocols_CN.md)
 - [Hot loading models](doc/C++_Serving/Hot_Loading_EN.md)
diff --git a/README_CN.md b/README_CN.md
index 397cb184fbf85545e7290aed2eca7a15c54ff624..011be5b9cea7a47f54e8ed2e289061c95d86b4c3 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -55,8 +55,8 @@ Paddle Serving依托深度学习框架PaddlePaddle旨在帮助深度学习开发
 此章节引导您完成安装和部署步骤，强烈推荐使用Docker部署Paddle Serving，如您不使用docker，省略docker相关步骤。在云服务器上可以使用Kubernetes部署Paddle Serving。在异构硬件如ARM CPU、昆仑XPU上编译或使用Paddle Serving可以下面的文档。每天编译生成develop分支的最新开发包供开发者使用。
 - [使用docker安装Paddle Serving](doc/Install_CN.md)
 - [源码编译安装Paddle Serving](doc/Compile_CN.md)
-- [在Kuberntes集群上部署Paddle Serving](doc/Run_On_Kubernetes.md)
-- [部署Paddle Serving安全网关](doc/Serving_Auth_Docker.md)
+- [在Kubernetes集群上部署Paddle Serving](doc/Run_On_Kubernetes_CN.md)
+- [部署Paddle Serving安全网关](doc/Serving_Auth_Docker_CN.md)
 - [在异构硬件部署Paddle Serving](doc/Run_On_XPU_CN.md)
 - [最新Wheel开发包](doc/Latest_Packages_CN.md)(develop分支每日更新)

@@ -136,7 +136,7 @@ Paddle Serving与Paddle模型套件紧密配合，实现大量服务化部署，

 > 贡献代码

-如果您想为Paddle Serving贡献代码，请参考 [Contribution Guidelines](doc/Contribute.md)
+如果您想为Paddle Serving贡献代码，请参考 [Contribution Guidelines](doc/Contribute_EN.md)
 - 感谢 [@loveululu](https://github.com/loveululu) 提供 Cube python API
 - 感谢 [@EtachGu](https://github.com/EtachGu) 更新 docker 使用命令
 - 感谢 [@BeyondYourself](https://github.com/BeyondYourself) 提供grpc教程，更新FAQ教程，整理文件目录。
diff --git a/core/preprocess/hwvideoframe/README.md b/core/preprocess/hwvideoframe/README.md
index 5d561c75fc6b891f9752a303630b3914dce7998b..0a7355a43ccef033c0fef993ba1cc50f5a4bd46a 100644
--- a/core/preprocess/hwvideoframe/README.md
+++ b/core/preprocess/hwvideoframe/README.md
@@ -54,7 +54,7 @@ Hwvideoframe provides a variety of data preprocessing methods for photo preproce

 ## Quick start

-[After compiling from code](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE.md),this project will be stored in reader。
+[After compiling from code](../../../doc/Compile_EN.md), this project will be stored in `reader`.

 ## How to Test

diff --git a/doc/Compile_CN.md b/doc/Compile_CN.md
index fca4627cc40f08227ce2628841dc6da9b3eddebd..aadf9fa5124c8876301c83fd043e686f78a1820c 100644
--- a/doc/Compile_CN.md
+++ b/doc/Compile_CN.md
@@ -23,7 +23,7 @@
 | libSM | 1.2.2 |
 | libXrender | 0.9.10 |

-推荐使用Docker编译，我们已经为您准备好了Paddle Serving编译环境并配置好了上述编译依赖，详见[该文档](DOCKER_IMAGES_CN.md)。
+推荐使用Docker编译，我们已经为您准备好了Paddle Serving编译环境并配置好了上述编译依赖，详见[该文档](Docker_Images_CN.md)。

 ## 获取代码

@@ -158,7 +158,7 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
 make -j10
 ```

-**注意:** 编译成功后，需要设置`SERVING_BIN`路径，详见后面的[注意事项](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE_CN.md#注意事项)。
+**注意:** 编译成功后，需要设置`SERVING_BIN`路径，详见后面的[注意事项](#注意事项)。

 ## 编译Client部分

diff --git a/doc/Docker_Images_EN.md b/doc/Docker_Images_EN.md
index ad64570830b854bb4affd192dbebb9303ce9a5e7..c9b64eecc75c1b24626151dd52920b67043b2304 100644
--- a/doc/Docker_Images_EN.md
+++ b/doc/Docker_Images_EN.md
@@ -76,7 +76,7 @@ Develop Images:

 Running Images:

-Running Images is lighter than Develop Images, and Running Images are made up with serving whl and bin, but without develop tools like cmake because of lower image size. If you want to know about it, plese check the document [Paddle Serving on Kubernetes.](PADDLE_SERVING_ON_KUBERNETES.md).
+Running Images are lighter than Develop Images: they contain only the serving whl and bin, without develop tools such as cmake, which keeps the image size down. For details, please check the document [Paddle Serving on Kubernetes](Run_On_Kubernetes_CN.md).

 | ENV | Python Version | Tag |
 |------------------------------------------|----------------|-----------------------------|
diff --git a/doc/Latest_Packages_CN.md b/doc/Latest_Packages_CN.md
index fbeb00d7c2160afa8d52f55abf1de2d2f24a830e..022ae75ab824ed8462f876f8d57b9097720cc18d 100644
--- a/doc/Latest_Packages_CN.md
+++ b/doc/Latest_Packages_CN.md
@@ -41,7 +41,7 @@ https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.0.0-py3-n
 ```

 ## Baidu Kunlun user
-for kunlun user who uses arm-xpu or x86-xpu can download the wheel packages as follows. Users should use the xpu-beta docker [DOCKER IMAGES](./DOCKER_IMAGES.md)
+Kunlun users who use arm-xpu or x86-xpu can download the wheel packages as follows. Users should use the xpu-beta docker image, see [DOCKER IMAGES](./Docker_Images_CN.md)
 **We only support Python 3.6 for Kunlun Users.**

 ### Wheel Package Links
diff --git a/doc/Serving_Design_EN.md b/doc/Serving_Design_EN.md
index 895e55c5bf21c26dd55e5f509a392d0ec152195d..d7cd9ac8b5b88a8550f5a74c897bc15dc218b6bc 100644
--- a/doc/Serving_Design_EN.md
+++ b/doc/Serving_Design_EN.md
@@ -128,7 +128,7 @@ Paddle Serving uses a symmetric encryption algorithm to encrypt the model, and d

 ### 3.5 A/B Test

-After sufficient offline evaluation of the model, online A/B test is usually needed to decide whether to enable the service on a large scale. The following figure shows the basic structure of A/B test with Paddle Serving. After the client is configured with the corresponding configuration, the traffic will be automatically distributed to different servers to achieve A/B test. Please refer to [ABTEST in Paddle Serving](C++_Serving/ABTEST_EN.md) for specific examples.
+After sufficient offline evaluation of the model, an online A/B test is usually needed to decide whether to enable the service on a large scale. The following figure shows the basic structure of an A/B test with Paddle Serving. Once the client is configured accordingly, traffic is automatically distributed to different servers to achieve the A/B test. Please refer to [ABTEST in Paddle Serving](C++_Serving/ABTest_EN.md) for specific examples.

diff --git a/doc/Windows_Tutorial_CN.md b/doc/Windows_Tutorial_CN.md
index 76f87bae2c532a114031d6b15facfae217525b93..bb419fc7e314950815d4d5a5c902f0788403e7b1 100644
--- a/doc/Windows_Tutorial_CN.md
+++ b/doc/Windows_Tutorial_CN.md
@@ -1,6 +1,6 @@
 ## Windows平台使用Paddle Serving指导

-([English](./Windows_Turtial_EN.md)|简体中文)
+([English](./Windows_Tutorial_EN.md)|简体中文)

 ### 综述

diff --git a/java/README.md b/java/README.md
index 9cf65ae627d4491952889a90999c6619ab851833..3a6da098bba55be4d87b666810ca8415107cb40e 100644
--- a/java/README.md
+++ b/java/README.md
@@ -73,7 +73,7 @@ java -cp paddle-serving-sdk-java-examples-0.0.1-jar-with-dependencies.jar Pipeli

 1.In the example, all models(not pipeline) need to use `--use_multilang` to start GRPC multi-programming language support, and the port number is 9393. If you need another port, you need to modify it in the java file

-2.Currently Serving has launched the Pipeline mode (see [Pipeline Serving](../doc/PIPELINE_SERVING.md) for details). Pipeline Serving Client for Java is released.
+2.Currently Serving has launched the Pipeline mode (see [Pipeline Serving](../doc/Python_Pipeline/Pipeline_Design_EN.md) for details). Pipeline Serving Client for Java is released.

 3.The parameters`ip` and`port` in PipelineClientExample.java(path:java/examples/src/main/java/[PipelineClientExample.java](./examples/src/main/java/PipelineClientExample.java)),needs to be connected with the corresponding pipeline server parameters`ip` and`port` which is defined in the config.yaml(Taking IMDB model ensemble as an example,path:python/examples/pipeline/imdb_model_ensemble/[config.yaml](../python/examples/pipeline/imdb_model_ensemble/config.yml))

diff --git a/java/README_CN.md b/java/README_CN.md
index ef53ac9b1b020940679db9fecbfe1d33111b79f1..6c2465ecceaea135795cb75ce87afb6e78b8f90e 100755
--- a/java/README_CN.md
+++ b/java/README_CN.md
@@ -100,7 +100,7 @@ java -cp paddle-serving-sdk-java-examples-0.0.1-jar-with-dependencies.jar Pipeli

 1.在示例中，端口号都是9393，ip默认设置为了127.0.0.1表示本机，注意ip和port需要与Server端对应。

-2.目前Serving已推出Pipeline模式（原理详见[Pipeline Serving](../doc/PIPELINE_SERVING_CN.md)），面向Java的Pipeline Serving Client已发布。
+2.目前Serving已推出Pipeline模式（原理详见[Pipeline Serving](../doc/Python_Pipeline/Pipeline_Design_CN.md)），面向Java的Pipeline Serving Client已发布。

 3.注意PipelineClientExample.java中的ip和port（位于java/examples/src/main/java/[PipelineClientExample.java](./examples/src/main/java/PipelineClientExample.java)），需要与对应Pipeline server的config.yaml文件中配置的ip和port相对应。（以IMDB model ensemble模型为例，位于python/examples/pipeline/imdb_model_ensemble/[config.yaml](../python/examples/pipeline/imdb_model_ensemble/config.yml)）

diff --git a/python/examples/ocr/README.md b/python/examples/ocr/README.md
index 95cc210a7e68d5582e68460f2eec89419bf7fd7c..cf67fe81060100cb4e773fc93a1cb14a1180d2b8 100755
--- a/python/examples/ocr/README.md
+++ b/python/examples/ocr/README.md
@@ -101,7 +101,7 @@ python3 rec_web_client.py

 ## C++ OCR Service

-**Notice:** If you need to concatenate det model and rec model, and do pre-processing and post-processing in Paddle Serving C++ framework, you need to use the C++ server compiled with WITH_OPENCV option,see the [COMPILE.md](../../../doc/COMPILE.md)
+**Notice:** If you need to concatenate the det model and the rec model, and do pre-processing and post-processing in the Paddle Serving C++ framework, you need to use the C++ server compiled with the WITH_OPENCV option, see [COMPILE.md](../../../doc/Compile_EN.md)

 ### Start Service
 Select a startup mode according to CPU / GPU device
diff --git a/python/examples/ocr/README_CN.md b/python/examples/ocr/README_CN.md
index 5c0734c94aa6d61e1fdb9e8f87d5ee187c805ff0..472b9423e8809ba06b15c3ced7a4e00fe2655b78 100755
--- a/python/examples/ocr/README_CN.md
+++ b/python/examples/ocr/README_CN.md
@@ -100,7 +100,7 @@ python3 rec_web_client.py
 ```

 ## C++ OCR Service服务
-**注意:** 若您需要使用Paddle Serving C++框架串联det模型和rec模型，并进行前后处理，您需要使用开启WITH_OPENCV选项编译的C++ Server，详见[COMPILE.md](../../../doc/COMPILE.md)
+**注意:** 若您需要使用Paddle Serving C++框架串联det模型和rec模型，并进行前后处理，您需要使用开启WITH_OPENCV选项编译的C++ Server，详见[COMPILE.md](../../../doc/Compile_CN.md)

 ### 启动服务
 根据CPU/GPU设备选择一种启动方式
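The hunks above only retarget relative documentation links. A minimal way to sanity-check them after applying the patch is a small link checker run from the repository root; the `check_doc_links.py` helper below is an illustrative sketch, not a file in the repository.

```python
#!/usr/bin/env python3
"""Hypothetical helper: verify that relative markdown link targets resolve.

Illustrative only -- not part of this patch. Run from the repo root, e.g.:
    python3 check_doc_links.py README.md doc/Compile_CN.md java/README.md
"""
import os
import re
import sys

# Capture the target of [text](target); stop at ')', '#' (anchors), or spaces,
# so pure in-page anchors like [注意事项](#注意事项) are skipped entirely.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#\s]+)")

def broken_links(md_path):
    """Yield relative link targets in md_path that do not exist on disk."""
    base = os.path.dirname(md_path)
    with open(md_path, encoding="utf-8") as f:
        for target in LINK_RE.findall(f.read()):
            if target.startswith(("http://", "https://")):
                continue  # external URLs are out of scope for this check
            if not os.path.exists(os.path.join(base, target)):
                yield target

if __name__ == "__main__":
    failed = False
    for path in sys.argv[1:] or ["README.md"]:
        for target in broken_links(path):
            failed = True
            print(f"{path}: broken link -> {target}")
    sys.exit(1 if failed else 0)
```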