From fd111299ac5b29f6e1d9d37755ad9accc359a9f2 Mon Sep 17 00:00:00 2001
From: TeslaZhao
Date: Mon, 15 Nov 2021 19:18:40 +0800
Subject: [PATCH] Merge pull request #1507 from ShiningZhang/develop

fix deadlink
---
 README.md                              | 2 +-
 README_CN.md                           | 6 +++---
 core/preprocess/hwvideoframe/README.md | 2 +-
 doc/Compile_CN.md                      | 4 ++--
 doc/Docker_Images_EN.md                | 2 +-
 doc/Latest_Packages_CN.md              | 2 +-
 doc/Serving_Design_EN.md               | 2 +-
 doc/Windows_Tutorial_CN.md             | 2 +-
 java/README.md                         | 2 +-
 java/README_CN.md                      | 2 +-
 python/examples/ocr/README.md          | 2 +-
 python/examples/ocr/README_CN.md       | 2 +-
 12 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/README.md b/README.md
index f0c86f08..0973e9c1 100755
--- a/README.md
+++ b/README.md
@@ -70,7 +70,7 @@ The first step is to call the model save interface to generate a model parameter
 - [Description of configuration and startup parameters](doc/Serving_Configure_EN.md)
 - [Guide for RESTful/gRPC/bRPC APIs](doc/C++_Serving/Introduction_CN.md)
 - [Infer on quantizative models](doc/Low_Precision_CN.md)
-- [Data format of classic models](doc/Process_Data_CN.md)
+- [Data format of classic models](doc/Process_data_CN.md)
 - [C++ Serving](doc/C++_Serving/Introduction_CN.md)
 - [protocols](doc/C++_Serving/Inference_Protocols_CN.md)
 - [Hot loading models](doc/C++_Serving/Hot_Loading_EN.md)
diff --git a/README_CN.md b/README_CN.md
index 397cb184..011be5b9 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -55,8 +55,8 @@ Paddle Serving依托深度学习框架PaddlePaddle旨在帮助深度学习开发
 此章节引导您完成安装和部署步骤,强烈推荐使用Docker部署Paddle Serving,如您不使用docker,省略docker相关步骤。在云服务器上可以使用Kubernetes部署Paddle Serving。在异构硬件如ARM CPU、昆仑XPU上编译或使用Paddle Serving可以下面的文档。每天编译生成develop分支的最新开发包供开发者使用。
 - [使用docker安装Paddle Serving](doc/Install_CN.md)
 - [源码编译安装Paddle Serving](doc/Compile_CN.md)
-- [在Kuberntes集群上部署Paddle Serving](doc/Run_On_Kubernetes.md)
-- [部署Paddle Serving安全网关](doc/Serving_Auth_Docker.md)
+- [在Kuberntes集群上部署Paddle Serving](doc/Run_On_Kubernetes_CN.md)
+- [部署Paddle Serving安全网关](doc/Serving_Auth_Docker_CN.md)
 - [在异构硬件部署Paddle Serving](doc/Run_On_XPU_CN.md)
 - [最新Wheel开发包](doc/Latest_Packages_CN.md)(develop分支每日更新)
 
@@ -136,7 +136,7 @@ Paddle Serving与Paddle模型套件紧密配合,实现大量服务化部署,
 
 > 贡献代码
 
-如果您想为Paddle Serving贡献代码,请参考 [Contribution Guidelines](doc/Contribute.md)
+如果您想为Paddle Serving贡献代码,请参考 [Contribution Guidelines](doc/Contribute_EN.md)
 - 感谢 [@loveululu](https://github.com/loveululu) 提供 Cube python API
 - 感谢 [@EtachGu](https://github.com/EtachGu) 更新 docker 使用命令
 - 感谢 [@BeyondYourself](https://github.com/BeyondYourself) 提供grpc教程,更新FAQ教程,整理文件目录。
diff --git a/core/preprocess/hwvideoframe/README.md b/core/preprocess/hwvideoframe/README.md
index 5d561c75..0a7355a4 100644
--- a/core/preprocess/hwvideoframe/README.md
+++ b/core/preprocess/hwvideoframe/README.md
@@ -54,7 +54,7 @@ Hwvideoframe provides a variety of data preprocessing methods for photo preproce
 
 ## Quick start
 
-[After compiling from code](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE.md),this project will be stored in reader。
+[After compiling from code](../../../doc/Compile_EN.md),this project will be stored in reader。
 
 ## How to Test
 
diff --git a/doc/Compile_CN.md b/doc/Compile_CN.md
index fca4627c..aadf9fa5 100644
--- a/doc/Compile_CN.md
+++ b/doc/Compile_CN.md
@@ -23,7 +23,7 @@
 | libSM | 1.2.2 |
 | libXrender | 0.9.10 |
 
-推荐使用Docker编译,我们已经为您准备好了Paddle Serving编译环境并配置好了上述编译依赖,详见[该文档](DOCKER_IMAGES_CN.md)。
+推荐使用Docker编译,我们已经为您准备好了Paddle Serving编译环境并配置好了上述编译依赖,详见[该文档](Docker_Images_CN.md)。
 
 ## 获取代码
 
@@ -158,7 +158,7 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
 make -j10
 ```
 
-**注意:** 编译成功后,需要设置`SERVING_BIN`路径,详见后面的[注意事项](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE_CN.md#注意事项)。
+**注意:** 编译成功后,需要设置`SERVING_BIN`路径,详见后面的[注意事项](#注意事项)。
 
 ## 编译Client部分
 
diff --git a/doc/Docker_Images_EN.md b/doc/Docker_Images_EN.md
index ad645708..c9b64eec 100644
--- a/doc/Docker_Images_EN.md
+++ b/doc/Docker_Images_EN.md
@@ -76,7 +76,7 @@ Develop Images:
 
 Running Images:
 
-Running Images is lighter than Develop Images, and Running Images are made up with serving whl and bin, but without develop tools like cmake because of lower image size. If you want to know about it, plese check the document [Paddle Serving on Kubernetes.](PADDLE_SERVING_ON_KUBERNETES.md).
+Running Images is lighter than Develop Images, and Running Images are made up with serving whl and bin, but without develop tools like cmake because of lower image size. If you want to know about it, plese check the document [Paddle Serving on Kubernetes.](Run_On_Kubernetes_CN.md).
 
 | ENV | Python Version | Tag |
 |------------------------------------------|----------------|-----------------------------|
diff --git a/doc/Latest_Packages_CN.md b/doc/Latest_Packages_CN.md
index fbeb00d7..022ae75a 100644
--- a/doc/Latest_Packages_CN.md
+++ b/doc/Latest_Packages_CN.md
@@ -41,7 +41,7 @@ https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.0.0-py3-n
 ```
 
 ## Baidu Kunlun user
-for kunlun user who uses arm-xpu or x86-xpu can download the wheel packages as follows. Users should use the xpu-beta docker [DOCKER IMAGES](./DOCKER_IMAGES.md)
+for kunlun user who uses arm-xpu or x86-xpu can download the wheel packages as follows. Users should use the xpu-beta docker [DOCKER IMAGES](./Docker_Images_CN.md)
 **We only support Python 3.6 for Kunlun Users.**
 
 ### Wheel Package Links
diff --git a/doc/Serving_Design_EN.md b/doc/Serving_Design_EN.md
index 895e55c5..d7cd9ac8 100644
--- a/doc/Serving_Design_EN.md
+++ b/doc/Serving_Design_EN.md
@@ -128,7 +128,7 @@ Paddle Serving uses a symmetric encryption algorithm to encrypt the model, and d
 
 ### 3.5 A/B Test
 
-After sufficient offline evaluation of the model, online A/B test is usually needed to decide whether to enable the service on a large scale. The following figure shows the basic structure of A/B test with Paddle Serving. After the client is configured with the corresponding configuration, the traffic will be automatically distributed to different servers to achieve A/B test. Please refer to [ABTEST in Paddle Serving](C++_Serving/ABTEST_EN.md) for specific examples.
+After sufficient offline evaluation of the model, online A/B test is usually needed to decide whether to enable the service on a large scale. The following figure shows the basic structure of A/B test with Paddle Serving. After the client is configured with the corresponding configuration, the traffic will be automatically distributed to different servers to achieve A/B test. Please refer to [ABTEST in Paddle Serving](C++_Serving/ABTest_EN.md) for specific examples.
 
 
 
diff --git a/doc/Windows_Tutorial_CN.md b/doc/Windows_Tutorial_CN.md
index 76f87bae..bb419fc7 100644
--- a/doc/Windows_Tutorial_CN.md
+++ b/doc/Windows_Tutorial_CN.md
@@ -1,6 +1,6 @@
 ## Windows平台使用Paddle Serving指导
 
-([English](./Windows_Turtial_EN.md)|简体中文)
+([English](./Windows_Tutorial_EN.md)|简体中文)
 
 ### 综述
 
diff --git a/java/README.md b/java/README.md
index 9cf65ae6..3a6da098 100644
--- a/java/README.md
+++ b/java/README.md
@@ -73,7 +73,7 @@ java -cp paddle-serving-sdk-java-examples-0.0.1-jar-with-dependencies.jar Pipeli
 
 1.In the example, all models(not pipeline) need to use `--use_multilang` to start GRPC multi-programming language support, and the port number is 9393. If you need another port, you need to modify it in the java file
 
-2.Currently Serving has launched the Pipeline mode (see [Pipeline Serving](../doc/PIPELINE_SERVING.md) for details). Pipeline Serving Client for Java is released.
+2.Currently Serving has launched the Pipeline mode (see [Pipeline Serving](../doc/Python_Pipeline/Pipeline_Design_EN.md) for details). Pipeline Serving Client for Java is released.
 
 3.The parameters`ip` and`port` in PipelineClientExample.java(path:java/examples/src/main/java/[PipelineClientExample.java](./examples/src/main/java/PipelineClientExample.java)),needs to be connected with the corresponding pipeline server parameters`ip` and`port` which is defined in the config.yaml(Taking IMDB model ensemble as an example,path:python/examples/pipeline/imdb_model_ensemble/[config.yaml](../python/examples/pipeline/imdb_model_ensemble/config.yml))
 
diff --git a/java/README_CN.md b/java/README_CN.md
index ef53ac9b..6c2465ec 100755
--- a/java/README_CN.md
+++ b/java/README_CN.md
@@ -100,7 +100,7 @@ java -cp paddle-serving-sdk-java-examples-0.0.1-jar-with-dependencies.jar Pipeli
 
 1.在示例中,端口号都是9393,ip默认设置为了127.0.0.1表示本机,注意ip和port需要与Server端对应。
 
-2.目前Serving已推出Pipeline模式(原理详见[Pipeline Serving](../doc/PIPELINE_SERVING_CN.md)),面向Java的Pipeline Serving Client已发布。
+2.目前Serving已推出Pipeline模式(原理详见[Pipeline Serving](../doc/Python_Pipeline/Pipeline_Design_CN.md)),面向Java的Pipeline Serving Client已发布。
 
 3.注意PipelineClientExample.java中的ip和port(位于java/examples/src/main/java/[PipelineClientExample.java](./examples/src/main/java/PipelineClientExample.java)),需要与对应Pipeline server的config.yaml文件中配置的ip和port相对应。(以IMDB model ensemble模型为例,位于python/examples/pipeline/imdb_model_ensemble/[config.yaml](../python/examples/pipeline/imdb_model_ensemble/config.yml))
 
diff --git a/python/examples/ocr/README.md b/python/examples/ocr/README.md
index 95cc210a..cf67fe81 100755
--- a/python/examples/ocr/README.md
+++ b/python/examples/ocr/README.md
@@ -101,7 +101,7 @@ python3 rec_web_client.py
 
 ## C++ OCR Service
 
-**Notice:** If you need to concatenate det model and rec model, and do pre-processing and post-processing in Paddle Serving C++ framework, you need to use the C++ server compiled with WITH_OPENCV option,see the [COMPILE.md](../../../doc/COMPILE.md)
+**Notice:** If you need to concatenate det model and rec model, and do pre-processing and post-processing in Paddle Serving C++ framework, you need to use the C++ server compiled with WITH_OPENCV option,see the [COMPILE.md](../../../doc/Compile_EN.md)
 
 ### Start Service
 Select a startup mode according to CPU / GPU device
diff --git a/python/examples/ocr/README_CN.md b/python/examples/ocr/README_CN.md
index 5c0734c9..472b9423 100755
--- a/python/examples/ocr/README_CN.md
+++ b/python/examples/ocr/README_CN.md
@@ -100,7 +100,7 @@ python3 rec_web_client.py
 ```
 ## C++ OCR Service服务
 
-**注意:** 若您需要使用Paddle Serving C++框架串联det模型和rec模型,并进行前后处理,您需要使用开启WITH_OPENCV选项编译的C++ Server,详见[COMPILE.md](../../../doc/COMPILE.md)
+**注意:** 若您需要使用Paddle Serving C++框架串联det模型和rec模型,并进行前后处理,您需要使用开启WITH_OPENCV选项编译的C++ Server,详见[COMPILE.md](../../../doc/Compile_CN.md)
 
 ### 启动服务
 根据CPU/GPU设备选择一种启动方式
-- 
GitLab