From 7cc5fcfa085d28082e8b0405783aca6bcac8dc06 Mon Sep 17 00:00:00 2001
From: TeslaZhao
Date: Mon, 15 Nov 2021 15:07:23 +0800
Subject: [PATCH] Merge pull request #1506 from TeslaZhao/develop

update readme
---
 README.md    | 8 ++++----
 README_CN.md | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 87288d7e..f0c86f08 100755
--- a/README.md
+++ b/README.md
@@ -56,8 +56,8 @@ This chapter guides you through the installation and deployment steps. It is str
 
 - [Install Paddle Serving using docker](doc/Install_EN.md)
 - [Build Paddle Serving from Source with Docker](doc/Compile_EN.md)
-- [Deploy Paddle Serving on Kubernetes](doc/Run_On_Kubernetes_EN.md)
-- [Deploy Paddle Serving with Security gateway](doc/Serving_Auth_Docker.md)
+- [Deploy Paddle Serving on Kubernetes](doc/Run_On_Kubernetes_CN.md)
+- [Deploy Paddle Serving with Security gateway](doc/Serving_Auth_Docker_CN.md)
 - [Deploy Paddle Serving on more hardware](doc/Run_On_XPU_EN.md)
 - [Latest Wheel packages](doc/Latest_Packages_CN.md) (updated daily on the develop branch)
 
@@ -68,10 +68,10 @@ The first step is to call the model save interface to generate a model parameter
 - [Quick Start](doc/Quick_Start_EN.md)
 - [Save a servable model](doc/Save_EN.md)
 - [Description of configuration and startup parameters](doc/Serving_Configure_EN.md)
-- [Guide for RESTful/gRPC/bRPC APIs](doc/C++_Serving/Http_Service_EN.md)
+- [Guide for RESTful/gRPC/bRPC APIs](doc/C++_Serving/Introduction_CN.md)
 - [Inference on quantized models](doc/Low_Precision_CN.md)
 - [Data format of classic models](doc/Process_Data_CN.md)
-- [C++ Serving](doc/C++_Serving/Introduction_EN.md)
+- [C++ Serving](doc/C++_Serving/Introduction_CN.md)
   - [protocols](doc/C++_Serving/Inference_Protocols_CN.md)
   - [Hot loading models](doc/C++_Serving/Hot_Loading_EN.md)
   - [A/B Test](doc/C++_Serving/ABTest_EN.md)
diff --git a/README_CN.md b/README_CN.md
index d17725aa..397cb184 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -64,9 +64,9 @@
 After installing Paddle Serving, the Quick Start will guide you through running Serving. First, call the model save interface to generate the model parameter configuration files (.prototxt) used by both the client and the server; second, read the configuration and startup parameters and launch the service; third, write a client request based on the SDK according to the APIs and your use case, then test the inference service. To learn more features and how to use them, please read the documents below in detail.
 
 - [Quick Start](doc/Quick_Start_CN.md)
-- [Save models and configurations for Paddle Serving](doc/SAVE_CN.md)
+- [Save models and configurations for Paddle Serving](doc/Save_CN.md)
 - [Description of configuration and startup parameters](doc/Serving_Configure_CN.md)
-- [Guide for RESTful/gRPC/bRPC APIs](doc/C++_Serving/Http_Service_CN.md)
+- [Guide for RESTful/gRPC/bRPC APIs](doc/C++_Serving/Introduction_CN.md#4.Client端特性)
 - [Low-precision inference](doc/Low_Precision_CN.md)
 - [Data processing for common models](doc/Process_data_CN.md)
 - [Introduction to C++ Serving](doc/C++_Serving/Introduction_CN.md)
--
GitLab
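
Both README paragraphs touched by this patch describe the same three-step Quick Start flow: save a servable model (which generates the .prototxt configuration files), start the server with the startup parameters, then send a request through the client SDK. Below is a minimal sketch of step 3, assuming the public uci_housing demo model from the Paddle Serving examples; the port, the feed name "x", and the fetch name "price" are taken from that demo and are not part of this patch.

```python
# Step 1 (save) produces serving_server/ and serving_client/ directories,
# each containing a serving_*_conf.prototxt configuration file.
# Step 2 (serve) starts the server from a shell, e.g.:
#   python -m paddle_serving_server.serve --model uci_housing_model --port 9292
# Step 3 (predict), sketched here, sends one request through the client SDK.
import numpy as np
from paddle_serving_client import Client

client = Client()
# Load the client-side .prototxt generated by the model save interface.
client.load_client_config("uci_housing_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])

# One 13-feature sample for the UCI housing demo; the values are illustrative.
x = np.array([[0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0501, -0.0015,
               -0.0425, 0.0073, 0.6680, -0.2032, 0.8262, -0.0832]],
             dtype="float32")
fetch_map = client.predict(feed={"x": x}, fetch=["price"])
print(fetch_map)  # e.g. {"price": array([[...]], dtype=float32)}
```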