From 03296fc58bbb61f97e7f015983aa1d2081d5bf44 Mon Sep 17 00:00:00 2001
From: TeslaZhao
Date: Tue, 16 Nov 2021 11:33:47 +0800
Subject: [PATCH] Update README.md

---
 README.md | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/README.md b/README.md
index acd3c21c..302082fb 100755
--- a/README.md
+++ b/README.md
@@ -57,7 +57,7 @@ This chapter guides you through the installation and deployment steps. It is str
 - [Install Paddle Serving using docker](doc/Install_EN.md)
 - [Build Paddle Serving from Source with Docker](doc/Compile_EN.md)
 - [Deploy Paddle Serving on Kubernetes](doc/Run_On_Kubernetes_CN.md)
-- [Deploy Paddle Serving with Security gateway](doc/Serving_Auth_Docker_CN.md)
+- [Deploy Paddle Serving with Security gateway](doc/Serving_Auth_Docker_CN.md)(Chinese)
 - [Deploy Paddle Serving on more hardwares](doc/Run_On_XPU_EN.md)
 - [Latest Wheel packages](doc/Latest_Packages_CN.md)(Update everyday on branch develop)
@@ -68,23 +68,23 @@ The first step is to call the model save interface to generate a model parameter
 - [Quick Start](doc/Quick_Start_EN.md)
 - [Save a servable model](doc/Save_EN.md)
 - [Description of configuration and startup parameters](doc/Serving_Configure_EN.md)
-- [Guide for RESTful/gRPC/bRPC APIs](doc/C++_Serving/Introduction_CN.md)
-- [Infer on quantizative models](doc/Low_Precision_CN.md)
-- [Data format of classic models](doc/Process_data_CN.md)
-- [C++ Serving](doc/C++_Serving/Introduction_CN.md)
-  - [protocols](doc/C++_Serving/Inference_Protocols_CN.md)
+- [Guide for RESTful/gRPC/bRPC APIs(Chinese)](doc/C++_Serving/Introduction_CN.md#42-多语言多协议Client)
+- [Infer on quantized models](doc/Low_Precision_EN.md)
+- [Data format of classic models(Chinese)](doc/Process_data_CN.md)
+- [C++ Serving(Chinese)](doc/C++_Serving/Introduction_CN.md)
+  - [Protocols(Chinese)](doc/C++_Serving/Inference_Protocols_CN.md)
   - [Hot loading models](doc/C++_Serving/Hot_Loading_EN.md)
   - [A/B Test](doc/C++_Serving/ABTest_EN.md)
   - [Encryption](doc/C++_Serving/Encryption_EN.md)
   - [Analyze and optimize performance(Chinese)](doc/C++_Serving/Performance_Tuning_CN.md)
   - [Benchmark(Chinese)](doc/C++_Serving/Benchmark_CN.md)
 - [Python Pipeline](doc/Python_Pipeline/Pipeline_Design_EN.md)
-  - [Analyze and optimize performance](doc/Python_Pipeline/Pipeline_Design_EN.md)
+  - [Analyze and optimize performance](doc/Python_Pipeline/Performance_Tuning_EN.md)
   - [Benchmark(Chinese)](doc/Python_Pipeline/Benchmark_CN.md)
 - Client SDK
-  - [Python SDK(Chinese)](doc/C++_Serving/Http_Service_CN.md)
+  - [Python SDK(Chinese)](doc/C++_Serving/Introduction_CN.md#42-多语言多协议Client)
   - [JAVA SDK](doc/Java_SDK_EN.md)
-  - [C++ SDK(Chinese)](doc/C++_Serving/Creat_C++Serving_CN.md)
+  - [C++ SDK(Chinese)](doc/C++_Serving/Introduction_CN.md#42-多语言多协议Client)
 - [Large-scale sparse parameter server](doc/Cube_Local_EN.md)
@@ -111,10 +111,10 @@ Paddle Serving works closely with the Paddle model suite, and implements a large

 For more model examples, read [Model zoo](doc/Model_Zoo_EN.md)
Community

@@ -123,15 +123,18 @@ If you want to communicate with developers and other users? Welcome to join us,

 ### Wechat
 - WeChat scavenging
 ### QQ
 - PaddlePaddle inference deployment discussion group (Group No.: 697765514)
 ### Slack
- GitLab