diff --git a/README.md b/README.md
index 8184a3b0dfa7271c96b9b34fd7ed1a1900529a4c..4b63ce9c8aead852a76404f4268fc8131c060729 100644
--- a/README.md
+++ b/README.md
@@ -42,7 +42,7 @@ We consider deploying deep learning inference service online to be a user-facing
 - Any model trained by [PaddlePaddle](https://github.com/paddlepaddle/paddle) can be directly used or [Model Conversion Interface](./doc/SAVE.md) for online deployment of Paddle Serving.
 - Support [Multi-model Pipeline Deployment](./doc/PIPELINE_SERVING.md), and provide the requirements of the REST interface and RPC interface itself, [Pipeline example](./python/examples/pipeline).
-- Support the model zoos from the Paddle ecosystem, such as [PaddleDetection](./python/examples/detection), [PaddleOCR](./python/examples/ocr), [PaddleRec](https://github.com/PaddlePaddle/PaddleRec/tree/master/tools/recserving/movie_recommender).
+- Support the model zoos from the Paddle ecosystem, such as [PaddleDetection](./python/examples/detection), [PaddleOCR](./python/examples/ocr), [PaddleRec](https://github.com/PaddlePaddle/PaddleRec/tree/master/recserving/movie_recommender).
 - Provide a variety of pre-processing and post-processing to facilitate users in training, deployment and other stages of related code, bridging the gap between AI developers and application developers, please refer to [Serving Examples](./python/examples/).
diff --git a/README_CN.md b/README_CN.md
index d166d7c0ffb558ae309afb1fec572ad79ab5f679..4ee2c9863dfbc6f4531d0ec00ca92aacc19e769e 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -44,7 +44,7 @@ Paddle Serving 旨在帮助深度学习开发者轻易部署在线预测服务
 - 任何经过[PaddlePaddle](https://github.com/paddlepaddle/paddle)训练的模型,都可以经过直接保存或是[模型转换接口](./doc/SAVE_CN.md),用于Paddle Serving在线部署。
 - 支持[多模型串联服务部署](./doc/PIPELINE_SERVING_CN.md), 同时提供Rest接口和RPC接口以满足您的需求,[Pipeline示例](./python/examples/pipeline)。
-- 支持Paddle生态的各大模型库, 例如[PaddleDetection](./python/examples/detection),[PaddleOCR](./python/examples/ocr),[PaddleRec](https://github.com/PaddlePaddle/PaddleRec/tree/master/tools/recserving/movie_recommender)。
+- 支持Paddle生态的各大模型库, 例如[PaddleDetection](./python/examples/detection),[PaddleOCR](./python/examples/ocr),[PaddleRec](https://github.com/PaddlePaddle/PaddleRec/tree/master/recserving/movie_recommender)。
 - 提供丰富多彩的前后处理,方便用户在训练、部署等各阶段复用相关代码,弥合AI开发者和应用开发者之间的鸿沟,详情参考[模型示例](./python/examples/)。
diff --git a/doc/ABTEST_IN_PADDLE_SERVING.md b/doc/ABTEST_IN_PADDLE_SERVING.md
index 09d13c12583ddfa0f1767cf46a309cac5ef86867..71cd267f76705583fed0ffbb57fda7a1039cbba6 100644
--- a/doc/ABTEST_IN_PADDLE_SERVING.md
+++ b/doc/ABTEST_IN_PADDLE_SERVING.md
@@ -4,7 +4,7 @@
This document will use an example of text classification task based on IMDB dataset to show how to build a A/B Test framework using Paddle Serving. The structure relationship between the client and servers in the example is shown in the figure below.
-
+
Note that: A/B Test is only applicable to RPC mode, not web mode.
@@ -88,7 +88,7 @@ with open('processed.data') as f:
cnt[tag]['total'] += 1
for tag, data in cnt.items():
- print('[{}](total: {}) acc: {}'.format(tag, data['total'], float(data['acc']) / float(data['total'])))
+ print('[{}](total: {}) acc: {}'.format(tag, data['total'], float(data['acc']) / float(data['total'])))
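The hunk above touches the per-variant accuracy tally from the A/B test walkthrough. As a self-contained reference, the tally logic amounts to the sketch below; the original reads `processed.data`, and the `tag<TAB>prediction<TAB>label` line format and the `v1`/`v2` variant tags here are assumptions for illustration.

```python
# Sketch of the per-variant accuracy tally from the A/B test doc.
# Assumed line format (illustrative): "tag\tprediction\tlabel".
def tally(lines):
    cnt = {}
    for line in lines:
        tag, pred, label = line.strip().split('\t')
        cnt.setdefault(tag, {'acc': 0, 'total': 0})
        if pred == label:
            cnt[tag]['acc'] += 1   # count correct predictions per variant
        cnt[tag]['total'] += 1     # count all requests per variant
    return cnt

if __name__ == '__main__':
    sample = ['v1\t1\t1', 'v1\t0\t1', 'v2\t1\t1']
    for tag, data in tally(sample).items():
        print('[{}](total: {}) acc: {}'.format(
            tag, data['total'], float(data['acc']) / float(data['total'])))
```

Dividing with `float(...)` keeps the ratio exact under Python 2 as well, which matters for older Paddle Serving environments.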
diff --git a/doc/SERVER_DAG_CN.md b/doc/SERVER_DAG_CN.md
index 80d01f0287c5f721f093e96c7bcd1827f0601496..16e53bc6a98af6abd4a114137f2e72593242afcd 100644
--- a/doc/SERVER_DAG_CN.md
+++ b/doc/SERVER_DAG_CN.md
@@ -47,7 +47,7 @@ python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --po
### 包含多个输入的节点
-在[Paddle Serving中的集成预测](MODEL_ENSEMBLE_IN_PADDLE_SERVING_CN.md)文档中给出了一个包含多个输入节点的样例,示意图和代码如下。
+在[Paddle Serving中的集成预测](./deprecated/MODEL_ENSEMBLE_IN_PADDLE_SERVING_CN.md)文档中给出了一个包含多个输入节点的样例,示意图和代码如下。
diff --git a/doc/TENSOR_RT.md b/doc/TENSOR_RT.md
index 6e53a6ff029df6a46080d656a6dc9db95a9633e3..7504646fea750572cde472ebfb6178989b542ec1 100644
--- a/doc/TENSOR_RT.md
+++ b/doc/TENSOR_RT.md
@@ -1,6 +1,6 @@
## Paddle Serving uses TensorRT
-(English|[简体中文]((./TENSOR_RT_CN.md)))
+(English|[简体中文](./TENSOR_RT_CN.md))
### Background
diff --git a/doc/WINDOWS_TUTORIAL_CN.md b/doc/WINDOWS_TUTORIAL_CN.md
index 4184840f4e5646fcd998dfa33b80b8b9210b05d7..143d3b22ff0d2a6c9b35542ac301fd2a906a0962 100644
--- a/doc/WINDOWS_TUTORIAL_CN.md
+++ b/doc/WINDOWS_TUTORIAL_CN.md
@@ -14,7 +14,7 @@
**安装Git工具**: 详情参见[Git官网](https://git-scm.com/downloads)
-**安装必要的C++库(可选)**:部分用户可能会在`import paddle`阶段遇见dll无法链接的问题,建议可以[安装Visual Studio社区版本](`https://visualstudio.microsoft.com/`) ,并且安装C++的相关组件。
+**安装必要的C++库(可选)**:部分用户可能会在`import paddle`阶段遇见dll无法链接的问题,建议[安装Visual Studio社区版本](https://visualstudio.microsoft.com/) ,并且安装C++的相关组件。
**安装Paddle和Serving**:在Powershell,执行
diff --git a/doc/deprecated/DESIGN.md b/doc/deprecated/DESIGN.md
index d14bb0569b5b9e236367d1f4eb61d1774c100511..a6f17c044549908d739e52354d4e3aba92fa0c19 100644
--- a/doc/deprecated/DESIGN.md
+++ b/doc/deprecated/DESIGN.md
@@ -115,7 +115,7 @@ Server instance perspective

-Paddle Serving instances can load multiple models at the same time, and each model uses a Service (and its configured workflow) to undertake services. You can refer to [service configuration file in Demo example](../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for the serving instance
+Paddle Serving instances can load multiple models at the same time, and each model uses a Service (and its configured workflow) to undertake services. You can refer to [service configuration file in Demo example](../../tools/cpp_examples/demo-serving/conf/service.prototxt) to learn how to configure multiple services for the serving instance
#### 4.2.3 Hierarchical relationship of business scheduling
@@ -124,7 +124,7 @@ From the client's perspective, a Paddle Serving service can be divided into thre

One Service corresponds to one inference model, and there is one endpoint under the model. Different versions of the model are implemented through multiple variant concepts under endpoint:
-The same model prediction service can configure multiple variants, and each variant has its own downstream IP list. The client code can configure relative weights for each variant to achieve the relationship of adjusting the traffic ratio (refer to the description of variant_weight_list in [Client Configuration](./deprecated/CLIENT_CONFIGURE.md) section 3.2).
+The same model prediction service can configure multiple variants, and each variant has its own downstream IP list. The client code can configure relative weights for each variant to achieve the relationship of adjusting the traffic ratio (refer to the description of variant_weight_list in [Client Configuration](../CLIENT_CONFIGURE.md) section 3.2).

@@ -141,7 +141,7 @@ No matter how the communication protocol changes, the framework only needs to en
### 5.1 Data Compression Method
-Baidu-rpc has built-in data compression methods such as snappy, gzip, zlib, which can be configured in the configuration file (refer to [Client Configuration](./deprecated/CLIENT_CONFIGURE.md) Section 3.1 for an introduction to compress_type)
+Baidu-rpc has built-in data compression methods such as snappy, gzip, zlib, which can be configured in the configuration file (refer to [Client Configuration](../CLIENT_CONFIGURE.md) Section 3.1 for an introduction to compress_type)
### 5.2 C ++ SDK API Interface