diff --git a/README.md b/README.md
index 6027d1fd5b32fee64ffd1a67442f08fd7c50499e..cab03e006094a145c8412f67ee6e909a14f59fde 100644
--- a/README.md
+++ b/README.md
@@ -31,6 +31,15 @@
We consider deploying deep learning inference services online to be a user-facing application of the future. **The goal of this project**: once you have trained a deep neural net with [Paddle](https://github.com/PaddlePaddle/Paddle), you can also easily deploy the model online. A demo of Paddle Serving is shown below:
+
Some Key Features of Paddle Serving
+
+- Integrates seamlessly with the Paddle training pipeline; most Paddle models can be deployed **with a single command** (see the sketch below).
+- **Industrial serving features** such as model management, online loading, and online A/B testing.
+- **Highly concurrent and efficient communication** between clients and servers.
+- **Multiple programming languages** supported on the client side, such as C++, Python, and Java.
+
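+As a quick illustration of the single-command deployment and the Python client, here is a minimal sketch. It assumes the bundled `uci_housing` example model and the port 9292 used by the HTTP example later in this README; the input values are placeholders, not real data.
+
+```python
+# Assumed setup: the server was started with the documented one-line command, e.g.
+#   python -m paddle_serving_server.serve --model uci_housing_model --port 9292
+# The client below is an illustrative RPC call, not the only way to reach the service.
+from paddle_serving_client import Client
+
+client = Client()
+client.load_client_config("uci_housing_client/serving_client_conf.prototxt")
+client.connect(["127.0.0.1:9292"])
+
+# Placeholder input: 13 normalized features for the uci_housing regression example.
+fetch_map = client.predict(feed={"x": [0.0] * 13}, fetch=["price"])
+print(fetch_map["price"])
+```
+
+Clients written in the other supported languages (C++, Java) follow the same connect-and-predict pattern against the same endpoint.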
+***
+
- Any model trained with [PaddlePaddle](https://github.com/paddlepaddle/paddle) can be used directly, or converted via the [Model Conversion Interface](./doc/SAVE_CN.md), for online deployment with Paddle Serving.
- Support [Multi-model Pipeline Deployment](./doc/PIPELINE_SERVING.md), with both REST and RPC interfaces provided to meet your needs; see the [Pipeline example](./python/examples/pipeline).
- Support the model zoos from the Paddle ecosystem, such as [PaddleDetection](./python/examples/detection), [PaddleOCR](./python/examples/ocr), [PaddleRec](https://github.com/PaddlePaddle/PaddleRec/tree/master/tools/recserving/movie_recommender).
@@ -197,14 +206,6 @@ the response is
{"result":{"price":[[18.901151657104492]]}}
```
-Some Key Features of Paddle Serving
-
-- Integrate with Paddle training pipeline seamlessly, most paddle models can be deployed **with one line command**.
-- **Industrial serving features** supported, such as models management, online loading, online A/B testing etc.
-- **Distributed Key-Value indexing** supported which is especially useful for large scale sparse features as model inputs.
-- **Highly concurrent and efficient communication** between clients and servers supported.
-- **Multiple programming languages** supported on client side, such as Golang, C++ and python.
-
Documentation
### New to Paddle Serving
diff --git a/README_CN.md b/README_CN.md
index cf9fb5de7113edea60c12bb58e720bfa4251b7a7..b658d6194785bbf2dd14c7ebd21d11048261dbe1 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -33,6 +33,15 @@
Paddle Serving aims to help deep learning developers easily deploy online inference services. **The goal of this project**: once a user has trained a deep neural network with [Paddle](https://github.com/PaddlePaddle/Paddle), they also have an inference service for that model.
+Core Features of Paddle Serving
+
+- Integrates tightly with Paddle training; the vast majority of Paddle models can be deployed **with a single command** (see the sketch below).
+- **Industrial-grade serving capabilities** such as model management, online loading, and online A/B testing.
+- **Highly concurrent and efficient communication** between clients and servers.
+- **Multiple programming languages** supported for client development, including C++, Python, and Java.
+
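+Mirroring the sketch in the English README, a minimal illustration of the single-command deployment claim, again assuming the bundled `uci_housing` example model and port 9292, with placeholder input only:
+
+```python
+# Illustrative only: assumes a server started with
+#   python -m paddle_serving_server.serve --model uci_housing_model --port 9292
+from paddle_serving_client import Client
+
+client = Client()
+client.load_client_config("uci_housing_client/serving_client_conf.prototxt")
+client.connect(["127.0.0.1:9292"])
+print(client.predict(feed={"x": [0.0] * 13}, fetch=["price"]))  # 13 placeholder features
+```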
+***
+
- Any model trained with [PaddlePaddle](https://github.com/paddlepaddle/paddle) can be saved directly, or converted via the [Model Conversion Interface](./doc/SAVE_CN.md), for online deployment with Paddle Serving.
- Support [Multi-model Pipeline Deployment](./doc/PIPELINE_SERVING_CN.md), with both REST and RPC interfaces provided to meet your needs; see the [Pipeline example](./python/examples/pipeline).
- Support the major model zoos of the Paddle ecosystem, such as [PaddleDetection](./python/examples/detection), [PaddleOCR](./python/examples/ocr), and [PaddleRec](https://github.com/PaddlePaddle/PaddleRec/tree/master/tools/recserving/movie_recommender).
@@ -198,14 +207,6 @@ curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"x": [0.0137, -0.1
{"result":{"price":[[18.901151657104492]]}}
```
-Core Features of Paddle Serving
-
-- Integrates tightly with Paddle training; the vast majority of Paddle models can be deployed **with a single command**.
-- **Industrial-grade serving capabilities** such as model management, online loading, and online A/B testing.
-- **Distributed key-value indexing** to support large-scale sparse features as model inputs.
-- **Highly concurrent and efficient communication** between clients and servers.
-- **Multiple programming languages** supported for client development, such as Golang, C++, and Python.
-
Documentation
### New to Paddle Serving