diff --git a/README.md b/README.md
index 521a13d3d38b2ecfee79eccee42fb9f0a0f5327c..749b71c134beccf0000aebf7efb63ad4d4f08c1d 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
-
+
@@ -47,7 +47,7 @@ We consider deploying deep learning inference services online to be a user-facing
[Serving Examples](./python/examples/).
-
+
diff --git a/README_CN.md b/README_CN.md
index efd184eb249c5cc7604e8671a286577b3fb62641..a30b04e30d2e5805b1b5fe700ae81a70b379eaae 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -2,7 +2,7 @@
-
+
@@ -48,7 +48,7 @@ Paddle Serving aims to help deep learning developers easily deploy online prediction services
- Provides rich pre- and post-processing utilities so users can reuse the related code across training, deployment, and other stages, bridging the gap between AI developers and application developers. For details, see [Model Examples](./python/examples/).
-
+
Tutorial
diff --git a/doc/ABTEST_IN_PADDLE_SERVING.md b/doc/ABTEST_IN_PADDLE_SERVING.md
index 71cd267f76705583fed0ffbb57fda7a1039cbba6..f250f1a176c76f8baf66411fc896a4bd9f4ce040 100644
--- a/doc/ABTEST_IN_PADDLE_SERVING.md
+++ b/doc/ABTEST_IN_PADDLE_SERVING.md
@@ -4,7 +4,7 @@
This document will use an example of a text classification task based on the IMDB dataset to show how to build an A/B Test framework using Paddle Serving. The structural relationship between the client and servers in the example is shown in the figure below.
-
+
Note that A/B Test is only applicable to RPC mode, not web mode.
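For reference, the A/B split described in the doc being patched is configured on the client side. Below is a minimal sketch of such an RPC client; the config path, server addresses, variant tags, weights, and feed/fetch names are illustrative assumptions, not values taken from this diff.

```python
# Minimal A/B-test client sketch (RPC mode). The config path, server
# addresses, variant tags, and 70/30 weights are illustrative assumptions.
from paddle_serving_client import Client

client = Client()
client.load_client_config("imdb_client_conf/serving_client_conf.prototxt")
# Split traffic between two model variants by relative weight.
client.add_variant("bow", ["127.0.0.1:9297"], 70)
client.add_variant("lstm", ["127.0.0.1:9298"], 30)
client.connect()

# Each predict() call is routed to one variant according to the weights,
# so repeated calls approximate the configured traffic split.
fetch_map = client.predict(feed={"words": [1, 2, 3]}, fetch=["prediction"])
print(fetch_map)
```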
diff --git a/doc/ABTEST_IN_PADDLE_SERVING_CN.md b/doc/ABTEST_IN_PADDLE_SERVING_CN.md
index af3cf1f83d2dbfc29101fe7bcd97fb8fbb820767..34d1525b71396220d535a38593fc99eeac84a86f 100644
--- a/doc/ABTEST_IN_PADDLE_SERVING_CN.md
+++ b/doc/ABTEST_IN_PADDLE_SERVING_CN.md
@@ -4,7 +4,7 @@
This document uses a text classification task based on the IMDB dataset as an example to show how to build an A/B Test framework with Paddle Serving. The structure of the client and servers in the example is shown in the figure below.
-
+
Note that A/B Test is only applicable to RPC mode, not web mode.
diff --git a/doc/BERT_10_MINS.md b/doc/BERT_10_MINS.md
index 3857bc555dcd69be96d961f2acc363bac6575c50..cc356c359bf6cff525c37abe723d5c8dc73b4781 100644
--- a/doc/BERT_10_MINS.md
+++ b/doc/BERT_10_MINS.md
@@ -115,7 +115,7 @@ curl -H "Content-Type:application/json" -X POST -d '{"feed":[{"words": "hello"}]
We tested the performance of Bert-As-Service on Paddle Serving with V100 GPUs and compared it against Bert-As-Service on TensorFlow. From the perspective of user configuration, we used the same batch size and concurrency level for the stress test. The overall throughput obtained with 4 V100s is shown below.
-![4v100_bert_as_service_benchmark](4v100_bert_as_service_benchmark.png)
+![4v100_bert_as_service_benchmark](images/4v100_bert_as_service_benchmark.png)
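The stress test described above amounts to issuing fixed-batch requests from a fixed number of concurrent workers. Below is a hedged sketch of such a driver against the HTTP endpoint shown in the hunk header's `curl` example; the URL, port, payload shape, fetch name, and request counts are illustrative assumptions.

```python
# Sketch of a fixed-concurrency stress test against the Bert HTTP service.
# The service address, payload, and fetch name below are assumptions.
import json
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://127.0.0.1:9292/bert/prediction"  # assumed service address
PAYLOAD = {"feed": [{"words": "hello"}], "fetch": ["pooled_output"]}
CONCURRENCY = 4           # number of concurrent client workers
REQUESTS_PER_WORKER = 100  # requests issued by each worker

def worker(_):
    # Each worker sends a stream of identical fixed-batch requests.
    for _ in range(REQUESTS_PER_WORKER):
        requests.post(URL, data=json.dumps(PAYLOAD),
                      headers={"Content-Type": "application/json"})

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    list(pool.map(worker, range(CONCURRENCY)))
elapsed = time.time() - start

total = CONCURRENCY * REQUESTS_PER_WORKER
print(f"{total} requests in {elapsed:.1f}s -> {total / elapsed:.1f} QPS")
```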