diff --git a/README.md b/README.md
index 7c6df8d5ab4463c59c1ad250383f63ac1d01529e..86d7eb5d614729d7ec3a0816d2114e5273fd7aed 100644
--- a/README.md
+++ b/README.md
@@ -34,7 +34,7 @@ We consider deploying deep learning inference service online to be a user-facing
Installation
-We highly recommend you to run Paddle Serving in Docker, please visit [Run in Docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/RUN_IN_DOCKER.md)
+We **highly recommend** that you **run Paddle Serving in Docker**; please visit [Run in Docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/RUN_IN_DOCKER.md)
```
# Run CPU Docker
docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
diff --git a/README_CN.md b/README_CN.md
index e7f976098bf10476ed8bfb1d9d031ed4854acae6..641f2eff5f8da6d513dcf5a8d0cefb851d65490a 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -35,7 +35,7 @@ Paddle Serving aims to help deep learning developers easily deploy online prediction services
Installation
-We strongly recommend that you build Paddle Serving in Docker; please see [How to Run PaddleServing in Docker](doc/RUN_IN_DOCKER_CN.md)
+We **strongly recommend** that you **build Paddle Serving in Docker**; please see [How to Run PaddleServing in Docker](doc/RUN_IN_DOCKER_CN.md)
```
# Start CPU Docker