diff --git a/README.md b/README.md
index f8e70c5f520e3e55c02f4a1cbdf368324f4f5e24..81d0c6045f01ccb79eea6f32261edf2bc20e1931 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@
- [Motivation](./README.md#motivation)
-- [AIStudio Tutorial](./README.md#aistuio-tutorial)
+- [AIStudio Tutorial](./README.md#aistudio-tutorial)
- [Installation](./README.md#installation)
- [Quick Start Example](./README.md#quick-start-example)
- [Document](README.md#document)
@@ -42,7 +42,7 @@ We consider deploying deep learning inference service online to be a user-facing
-AIStudio Turorial
+AIStudio Tutorial
Here we provide a tutorial on AIStudio (Chinese version): [AIStudio教程-Paddle Serving服务化部署框架](https://aistudio.baidu.com/aistudio/projectdetail/1550674)
@@ -75,15 +75,15 @@ We **highly recommend** you to **run Paddle Serving in Docker**, please visit [R
```
# Run CPU Docker
-docker pull hub.baidubce.com/paddlepaddle/serving:0.5.0-devel
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.5.0-devel
+docker pull registry.baidubce.com/paddlepaddle/serving:0.5.0-devel
+docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.5.0-devel
docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
```
```
# Run GPU Docker
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.5.0-cuda10.2-cudnn8-devel
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.5.0-cuda10.2-cudnn8-devel
+nvidia-docker pull registry.baidubce.com/paddlepaddle/serving:0.5.0-cuda10.2-cudnn8-devel
+nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.5.0-cuda10.2-cudnn8-devel
nvidia-docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
```
diff --git a/README_CN.md b/README_CN.md
index 2c4a377d8bbe3be7799fcd075397914cc1473ac3..faf931d217e5d8ec9d8afbbb4acf64cdc80fcb8e 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -76,15 +76,15 @@ Paddle Serving开发者为您提供了简单易用的[AIStudio教程-Paddle Serv
```
# 启动 CPU Docker
-docker pull hub.baidubce.com/paddlepaddle/serving:0.5.0-devel
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.5.0-devel
+docker pull registry.baidubce.com/paddlepaddle/serving:0.5.0-devel
+docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.5.0-devel
docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
```
```
# 启动 GPU Docker
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.5.0-cuda10.2-cudnn8-devel
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.5.0-cuda10.2-cudnn8-devel
+nvidia-docker pull registry.baidubce.com/paddlepaddle/serving:0.5.0-cuda10.2-cudnn8-devel
+nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:0.5.0-cuda10.2-cudnn8-devel
nvidia-docker exec -it test bash
git clone https://github.com/PaddlePaddle/Serving
```
diff --git a/doc/ABTEST_IN_PADDLE_SERVING.md b/doc/ABTEST_IN_PADDLE_SERVING.md
index d901fe8d3141b25731321bc2e3a3df000f9fd6a1..09d13c12583ddfa0f1767cf46a309cac5ef86867 100644
--- a/doc/ABTEST_IN_PADDLE_SERVING.md
+++ b/doc/ABTEST_IN_PADDLE_SERVING.md
@@ -36,7 +36,7 @@ Here, we [use docker](RUN_IN_DOCKER.md) to start the server-side service.
First, start the BOW server, which enables the `8000` port:
``` shell
-docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:latest /bin/bash
+docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server registry.baidubce.com/paddlepaddle/serving:latest /bin/bash
docker exec -it bow-server /bin/bash
pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddle-serving-client -i https://pypi.tuna.tsinghua.edu.cn/simple
@@ -47,7 +47,7 @@ exit
Similarly, start the LSTM server, which enables the `9000` port:
```bash
-docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:latest /bin/bash
+docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server registry.baidubce.com/paddlepaddle/serving:latest /bin/bash
docker exec -it lstm-server /bin/bash
pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddle-serving-client -i https://pypi.tuna.tsinghua.edu.cn/simple
diff --git a/doc/ABTEST_IN_PADDLE_SERVING_CN.md b/doc/ABTEST_IN_PADDLE_SERVING_CN.md
index 074960381858c64a2517a86553e295052b55e6a5..de2e57513f10a6a55a3e3baf48a4e0ba49c34ee7 100644
--- a/doc/ABTEST_IN_PADDLE_SERVING_CN.md
+++ b/doc/ABTEST_IN_PADDLE_SERVING_CN.md
@@ -35,7 +35,7 @@ pip install Shapely
首先启动BOW Server,该服务启用`8000`端口:
```bash
-docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:latest /bin/bash
+docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server registry.baidubce.com/paddlepaddle/serving:latest /bin/bash
docker exec -it bow-server /bin/bash
pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddle-serving-client -i https://pypi.tuna.tsinghua.edu.cn/simple
@@ -46,7 +46,7 @@ exit
同理启动LSTM Server,该服务启用`9000`端口:
```bash
-docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:latest /bin/bash
+docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server registry.baidubce.com/paddlepaddle/serving:latest /bin/bash
docker exec -it lstm-server /bin/bash
pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddle-serving-client -i https://pypi.tuna.tsinghua.edu.cn/simple
diff --git a/doc/DOCKER_IMAGES.md b/doc/DOCKER_IMAGES.md
index 54cbe2ef94c980fec3dfd823deeccd4358ebc813..cf5023c18365bd1994f1192d6faae0c77f3c49db 100644
--- a/doc/DOCKER_IMAGES.md
+++ b/doc/DOCKER_IMAGES.md
@@ -46,12 +46,12 @@ If you want to customize your Serving based on source code, use the version with
**Java Client:**
```
-hub.baidubce.com/paddlepaddle/serving:latest-java
+registry.baidubce.com/paddlepaddle/serving:latest-java
```
**XPU:**
```
-hub.baidubce.com/paddlepaddle/serving:xpu-beta
+registry.baidubce.com/paddlepaddle/serving:xpu-beta
```
## Requirements for running CUDA containers
diff --git a/doc/DOCKER_IMAGES_CN.md b/doc/DOCKER_IMAGES_CN.md
index fc6e6e3b879751b52b1491bbd666749bb1243450..d84b6f8d2834480c7ab8228309fbe1e75a375d63 100644
--- a/doc/DOCKER_IMAGES_CN.md
+++ b/doc/DOCKER_IMAGES_CN.md
@@ -49,12 +49,12 @@
**Java镜像:**
```
-hub.baidubce.com/paddlepaddle/serving:latest-java
+registry.baidubce.com/paddlepaddle/serving:latest-java
```
**XPU镜像:**
```
-hub.baidubce.com/paddlepaddle/serving:xpu-beta
+registry.baidubce.com/paddlepaddle/serving:xpu-beta
```
diff --git a/doc/RUN_IN_DOCKER.md b/doc/RUN_IN_DOCKER.md
index 335ff8a2cd76762883154a1b0decdd6edc68cad1..debe08a96b9512759473022c094d9115b24805fa 100644
--- a/doc/RUN_IN_DOCKER.md
+++ b/doc/RUN_IN_DOCKER.md
@@ -17,14 +17,14 @@ This document takes Python2 as an example to show how to run Paddle Serving in d
Refer to [this document](DOCKER_IMAGES.md) for a docker image:
```shell
-docker pull hub.baidubce.com/paddlepaddle/serving:latest-devel
+docker pull registry.baidubce.com/paddlepaddle/serving:latest-devel
```
### Create container
```bash
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-devel
+docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:latest-devel
docker exec -it test bash
```
@@ -46,20 +46,20 @@ The GPU version is basically the same as the CPU version, with only some differe
Refer to [this document](DOCKER_IMAGES.md) for a docker image; the following is an example of a `cuda10.2-cudnn8` image:
```shell
-docker pull hub.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
+docker pull registry.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
```
### Create container
```bash
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
+nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
nvidia-docker exec -it test bash
```
or
```bash
-docker run --gpus all -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
+docker run --gpus all -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
docker exec -it test bash
```
diff --git a/doc/RUN_IN_DOCKER_CN.md b/doc/RUN_IN_DOCKER_CN.md
index 2f083fb7c07b7cdcfa06eaa99a0635efefa4287e..7dfc3eee62b3bf8f6d925b97e3d7da3fd51ad423 100644
--- a/doc/RUN_IN_DOCKER_CN.md
+++ b/doc/RUN_IN_DOCKER_CN.md
@@ -19,13 +19,13 @@ Docker(GPU版本需要在GPU机器上安装nvidia-docker)
以CPU编译镜像为例
```shell
-docker pull hub.baidubce.com/paddlepaddle/serving:latest-devel
+docker pull registry.baidubce.com/paddlepaddle/serving:latest-devel
```
### 创建容器并进入
```bash
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-devel
+docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:latest-devel
docker exec -it test bash
```
@@ -40,18 +40,18 @@ docker exec -it test bash
## GPU 版本
```shell
-docker pull hub.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
+docker pull registry.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
```
### 创建容器并进入
```bash
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
+nvidia-docker run -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
nvidia-docker exec -it test bash
```
或者
```bash
-docker run --gpus all -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
+docker run --gpus all -p 9292:9292 --name test -dit registry.baidubce.com/paddlepaddle/serving:latest-cuda10.2-cudnn8-devel
docker exec -it test bash
```
diff --git a/doc/WINDOWS_TUTORIAL.md b/doc/WINDOWS_TUTORIAL.md
index 8450472f4b93a44475f459342e35c658f5f187c9..73cf52bb4fab14c213a13f358ed84f1e643b0734 100644
--- a/doc/WINDOWS_TUTORIAL.md
+++ b/doc/WINDOWS_TUTORIAL.md
@@ -117,9 +117,9 @@ Please refer to [Docker Desktop](https://www.docker.com/products/docker-desktop)
After installation, start the docker linux engine and download the relevant image. In the Serving directory
```
-docker pull hub.baidubce.com/paddlepaddle/serving:latest-devel
+docker pull registry.baidubce.com/paddlepaddle/serving:latest-devel
# There is no expose port here, users can set -p to perform port mapping as needed
-docker run --rm -dit --name serving_devel -v $PWD:/Serving hub.baidubce.com/paddlepaddle/serving:latest-devel
+docker run --rm -dit --name serving_devel -v $PWD:/Serving registry.baidubce.com/paddlepaddle/serving:latest-devel
docker exec -it serving_devel bash
cd /Serving
```
diff --git a/doc/WINDOWS_TUTORIAL_CN.md b/doc/WINDOWS_TUTORIAL_CN.md
index d40e238d3b58e8959460f437dea7bfcb1e039c45..4184840f4e5646fcd998dfa33b80b8b9210b05d7 100644
--- a/doc/WINDOWS_TUTORIAL_CN.md
+++ b/doc/WINDOWS_TUTORIAL_CN.md
@@ -117,9 +117,9 @@ python your_client.py
安装之后启动docker的linux engine,下载相关镜像。在Serving目录下
```
-docker pull hub.baidubce.com/paddlepaddle/serving:latest-devel
+docker pull registry.baidubce.com/paddlepaddle/serving:latest-devel
# 此处没有expose端口,用户可根据需要设置-p来进行端口映射
-docker run --rm -dit --name serving_devel -v $PWD:/Serving hub.baidubce.com/paddlepaddle/serving:latest-devel
+docker run --rm -dit --name serving_devel -v $PWD:/Serving registry.baidubce.com/paddlepaddle/serving:latest-devel
docker exec -it serving_devel bash
cd /Serving
```
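
The substitution this patch performs is purely mechanical: every occurrence of the old registry host `hub.baidubce.com` becomes `registry.baidubce.com`, with image paths and tags unchanged. A minimal sketch of how such a bulk rewrite could be done with `sed` (shown here on a sample line rather than the repository files, which are an assumption of this illustration):

```shell
# Rewrite the old registry host to the new one. The sample line stands in
# for the markdown docs touched by the patch above.
sample='docker pull hub.baidubce.com/paddlepaddle/serving:0.5.0-devel'
printf '%s\n' "$sample" | sed 's|hub\.baidubce\.com|registry.baidubce.com|g'
```

Against a real checkout, the same expression could be run in-place (e.g. `sed -i` over the affected `*.md` files), though applying the patch itself is the authoritative route.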