diff --git a/README.md b/README.md
index 17730e2a071facf7c939cb7fb686596b2b752aa6..8bc5146864676883f1fcdb6c4f10781acf3d0db7 100644
--- a/README.md
+++ b/README.md
@@ -37,14 +37,14 @@ We consider deploying deep learning inference service online to be a user-facing
 We **highly recommend** you to **run Paddle Serving in Docker**, please visit [Run in Docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/RUN_IN_DOCKER.md)
 ```
 # Run CPU Docker
-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker pull hub.baidubce.com/paddlepaddle/serving:latest
+docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it test bash
 ```
 ```
 # Run GPU Docker
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
 nvidia-docker exec -it test bash
 ```
diff --git a/README_CN.md b/README_CN.md
index 3302d4850e8255e8d2d6460c201892fd6035b260..79b698e2452306ac8ffaeb6bb88057f3c578db0f 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -39,14 +39,14 @@ Paddle Serving 旨在帮助深度学习开发者轻易部署在线预测服务
 
 ```
 # 启动 CPU Docker
-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker pull hub.baidubce.com/paddlepaddle/serving:latest
+docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it test bash
 ```
 ```
 # 启动 GPU Docker
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
 nvidia-docker exec -it test bash
 ```
 ```shell
diff --git a/doc/ABTEST_IN_PADDLE_SERVING.md b/doc/ABTEST_IN_PADDLE_SERVING.md
index 69e5ff4b6fdf11d3764f94cba83beee82f959c85..f2302e611bc68607ed68f45f81cd833a91938ae6 100644
--- a/doc/ABTEST_IN_PADDLE_SERVING.md
+++ b/doc/ABTEST_IN_PADDLE_SERVING.md
@@ -39,7 +39,7 @@ Here, we [use docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/R
 
 First, start the BOW server, which enables the `8000` port:
 ``` shell
-docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it bow-server bash
 pip install paddle-serving-server
 python -m paddle_serving_server.serve --model model --port 8000 >std.log 2>err.log &
@@ -49,7 +49,7 @@ exit
 
 Similarly, start the LSTM server, which enables the `9000` port:
 ```bash
-docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it lstm-server bash
 pip install paddle-serving-server
 python -m paddle_serving_server.serve --model model --port 9000 >std.log 2>err.log &
diff --git a/doc/ABTEST_IN_PADDLE_SERVING_CN.md b/doc/ABTEST_IN_PADDLE_SERVING_CN.md
index 1991c7e665aae97e36a690fcd4f96c4f85450cea..7ba4e5d7dbe643d87fc15e783afea2955b98fa1e 100644
--- a/doc/ABTEST_IN_PADDLE_SERVING_CN.md
+++ b/doc/ABTEST_IN_PADDLE_SERVING_CN.md
@@ -38,7 +38,7 @@ with open('test_data/part-0') as fin:
 
 首先启动BOW Server,该服务启用`8000`端口:
 ```bash
-docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it bow-server bash
 pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
 python -m paddle_serving_server.serve --model model --port 8000 >std.log 2>err.log &
@@ -48,7 +48,7 @@ exit
 
 同理启动LSTM Server,该服务启用`9000`端口:
 ```bash
-docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it lstm-server bash
 pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
 python -m paddle_serving_server.serve --model model --port 9000 >std.log 2>err.log &
diff --git a/doc/COMPILE.md b/doc/COMPILE.md
index f61ac061883581090087a2202e694c9a07468c5f..411620af2ee10a769384c36cebc3aa3ecb93ea49 100644
--- a/doc/COMPILE.md
+++ b/doc/COMPILE.md
@@ -13,8 +13,8 @@
 It is recommended to use Docker for compilation. We have prepared the Paddle Serving compilation environment for you:
 
-- CPU: `hub.baidubce.com/paddlepaddle/serving:0.2.0-devel`,dockerfile: [Dockerfile.devel](../tools/Dockerfile.devel)
-- GPU: `hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu-devel`,dockerfile: [Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
+- CPU: `hub.baidubce.com/paddlepaddle/serving:latest-devel`,dockerfile: [Dockerfile.devel](../tools/Dockerfile.devel)
+- GPU: `hub.baidubce.com/paddlepaddle/serving:latest-gpu-devel`,dockerfile: [Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
 
 This document will take Python2 as an example to show how to compile Paddle Serving. If you want to compile with Python3, just adjust the Python options of cmake:
diff --git a/doc/COMPILE_CN.md b/doc/COMPILE_CN.md
index c6e5426f02335598277ceb40fafc5215c7f03b2b..44802260719d37a3140ca15f6a2ccc15479e32d6 100644
--- a/doc/COMPILE_CN.md
+++ b/doc/COMPILE_CN.md
@@ -13,8 +13,8 @@
 推荐使用Docker编译,我们已经为您准备好了Paddle Serving编译环境:
 
-- CPU: `hub.baidubce.com/paddlepaddle/serving:0.2.0-devel`,dockerfile: [Dockerfile.devel](../tools/Dockerfile.devel)
-- GPU: `hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu-devel`,dockerfile: [Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
+- CPU: `hub.baidubce.com/paddlepaddle/serving:latest-devel`,dockerfile: [Dockerfile.devel](../tools/Dockerfile.devel)
+- GPU: `hub.baidubce.com/paddlepaddle/serving:latest-gpu-devel`,dockerfile: [Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
 
 本文档将以Python2为例介绍如何编译Paddle Serving。如果您想用Python3进行编译,只需要调整cmake的Python相关选项即可:
diff --git a/doc/RUN_IN_DOCKER.md b/doc/RUN_IN_DOCKER.md
index 327176297518ff65d788e3e59b23db27f1e7178c..32a4aae1fb2bf866fe250de0b4ed055a707c8fd0 100644
--- a/doc/RUN_IN_DOCKER.md
+++ b/doc/RUN_IN_DOCKER.md
@@ -17,7 +17,7 @@ You can get images in two ways:
 1. Pull image directly
 
    ```bash
-   docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
+   docker pull hub.baidubce.com/paddlepaddle/serving:latest
    ```
 
 2. Building image based on dockerfile
@@ -25,13 +25,13 @@ You can get images in two ways:
 
    Create a new folder and copy [Dockerfile](../tools/Dockerfile) to this folder, and run the following command:
 
   ```bash
-   docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0 .
+   docker build -t hub.baidubce.com/paddlepaddle/serving:latest .
   ```
 
 ### Create container
 
 ```bash
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it test bash
 ```
@@ -109,7 +109,7 @@ You can also get images in two ways:
 1. Pull image directly
 
   ```bash
-   nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+   nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
  ```
 
 2. Building image based on dockerfile
@@ -117,13 +117,13 @@ You can also get images in two ways:
 
   Create a new folder and copy [Dockerfile.gpu](../tools/Dockerfile.gpu) to this folder, and run the following command:
 
  ```bash
-   nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu .
+   nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:latest-gpu .
  ```
 
 ### Create container
 
 ```bash
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
 nvidia-docker exec -it test bash
 ```
diff --git a/doc/RUN_IN_DOCKER_CN.md b/doc/RUN_IN_DOCKER_CN.md
index 4a995f9acf611c550e866ed12502734220a2e71c..b95344923605ade590b8bed509a2dd6f59640433 100644
--- a/doc/RUN_IN_DOCKER_CN.md
+++ b/doc/RUN_IN_DOCKER_CN.md
@@ -17,7 +17,7 @@ Docker(GPU版本需要在GPU机器上安装nvidia-docker)
 1. 直接拉取镜像
 
  ```bash
-   docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
+   docker pull hub.baidubce.com/paddlepaddle/serving:latest
  ```
 
 2. 基于Dockerfile构建镜像
@@ -25,13 +25,13 @@ Docker(GPU版本需要在GPU机器上安装nvidia-docker)
 
  建立新目录,复制[Dockerfile](../tools/Dockerfile)内容到该目录下Dockerfile文件。执行
 
  ```bash
-   docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0 .
+   docker build -t hub.baidubce.com/paddlepaddle/serving:latest .
  ```
 
 ### 创建容器并进入
 
 ```bash
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it test bash
 ```
@@ -107,7 +107,7 @@ GPU版本与CPU版本基本一致,只有部分接口命名的差别(GPU版
 1. 直接拉取镜像
 
  ```bash
-   nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+   nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
  ```
 
 2. 基于Dockerfile构建镜像
@@ -115,13 +115,13 @@ GPU版本与CPU版本基本一致,只有部分接口命名的差别(GPU版
 
  建立新目录,复制[Dockerfile.gpu](../tools/Dockerfile.gpu)内容到该目录下Dockerfile文件。执行
 
  ```bash
-   nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu .
+   nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:latest-gpu .
  ```
 
 ### 创建容器并进入
 
 ```bash
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
 nvidia-docker exec -it test bash
 ```
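Every hunk in this patch rewrites the same image reference with one of four tag variants: `latest`, `latest-gpu`, `latest-devel`, and `latest-gpu-devel`. The tag scheme can be sketched in a few lines of shell; this is a hypothetical illustration of the naming convention only, and the `REGISTRY`, `DEVICE`, and `STAGE` variable names are assumptions, not part of the patch or of the Paddle Serving repository.

```shell
# Sketch of the tag scheme used throughout this patch:
# <registry>:latest[-gpu][-devel]
REGISTRY=hub.baidubce.com/paddlepaddle/serving
DEVICE=gpu     # "cpu" or "gpu"
STAGE=devel    # empty for the runtime image, "devel" for the compilation image

TAG=latest
[ "$DEVICE" = "gpu" ] && TAG="$TAG-gpu"   # GPU images append "-gpu"
[ -n "$STAGE" ] && TAG="$TAG-$STAGE"      # devel images append "-devel"
echo "$REGISTRY:$TAG"
```

With `DEVICE=cpu` and an empty `STAGE`, the same logic yields the plain runtime image `hub.baidubce.com/paddlepaddle/serving:latest`.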