Unverified commit 6bf6ee5b authored by Dong Daxiang, committed by GitHub

Merge pull request #609 from barrierye/update-images-tag

update images tag
@@ -37,14 +37,14 @@ We consider deploying deep learning inference service online to be a user-facing
 We **highly recommend** you to **run Paddle Serving in Docker**, please visit [Run in Docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/RUN_IN_DOCKER.md)
 ```
 # Run CPU Docker
-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker pull hub.baidubce.com/paddlepaddle/serving:latest
+docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it test bash
 ```
 ```
 # Run GPU Docker
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
 nvidia-docker exec -it test bash
 ```
...
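Since the commands above now track the `latest` / `latest-gpu` tags instead of the pinned `0.2.0`, it is worth confirming which image a container actually ended up running. A minimal sanity check, assuming the container was started with `--name test` as in the README snippet:

```bash
# List the serving images that have been pulled, with their tags and IDs
docker images hub.baidubce.com/paddlepaddle/serving

# Confirm the container is up and see exactly which image it was created from
docker ps --filter name=test --format '{{.Names}}\t{{.Image}}\t{{.Status}}'
docker inspect --format '{{.Config.Image}}' test
```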
@@ -39,14 +39,14 @@ Paddle Serving aims to help deep learning developers easily deploy online inference services
 ```
 # Start CPU Docker
-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker pull hub.baidubce.com/paddlepaddle/serving:latest
+docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it test bash
 ```
 ```
 # Start GPU Docker
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
 nvidia-docker exec -it test bash
 ```
 ```shell
...
@@ -39,7 +39,7 @@ Here, we [use docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/R
 First, start the BOW server, which enables the `8000` port:
 ``` shell
-docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it bow-server bash
 pip install paddle-serving-server
 python -m paddle_serving_server.serve --model model --port 8000 >std.log 2>err.log &
@@ -49,7 +49,7 @@ exit
 Similarly, start the LSTM server, which enables the `9000` port:
 ```bash
-docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it lstm-server bash
 pip install paddle-serving-server
 python -m paddle_serving_server.serve --model model --port 9000 >std.log 2>err.log &
...
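Before wiring a client up to the two servers above, a quick way to confirm that both containers publish their ports and that the serve processes came up is to check the port mappings and the log files the commands above redirect to. This is a generic verification sketch, not a step from the documents themselves; it assumes the logs sit in the working directory of the shell that launched the servers:

```bash
# Port mappings: bow-server should expose 8000, lstm-server should expose 9000
docker port bow-server
docker port lstm-server

# Tail the logs written by `python -m paddle_serving_server.serve ... >std.log 2>err.log`
# (assumes std.log / err.log live in the exec session's working directory)
docker exec bow-server  tail -n 20 std.log err.log
docker exec lstm-server tail -n 20 std.log err.log
```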
@@ -38,7 +38,7 @@ with open('test_data/part-0') as fin:
 First, start the BOW Server, which uses port `8000`:
 ```bash
-docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it bow-server bash
 pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
 python -m paddle_serving_server.serve --model model --port 8000 >std.log 2>err.log &
@@ -48,7 +48,7 @@ exit
 Similarly, start the LSTM Server, which uses port `9000`:
 ```bash
-docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it lstm-server bash
 pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
 python -m paddle_serving_server.serve --model model --port 9000 >std.log 2>err.log &
...
@@ -13,8 +13,8 @@
 It is recommended to use Docker for compilation. We have prepared the Paddle Serving compilation environment for you:
-- CPU: `hub.baidubce.com/paddlepaddle/serving:0.2.0-devel`, dockerfile: [Dockerfile.devel](../tools/Dockerfile.devel)
-- GPU: `hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu-devel`, dockerfile: [Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
+- CPU: `hub.baidubce.com/paddlepaddle/serving:latest-devel`, dockerfile: [Dockerfile.devel](../tools/Dockerfile.devel)
+- GPU: `hub.baidubce.com/paddlepaddle/serving:latest-gpu-devel`, dockerfile: [Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
 This document will take Python2 as an example to show how to compile Paddle Serving. If you want to compile with Python3, just adjust the Python options of cmake:
...
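The compile note above only says to "adjust the Python options of cmake" for Python3. As a rough illustration, here is what that adjustment might look like if the interpreter, headers, and library are passed explicitly, which is a common PaddlePaddle convention; both the option names and the paths below are assumptions and should be checked against the compile guide and the `*-devel` image in use:

```bash
# Hypothetical Python3 locations inside the devel image; adjust to your environment
PY_BIN=/usr/local/bin/python3.6
PY_INC=/usr/local/include/python3.6m
PY_LIB=/usr/local/lib/libpython3.6m.so

mkdir build && cd build
# Option names assumed from the usual PaddlePaddle CMake convention; verify locally
cmake -DPYTHON_EXECUTABLE=$PY_BIN \
      -DPYTHON_INCLUDE_DIR=$PY_INC \
      -DPYTHON_LIBRARIES=$PY_LIB \
      ..
make -j4
```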
@@ -13,8 +13,8 @@
 Compiling inside Docker is recommended; we have prepared a Paddle Serving compilation environment for you:
-- CPU: `hub.baidubce.com/paddlepaddle/serving:0.2.0-devel`, dockerfile: [Dockerfile.devel](../tools/Dockerfile.devel)
-- GPU: `hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu-devel`, dockerfile: [Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
+- CPU: `hub.baidubce.com/paddlepaddle/serving:latest-devel`, dockerfile: [Dockerfile.devel](../tools/Dockerfile.devel)
+- GPU: `hub.baidubce.com/paddlepaddle/serving:latest-gpu-devel`, dockerfile: [Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
 This document uses Python2 as an example to show how to compile Paddle Serving. To compile with Python3, simply adjust the Python-related cmake options:
...
@@ -17,7 +17,7 @@ You can get images in two ways:
 1. Pull image directly
 ```bash
-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker pull hub.baidubce.com/paddlepaddle/serving:latest
 ```
 2. Building image based on dockerfile
@@ -25,13 +25,13 @@ You can get images in two ways:
 Create a new folder and copy [Dockerfile](../tools/Dockerfile) to this folder, and run the following command:
 ```bash
-docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0 .
+docker build -t hub.baidubce.com/paddlepaddle/serving:latest .
 ```
 ### Create container
 ```bash
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it test bash
 ```
@@ -109,7 +109,7 @@ You can also get images in two ways:
 1. Pull image directly
 ```bash
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
 ```
 2. Building image based on dockerfile
@@ -117,13 +117,13 @@ You can also get images in two ways:
 Create a new folder and copy [Dockerfile.gpu](../tools/Dockerfile.gpu) to this folder, and run the following command:
 ```bash
-nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu .
+nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:latest-gpu .
 ```
 ### Create container
 ```bash
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
 nvidia-docker exec -it test bash
 ```
...
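Putting the pieces of the document above together, the end-to-end CPU flow under the new `latest` tag looks roughly like the sketch below. The `my_model` directory is a placeholder for whatever model you have exported, and port `9292` matches the `-p 9292:9292` mapping used when the container was created:

```bash
# On the host: pull the image and start a container with port 9292 published
docker pull hub.baidubce.com/paddlepaddle/serving:latest
docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest
docker exec -it test bash

# Inside the container: install the server package and launch it on the published port
pip install paddle-serving-server
python -m paddle_serving_server.serve --model my_model --port 9292 >std.log 2>err.log &
```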
@@ -17,7 +17,7 @@ Docker (the GPU version requires nvidia-docker on a GPU machine)
 1. Pull the image directly
 ```bash
-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker pull hub.baidubce.com/paddlepaddle/serving:latest
 ```
 2. Build the image from a Dockerfile
@@ -25,13 +25,13 @@ Docker (the GPU version requires nvidia-docker on a GPU machine)
 Create a new directory, copy the contents of [Dockerfile](../tools/Dockerfile) into a Dockerfile in that directory, and run:
 ```bash
-docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0 .
+docker build -t hub.baidubce.com/paddlepaddle/serving:latest .
 ```
 ### Create and enter the container
 ```bash
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0
+docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest
 docker exec -it test bash
 ```
@@ -107,7 +107,7 @@ The GPU version is basically the same as the CPU version, with only small differences in interface naming
 1. Pull the image directly
 ```bash
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
 ```
 2. Build the image from a Dockerfile
@@ -115,13 +115,13 @@ The GPU version is basically the same as the CPU version, with only small differences in interface naming
 Create a new directory, copy the contents of [Dockerfile.gpu](../tools/Dockerfile.gpu) into a Dockerfile in that directory, and run:
 ```bash
-nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu .
+nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:latest-gpu .
 ```
 ### Create and enter the container
 ```bash
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
 nvidia-docker exec -it test bash
 ```
...
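For the GPU images the same flow applies with `nvidia-docker` and the `latest-gpu` tag. One extra, generic check that is not spelled out in the documents above, but which tends to save time, is verifying that the driver is visible from inside the container before starting a GPU server; this assumes the base image bundles the CUDA userland tools:

```bash
# Should list the host GPUs if nvidia-docker wired the driver into the container
nvidia-docker exec -it test nvidia-smi
```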