Unverified · Commit 1818f871 · Authored by: B barriery · Committed by: GitHub

Merge pull request #404 from barrierye/doc-fix

add desc of pip mirror source && fix docker images tag
......@@ -3,7 +3,7 @@
<img src='https://paddle-serving.bj.bcebos.com/imdb-demo%2FLogoMakr-3Bd2NM-300dpi.png' width = "600" height = "130">
<br>
<p>
<p align="center">
<br>
<a href="https://travis-ci.com/PaddlePaddle/Serving">
......@@ -41,6 +41,8 @@ pip install paddle-serving-client
pip install paddle-serving-server
```
You may need to use a domestic mirror source (in China, you can use the Tsinghua mirror source) to speed up the download.
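For example, installing through the Tsinghua mirror (the same mirror URL used in the install sections later in this PR) would look like:

```shell
# Install the client and server packages through the Tsinghua PyPI mirror.
pip install paddle-serving-client -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
```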
<h2 align="center">Quick Start Example</h2>
### Boston House Price Prediction model
......@@ -146,7 +148,7 @@ python image_classification_service_demo.py resnet50_serving_model
<img src='https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg' width = "200" height = "200">
<br>
<p>
``` shell
curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg", "fetch": ["score"]}' http://127.0.0.1:9292/image/prediction
```
......
......@@ -3,7 +3,7 @@
<img src='https://paddle-serving.bj.bcebos.com/imdb-demo%2FLogoMakr-3Bd2NM-300dpi.png' width = "600" height = "130">
<br>
<p>
<p align="center">
<br>
<a href="https://travis-ci.com/PaddlePaddle/Serving">
......@@ -42,6 +42,8 @@ pip install paddle-serving-client
pip install paddle-serving-server
```
You may need to use a domestic mirror source (for example, the Tsinghua mirror source in China) to speed up the download.
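For example, installing through the Tsinghua mirror (the same mirror URL used in the install sections later in this PR) would look like:

```shell
# Install the client and server packages through the Tsinghua PyPI mirror.
pip install paddle-serving-client -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
```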
<h2 align="center">Quick Start Example</h2>
<h3 align="center">Boston House Price Prediction</h3>
......@@ -151,7 +153,7 @@ python image_classification_service_demo.py resnet50_serving_model
<img src='https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg' width = "200" height = "200">
<br>
<p>
``` shell
curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg", "fetch": ["score"]}' http://127.0.0.1:9292/image/prediction
```
......
......@@ -38,7 +38,7 @@ Here, we [use docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/R
First, start the BOW server, which listens on port `8000`:
``` shell
-docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:0.1.3
+docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:0.2.0
docker exec -it bow-server bash
pip install paddle-serving-server
python -m paddle_serving_server.serve --model model --port 8000 >std.log 2>err.log &
......@@ -48,7 +48,7 @@ exit
Similarly, start the LSTM server, which listens on port `9000`:
```bash
-docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:0.1.3
+docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:0.2.0
docker exec -it lstm-server bash
pip install paddle-serving-server
python -m paddle_serving_server.serve --model model --port 9000 >std.log 2>err.log &
......
......@@ -38,9 +38,9 @@ with open('test_data/part-0') as fin:
First, start the BOW server, which listens on port `8000`:
```bash
-docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:0.1.3
+docker run -dit -v $PWD/imdb_bow_model:/model -p 8000:8000 --name bow-server hub.baidubce.com/paddlepaddle/serving:0.2.0
docker exec -it bow-server bash
-pip install paddle-serving-server
+pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
python -m paddle_serving_server.serve --model model --port 8000 >std.log 2>err.log &
exit
```
......@@ -48,9 +48,9 @@ exit
Similarly, start the LSTM server, which listens on port `9000`:
```bash
-docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:0.1.3
+docker run -dit -v $PWD/imdb_lstm_model:/model -p 9000:9000 --name lstm-server hub.baidubce.com/paddlepaddle/serving:0.2.0
docker exec -it lstm-server bash
-pip install paddle-serving-server
+pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
python -m paddle_serving_server.serve --model model --port 9000 >std.log 2>err.log &
exit
```
......
......@@ -11,7 +11,7 @@
```shell
go get github.com/PaddlePaddle/Serving/go/serving_client
go get github.com/PaddlePaddle/Serving/go/proto
-pip install paddle-serving-server
+pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
```
### Download the text classification model
......
......@@ -15,7 +15,7 @@ You can get images in two ways:
1. Pull image directly
```bash
-docker pull hub.baidubce.com/paddlepaddle/serving:0.1.3
+docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
```
2. Building image based on dockerfile
......@@ -23,13 +23,13 @@ You can get images in two ways:
Create a new folder and copy [Dockerfile](../tools/Dockerfile) to this folder, and run the following command:
```bash
-docker build -t hub.baidubce.com/paddlepaddle/serving:0.1.3 .
+docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0 .
```
### Create container
```bash
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.1.3
+docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0
docker exec -it test bash
```
......@@ -37,12 +37,18 @@ The `-p` option is to map the `9292` port of the container to the `9292` port of
### Install PaddleServing
-In order to make the image smaller, the PaddleServing package is not installed in the image. You can run the following command to install it
+In order to make the image smaller, the PaddleServing package is not installed in the image. You can run the following command to install it:
```bash
pip install paddle-serving-server
```
You may need to use a domestic mirror source (in China, you can use the Tsinghua mirror source, as in the following example) to speed up the download:
```shell
pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
```
### Test example
Before running the GPU version of the Server side code, you need to set the `CUDA_VISIBLE_DEVICES` environment variable to specify which GPUs the prediction service uses. The following example specifies two GPUs with indexes 0 and 1:
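The elided example presumably amounts to exporting the variable before starting the server; a minimal sketch, with the device indexes `0,1` taken from the sentence above:

```shell
# Make only the GPUs with indexes 0 and 1 visible to the prediction service.
export CUDA_VISIBLE_DEVICES=0,1
```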
......@@ -107,7 +113,7 @@ You can also get images in two ways:
1. Pull image directly
```bash
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.1.3-gpu
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
```
2. Building image based on dockerfile
......@@ -115,13 +121,13 @@ You can also get images in two ways:
Create a new folder and copy [Dockerfile.gpu](../tools/Dockerfile.gpu) to this folder, and run the following command:
```bash
-nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:0.1.3-gpu .
+nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu .
```
### Create container
```bash
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.1.3-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
nvidia-docker exec -it test bash
```
......@@ -135,6 +141,12 @@ In order to make the image smaller, the PaddleServing package is not installed i
pip install paddle-serving-server-gpu
```
You may need to use a domestic mirror source (in China, you can use the Tsinghua mirror source, as in the following example) to speed up the download:
```shell
pip install paddle-serving-server-gpu -i https://pypi.tuna.tsinghua.edu.cn/simple
```
### Test example
When running the GPU Server, you need to set the GPUs used by the prediction service through the `--gpu_ids` option, and the CPU is used by default. An error will be reported when the value of `--gpu_ids` exceeds the environment variable `CUDA_VISIBLE_DEVICES`. The following example specifies to use a GPU with index 0:
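As a sketch of such a launch: the `paddle_serving_server_gpu.serve` module name below is an assumption inferred from the `paddle-serving-server-gpu` package installed above (check the installed package for the exact entry point), and `model` is the mounted model directory used elsewhere in these docs:

```shell
# Start the GPU server pinned to the GPU with index 0 via --gpu_ids.
# NOTE: the module path is an assumption inferred from the package name.
python -m paddle_serving_server_gpu.serve --model model --port 9292 --gpu_ids 0
```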
......
......@@ -15,7 +15,7 @@ Docker (the GPU version requires nvidia-docker to be installed on a GPU machine)
1. Pull the image directly
```bash
-docker pull hub.baidubce.com/paddlepaddle/serving:0.1.3
+docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0
```
2. Build the image from a Dockerfile
......@@ -23,13 +23,13 @@ Docker (the GPU version requires nvidia-docker to be installed on a GPU machine)
Create a new directory, copy the contents of [Dockerfile](../tools/Dockerfile) into a file named Dockerfile in that directory, and run:
```bash
-docker build -t hub.baidubce.com/paddlepaddle/serving:0.1.3 .
+docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0 .
```
### Create and enter the container
```bash
-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.1.3
+docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0
docker exec -it test bash
```
......@@ -37,12 +37,18 @@ docker exec -it test bash
### Install PaddleServing
-To keep the image small, the Serving package is not installed in the image; run the following command to install it
+To keep the image small, the Serving package is not installed in the image; run the following command to install it:
```bash
pip install paddle-serving-server
```
You may need to use a domestic mirror source (for example, the Tsinghua mirror source in China) to speed up the download:
```shell
pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple
```
### Test example
Download the trained Boston house price prediction model with the following command:
......@@ -99,7 +105,7 @@ The GPU version is basically the same as the CPU version, differing only in the naming of some interfaces (the GPU ver
1. Pull the image directly
```bash
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.1.3-gpu
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
```
2. Build the image from a Dockerfile
......@@ -107,13 +113,13 @@ The GPU version is basically the same as the CPU version, differing only in the naming of some interfaces (the GPU ver
Create a new directory, copy the contents of [Dockerfile.gpu](../tools/Dockerfile.gpu) into a file named Dockerfile in that directory, and run:
```bash
-nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:0.1.3-gpu .
+nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu .
```
### Create and enter the container
```bash
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.1.3-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:0.2.0-gpu
nvidia-docker exec -it test bash
```
......@@ -121,12 +127,18 @@ nvidia-docker exec -it test bash
### Install PaddleServing
-To keep the image small, the Serving package is not installed in the image; run the following command to install it
+To keep the image small, the Serving package is not installed in the image; run the following command to install it:
```bash
pip install paddle-serving-server-gpu
```
You may need to use a domestic mirror source (for example, the Tsinghua mirror source in China) to speed up the download:
```shell
pip install paddle-serving-server-gpu -i https://pypi.tuna.tsinghua.edu.cn/simple
```
### Test example
When running the GPU version of the server, set the GPUs used by the prediction service via the `--gpu_ids` option; by default the CPU is used. An error is reported when the value of `--gpu_ids` exceeds the environment variable `CUDA_VISIBLE_DEVICES`. The following example specifies the GPU with index 0:
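As a sketch of such a launch: the `paddle_serving_server_gpu.serve` module name below is an assumption inferred from the `paddle-serving-server-gpu` package installed above (check the installed package for the exact entry point), and `model` is the mounted model directory used elsewhere in these docs:

```shell
# Start the GPU server pinned to the GPU with index 0 via --gpu_ids.
# NOTE: the module path is an assumption inferred from the package name.
python -m paddle_serving_server_gpu.serve --model model --port 9292 --gpu_ids 0
```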
......