Commit 1b33979b authored by barrierye

update doc

Parent cea76521
# How to run PaddleServing in Docker
([简体中文](./RUN_IN_DOCKER_CN.md)|English)
## Requirements
......@@ -137,16 +137,11 @@ pip install paddle-serving-server-gpu
### Test example
When running the GPU Server, you need to set the GPUs used by the prediction service; the CPU is used by default. It can be configured in either of two ways:
1. Using the `CUDA_VISIBLE_DEVICES` environment variable; the following example specifies the two GPUs with indexes 0 and 1:
```shell
export CUDA_VISIBLE_DEVICES=0,1
```
2. Using the `--gpu_ids` option, which overrides the configuration of `CUDA_VISIBLE_DEVICES`.
When running the GPU Server, you need to set the GPUs used by the prediction service through the `--gpu_ids` option; the CPU is used by default. An error will be reported when a value passed to `--gpu_ids` is outside the GPUs exposed by the `CUDA_VISIBLE_DEVICES` environment variable. The following example specifies the GPU with index 0:
```shell
export CUDA_VISIBLE_DEVICES=0,1
python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --gpu_ids 0
```
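Building on the single-GPU example above, the sketch below shows how the same command could target both visible GPUs. It is a minimal sketch, assuming (not confirming) that `--gpu_ids` accepts a comma-separated list of indexes in the same way as the single-ID form:
```shell
# Make GPUs 0 and 1 visible to the serving process.
export CUDA_VISIBLE_DEVICES=0,1
# Sketch only: assumes --gpu_ids accepts a comma-separated list of IDs,
# each of which must be within CUDA_VISIBLE_DEVICES.
python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --gpu_ids 0,1
```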
Get the trained Boston house price prediction model by the following command:
......
......@@ -129,16 +129,11 @@ pip install paddle-serving-server-gpu
### Test example
When running the GPU Server, you need to set the GPUs used by the prediction service; the CPU is used by default. It can be configured in either of two ways:
1. Using the `CUDA_VISIBLE_DEVICES` environment variable; the following example specifies the two GPUs with indexes 0 and 1:
```shell
export CUDA_VISIBLE_DEVICES=0,1
```
2. Using the `--gpu_ids` option, which overrides the configuration of `CUDA_VISIBLE_DEVICES`.
When running the GPU Server, you need to set the GPUs used by the prediction service through the `--gpu_ids` option; the CPU is used by default. An error will be reported when a value passed to `--gpu_ids` is outside the GPUs exposed by the `CUDA_VISIBLE_DEVICES` environment variable. The following example specifies the GPU with index 0:
```shell
export CUDA_VISIBLE_DEVICES=0,1
python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --gpu_ids 0
```
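To make the error condition described above concrete, here is a minimal sketch of the mismatch case: only GPU 0 is made visible while `--gpu_ids` requests GPU 1, which is expected to make the server report an error instead of starting (the exact message depends on the PaddleServing version):
```shell
# Only GPU 0 is visible to the serving process.
export CUDA_VISIBLE_DEVICES=0
# GPU 1 is outside CUDA_VISIBLE_DEVICES, so the server is expected to report
# an error rather than start (exact message depends on the version).
python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --gpu_ids 1
```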
Get the trained Boston house price prediction model with the following command:
......