From 484db2fd0ca509a345a703a413f7ab7c0943730f Mon Sep 17 00:00:00 2001
From: barrierye
Date: Tue, 31 Mar 2020 21:24:33 +0800
Subject: [PATCH] update doc

---
 doc/RUN_IN_DOCKER.md    | 17 ++++++-----------
 doc/RUN_IN_DOCKER_CN.md | 15 +++++----------
 2 files changed, 11 insertions(+), 21 deletions(-)

diff --git a/doc/RUN_IN_DOCKER.md b/doc/RUN_IN_DOCKER.md
index 59e70c8b..48d1d265 100644
--- a/doc/RUN_IN_DOCKER.md
+++ b/doc/RUN_IN_DOCKER.md
@@ -1,6 +1,6 @@
 # How to run PaddleServing in Docker
 
-([简体中文](./RUN_IN_DOCKER_CN.md)|English)
+([简体中文](RUN_IN_DOCKER_CN.md)|English)
 
 ## Requirements
 
@@ -137,16 +137,11 @@ pip install paddle-serving-server-gpu
 
 ### Test example
 
-When running the GPU Server, you need to set the GPUs used by the prediction service. By default, CPU version is used. You can configure it in two ways:
-
-1. Using the `CUDA_VISIBLE_DEVICES` environment variable, the following example specifies two GPUs with an index of 0 and 1:
-
-   ```shell
-   export CUDA_VISIBLE_DEVICES=0,1
-   ```
-
-2. Using the `--gpu_ids` option, which will overrides the configuration of `CUDA_VISIBLE_DEVICES`.
-
+When running the GPU Server, you need to set the GPUs used by the prediction service through the `--gpu_ids` option; the CPU is used by default. An error will be reported if a GPU index in `--gpu_ids` is not covered by the `CUDA_VISIBLE_DEVICES` environment variable. The following example specifies the GPU with index 0:
+```shell
+export CUDA_VISIBLE_DEVICES=0,1
+python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --gpu_ids 0
+```
 
 Get the trained Boston house price prediction model by the following command:
 
diff --git a/doc/RUN_IN_DOCKER_CN.md b/doc/RUN_IN_DOCKER_CN.md
index 3aa10d25..8800b3a3 100644
--- a/doc/RUN_IN_DOCKER_CN.md
+++ b/doc/RUN_IN_DOCKER_CN.md
@@ -129,16 +129,11 @@ pip install paddle-serving-server-gpu
 
 ### 测试example
 
-在运行GPU版Server时需要设置预测服务使用的GPU，缺省状态默认使用CPU。可以通过下面两种方式进行配置：
-
-1. 使用`CUDA_VISIBLE_DEVICES`环境变量，下面的示例为指定索引为0和1两块GPU：
-
-   ```shell
-   export CUDA_VISIBLE_DEVICES=0,1
-   ```
-
-2. 使用`--gpu_ids`选项，该选项将覆盖`CUDA_VISIBLE_DEVICES`的配置。
-
+在运行GPU版Server时需要通过`--gpu_ids`选项设置预测服务使用的GPU，缺省状态默认使用CPU。当设置的`--gpu_ids`超出环境变量`CUDA_VISIBLE_DEVICES`时会报错。下面的示例为指定使用索引为0的GPU：
+```shell
+export CUDA_VISIBLE_DEVICES=0,1
+python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --gpu_ids 0
+```
 
 通过下面命令获取训练好的Boston房价预估模型：
 
-- 
GitLab
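
The new doc text above says the server errors out when `--gpu_ids` goes beyond the devices exposed by `CUDA_VISIBLE_DEVICES`. A minimal sketch of that behaviour, assuming the `paddle-serving-server-gpu` package is installed and the `uci_housing_model` directory from the quickstart is present in the working directory:

```shell
# Make two GPUs (indices 0 and 1) visible to the process.
export CUDA_VISIBLE_DEVICES=0,1

# Works: GPU 0 is within CUDA_VISIBLE_DEVICES, so the service starts on it.
python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --gpu_ids 0

# Per the note in the patch, this should report an error:
# GPU 2 is not listed in CUDA_VISIBLE_DEVICES=0,1.
python -m paddle_serving_server_gpu.serve --model uci_housing_model --port 9292 --gpu_ids 2
```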