diff --git a/README.md b/README.md
index 34e22a726282f9167e70a30cb792335ad171b34f..17df4782ef3494e1eb4cfa879cb575088e48d2c3 100644
--- a/README.md
+++ b/README.md
@@ -7,6 +7,7 @@

+


@@ -29,7 +30,7 @@ We consider deploying deep learning inference service online to be a user-facing

Installation

-We **highly recommend** you to **run Paddle Serving in Docker**, please visit [Run in Docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/RUN_IN_DOCKER.md)
+We **highly recommend** that you **run Paddle Serving in Docker**. Please visit [Run in Docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/RUN_IN_DOCKER.md). See [Docker Images](doc/DOCKER_IMAGES.md) for more docker images.
 ```
 # Run CPU Docker
 docker pull hub.baidubce.com/paddlepaddle/serving:latest
@@ -38,8 +39,8 @@ docker exec -it test bash
 ```
 ```
 # Run GPU Docker
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-cuda9.0-cudnn7
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-cuda9.0-cudnn7
 nvidia-docker exec -it test bash
 ```
@@ -75,7 +76,7 @@ Packages of Paddle Serving support Centos 6/7 and Ubuntu 16/18, or you can use H

-
+
 ``` shell
 > python -m paddle_serving_app.package --get_model resnet_v2_50_imagenet
 > tar -xzf resnet_v2_50_imagenet.tar.gz
diff --git a/README_CN.md b/README_CN.md
index 7a42e6cd9c02fa6c51cba7a3228cd0916dd64de2..f954877b08ed793dd641f7541ff2717feac2070f 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -7,6 +7,7 @@

+


@@ -31,7 +32,7 @@ Paddle Serving 旨在帮助深度学习开发者轻易部署在线预测服务

安装

-**强烈建议**您在**Docker内构建**Paddle Serving,请查看[如何在Docker中运行PaddleServing](doc/RUN_IN_DOCKER_CN.md)
+**强烈建议**您在**Docker内构建**Paddle Serving,请查看[如何在Docker中运行PaddleServing](doc/RUN_IN_DOCKER_CN.md)。更多镜像请查看[Docker镜像列表](doc/DOCKER_IMAGES_CN.md)。
 ```
 # 启动 CPU Docker
@@ -41,8 +42,8 @@ docker exec -it test bash
 ```
 ```
 # 启动 GPU Docker
-nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-cuda9.0-cudnn7
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-cuda9.0-cudnn7
 nvidia-docker exec -it test bash
 ```
 ```shell
@@ -76,7 +77,7 @@ Paddle Serving安装包支持Centos 6/7和Ubuntu 16/18,或者您可以使用HT

-
+
 ``` shell
 > python -m paddle_serving_app.package --get_model resnet_v2_50_imagenet
 > tar -xzf resnet_v2_50_imagenet.tar.gz
diff --git a/doc/COMPILE.md b/doc/COMPILE.md
index 734d32d8ff60aee69c4267cfa4b00e96514bf389..46ebfb4f1a882b6645cb1e9bb6155743e520951d 100644
--- a/doc/COMPILE.md
+++ b/doc/COMPILE.md
@@ -11,10 +11,7 @@
 - CMake:3.2.2 and later
 - Python:2.7.2 and later / 3.6 and later
-It is recommended to use Docker for compilation. We have prepared the Paddle Serving compilation environment for you:
-
-- CPU: `hub.baidubce.com/paddlepaddle/serving:latest-devel`,dockerfile: [Dockerfile.devel](../tools/Dockerfile.devel)
-- GPU: `hub.baidubce.com/paddlepaddle/serving:latest-gpu-devel`,dockerfile: [Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
+It is recommended to use Docker for compilation. We have prepared the Paddle Serving compilation environment for you, see [this document](DOCKER_IMAGES.md).
 This document will take Python2 as an example to show how to compile Paddle Serving. If you want to compile with Python3, just adjust the Python options of cmake:
@@ -29,6 +26,9 @@ git clone https://github.com/PaddlePaddle/Serving
 cd Serving && git submodule update --init --recursive
 ```
+
+
+
 ## PYTHONROOT Setting
 ```shell
@@ -38,6 +38,18 @@ export PYTHONROOT=/usr/
 In the default centos7 image we provide, the Python path is `/usr/bin/python`. If you want to use our centos6 image, you need to set it to `export PYTHONROOT=/usr/local/python2.7/`.
+
+
+## Install Python dependencies
+
+```shell
+pip install -r python/requirements.txt
+```
+
+If Python3 is used, replace `pip` with `pip3`.
+
+
+
 ## Compile Server
 ### Integrated CPU version paddle inference library
@@ -62,6 +74,8 @@ execute `make install` to put targets under directory `./output`
 **Attention:** After the compilation is successful, you need to set the path of `SERVING_BIN`. See [Note](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE.md#Note) for details.
+
+
 ## Compile Client
 ``` shell
@@ -72,6 +86,8 @@ make -j10
 execute `make install` to put targets under directory `./output`
+
+
 ## Compile the App
 ```bash
@@ -80,15 +96,20 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PY
 make
 ```
+
+
 ## Install wheel package
 Regardless of the client, server or App part, after compiling, install the whl package under `python/dist/`.
+
+
 ## Note
 When running the python server, it will check the `SERVING_BIN` environment variable. If you want to use your own compiled binary file, set the environment variable to the path of the corresponding binary file, usually`export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`.
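For reference, a minimal sketch of the Python3 flow that the compile document above describes: install the Python dependencies, configure cmake with Python3 paths, build the server, and point `SERVING_BIN` at the result. The `$PYTHONROOT` layout, the build directory name, and the exact library paths are assumptions and should be adapted to the image actually used:

```shell
# Assumed: a CentOS7 development image with Python 3.6 installed under $PYTHONROOT (adjust paths as needed).
export PYTHONROOT=/usr/
pip3 install -r python/requirements.txt

# Configure and build the CPU server with the Python3 variants of the cmake options shown above.
mkdir server-build-cpu && cd server-build-cpu
cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python3.6m/ \
      -DPYTHON_LIBRARIES=$PYTHONROOT/lib64/libpython3.6m.so \
      -DPYTHON_EXECUTABLE=$PYTHONROOT/bin/python3.6 \
      -DSERVER=ON ..
make -j10 && make install

# As the Note above explains, tell the python server to use the freshly built binary.
export SERVING_BIN=$(pwd)/core/general-server/serving
```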
+
 ## CMake Option Description
 | Compile Options | Description | Default |
diff --git a/doc/COMPILE_CN.md b/doc/COMPILE_CN.md
index 1d5d60bdff34a2561ca830faf8fe3404a4a9fd96..54f80d54d334835600d08846dc0fb42efe6558ee 100644
--- a/doc/COMPILE_CN.md
+++ b/doc/COMPILE_CN.md
@@ -11,10 +11,7 @@
 - CMake:3.2.2及以上
 - Python:2.7.2及以上 / 3.6及以上
-推荐使用Docker编译,我们已经为您准备好了Paddle Serving编译环境:
-
-- CPU: `hub.baidubce.com/paddlepaddle/serving:latest-devel`,dockerfile: [Dockerfile.devel](../tools/Dockerfile.devel)
-- GPU: `hub.baidubce.com/paddlepaddle/serving:latest-gpu-devel`,dockerfile: [Dockerfile.gpu.devel](../tools/Dockerfile.gpu.devel)
+推荐使用Docker编译,我们已经为您准备好了Paddle Serving编译环境,详见[该文档](DOCKER_IMAGES_CN.md)。
 本文档将以Python2为例介绍如何编译Paddle Serving。如果您想用Python3进行编译,只需要调整cmake的Python相关选项即可:
@@ -29,6 +26,9 @@ git clone https://github.com/PaddlePaddle/Serving
 cd Serving && git submodule update --init --recursive
 ```
+
+
+
 ## PYTHONROOT设置
 ```shell
@@ -38,6 +38,18 @@ export PYTHONROOT=/usr/
 我们提供默认Centos7的Python路径为`/usr/bin/python`,如果您要使用我们的Centos6镜像,需要将其设置为`export PYTHONROOT=/usr/local/python2.7/`。
+
+
+## 安装Python依赖
+
+```shell
+pip install -r python/requirements.txt
+```
+
+如果使用 Python3,请以 `pip3` 替换 `pip`。
+
+
+
 ## 编译Server部分
 ### 集成CPU版本Paddle Inference Library
@@ -62,6 +74,8 @@ make -j10
 **注意:** 编译成功后,需要设置`SERVING_BIN`路径,详见后面的[注意事项](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE_CN.md#注意事项)。
+
+
 ## 编译Client部分
 ``` shell
@@ -72,6 +86,8 @@ make -j10
 执行`make install`可以把目标产出放在`./output`目录下。
+
+
 ## 编译App部分
 ```bash
@@ -80,14 +96,20 @@ cmake -DPYTHON_INCLUDE_DIR=$PYTHONROOT/include/python2.7/ -DPYTHON_LIBRARIES=$PY
 make
 ```
+
+
 ## 安装wheel包
 无论是Client端,Server端还是App部分,编译完成后,安装`python/dist/`下的whl包即可。
+
+
 ## 注意事项
 运行python端Server时,会检查`SERVING_BIN`环境变量,如果想使用自己编译的二进制文件,请将设置该环境变量为对应二进制文件的路径,通常是`export SERVING_BIN=${BUILD_DIR}/core/general-server/serving`。
+
+
 ## CMake选项说明
 | 编译选项 | 说明 | 默认 |
diff --git a/doc/DOCKER_IMAGES.md b/doc/DOCKER_IMAGES.md
new file mode 100644
index 0000000000000000000000000000000000000000..47a300eabc85689f9bce7c46c353b35b85db9376
--- /dev/null
+++ b/doc/DOCKER_IMAGES.md
@@ -0,0 +1,42 @@
+# Docker Images
+
+([简体中文](DOCKER_IMAGES_CN.md)|English)
+
+This document maintains a list of docker images provided by Paddle Serving.
+
+## Get docker image
+
+You can get images in two ways:
+
+1. Pull image directly from `hub.baidubce.com` or `docker.io` through TAG:
+
+   ```shell
+   docker pull hub.baidubce.com/paddlepaddle/serving:<TAG> # hub.baidubce.com
+   docker pull paddlepaddle/serving:<TAG> # hub.docker.com
+   ```
+
+2. Building image based on dockerfile
+
+   Create a new folder and copy Dockerfile to this folder, and run the following command:
+
+   ```shell
+   docker build -t <image-name>:<tag> .
+   ```
+
+
+
+
+## Image description
+
+Runtime images cannot be used for compilation.
+
+| Description | OS | TAG | Dockerfile |
+| :----------------------------------------------------------: | :-----: | :--------------------------: | :----------------------------------------------------------: |
+| CPU runtime | CentOS7 | latest | [Dockerfile](../tools/Dockerfile) |
+| CPU development | CentOS7 | latest-devel | [Dockerfile.devel](../tools/Dockerfile.devel) |
+| GPU (cuda9.0-cudnn7) runtime | CentOS7 | latest-cuda9.0-cudnn7 | [Dockerfile.cuda9.0-cudnn7](../tools/Dockerfile.cuda9.0-cudnn7) |
+| GPU (cuda9.0-cudnn7) development | CentOS7 | latest-cuda9.0-cudnn7-devel | [Dockerfile.cuda9.0-cudnn7.devel](../tools/Dockerfile.cuda9.0-cudnn7.devel) |
+| GPU (cuda10.0-cudnn7) runtime | CentOS7 | latest-cuda10.0-cudnn7 | [Dockerfile.cuda10.0-cudnn7](../tools/Dockerfile.cuda10.0-cudnn7) |
+| GPU (cuda10.0-cudnn7) development | CentOS7 | latest-cuda10.0-cudnn7-devel | [Dockerfile.cuda10.0-cudnn7.devel](../tools/Dockerfile.cuda10.0-cudnn7.devel) |
+| CPU development (Used to compile packages on Ubuntu) | CentOS6 | <None> | [Dockerfile.centos6.devel](../tools/Dockerfile.centos6.devel) |
+| GPU (cuda9.0-cudnn7) development (Used to compile packages on Ubuntu) | CentOS6 | <None> | [Dockerfile.centos6.cuda9.0-cudnn7.devel](../tools/Dockerfile.centos6.cuda9.0-cudnn7.devel) |
diff --git a/doc/DOCKER_IMAGES_CN.md b/doc/DOCKER_IMAGES_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..26ef5e8bd8c23a281604e5ff0319416c3e408472
--- /dev/null
+++ b/doc/DOCKER_IMAGES_CN.md
@@ -0,0 +1,42 @@
+# Docker 镜像
+
+(简体中文|[English](DOCKER_IMAGES.md))
+
+该文档维护了 Paddle Serving 提供的镜像列表。
+
+## 获取镜像
+
+您可以通过两种方式获取镜像。
+
+1. 通过 TAG 直接从 `hub.baidubce.com` 或 `docker.io` 拉取镜像:
+
+   ```shell
+   docker pull hub.baidubce.com/paddlepaddle/serving:<TAG> # hub.baidubce.com
+   docker pull paddlepaddle/serving:<TAG> # hub.docker.com
+   ```
+
+2. 基于 Dockerfile 构建镜像
+
+   建立新目录,复制对应 Dockerfile 内容到该目录下 Dockerfile 文件。执行
+
+   ```shell
+   docker build -t <image-name>:<tag> .
+   ```
+
+
+
+
+## 镜像说明
+
+运行时镜像不能用于开发编译。
+
+| 镜像说明 | 操作系统 | TAG | Dockerfile |
+| -------------------------------------------------- | -------- | ---------------------------- | ------------------------------------------------------------ |
+| CPU 运行镜像 | CentOS7 | latest | [Dockerfile](../tools/Dockerfile) |
+| CPU 开发镜像 | CentOS7 | latest-devel | [Dockerfile.devel](../tools/Dockerfile.devel) |
+| GPU (cuda9.0-cudnn7) 运行镜像 | CentOS7 | latest-cuda9.0-cudnn7 | [Dockerfile.cuda9.0-cudnn7](../tools/Dockerfile.cuda9.0-cudnn7) |
+| GPU (cuda9.0-cudnn7) 开发镜像 | CentOS7 | latest-cuda9.0-cudnn7-devel | [Dockerfile.cuda9.0-cudnn7.devel](../tools/Dockerfile.cuda9.0-cudnn7.devel) |
+| GPU (cuda10.0-cudnn7) 运行镜像 | CentOS7 | latest-cuda10.0-cudnn7 | [Dockerfile.cuda10.0-cudnn7](../tools/Dockerfile.cuda10.0-cudnn7) |
+| GPU (cuda10.0-cudnn7) 开发镜像 | CentOS7 | latest-cuda10.0-cudnn7-devel | [Dockerfile.cuda10.0-cudnn7.devel](../tools/Dockerfile.cuda10.0-cudnn7.devel) |
+| CPU 开发镜像 (用于编译 Ubuntu 包) | CentOS6 | <无> | [Dockerfile.centos6.devel](../tools/Dockerfile.centos6.devel) |
+| GPU (cuda9.0-cudnn7) 开发镜像 (用于编译 Ubuntu 包) | CentOS6 | <无> | [Dockerfile.centos6.cuda9.0-cudnn7.devel](../tools/Dockerfile.centos6.cuda9.0-cudnn7.devel) |
diff --git a/doc/RUN_IN_DOCKER.md b/doc/RUN_IN_DOCKER.md
index 32a4aae1fb2bf866fe250de0b4ed055a707c8fd0..466a689f3794a78f140517a28e2a758c3149735e 100644
--- a/doc/RUN_IN_DOCKER.md
+++ b/doc/RUN_IN_DOCKER.md
@@ -12,21 +12,12 @@ This document takes Python2 as an example to show how to run Paddle Serving in d
 ### Get docker image
-You can get images in two ways:
+Refer to [this document](DOCKER_IMAGES.md) for a docker image:
-1. Pull image directly
-
-   ```bash
-   docker pull hub.baidubce.com/paddlepaddle/serving:latest
-   ```
-
-2. Building image based on dockerfile
-
-   Create a new folder and copy [Dockerfile](../tools/Dockerfile) to this folder, and run the following command:
+```shell
+docker pull hub.baidubce.com/paddlepaddle/serving:latest
+```
-   ```bash
-   docker build -t hub.baidubce.com/paddlepaddle/serving:latest .
-   ```
 ### Create container
@@ -104,26 +95,16 @@ The GPU version is basically the same as the CPU version, with only some differe
 ### Get docker image
-You can also get images in two ways:
-
-1. Pull image directly
+Refer to [this document](DOCKER_IMAGES.md) for a docker image; the following is an example of a `cuda9.0-cudnn7` image:
-   ```bash
-   nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
-   ```
-
-2. Building image based on dockerfile
-
-   Create a new folder and copy [Dockerfile.gpu](../tools/Dockerfile.gpu) to this folder, and run the following command:
-
-   ```bash
-   nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:latest-gpu .
-   ```
+```shell
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-cuda9.0-cudnn7
+```
 ### Create container
 ```bash
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-cuda9.0-cudnn7
 nvidia-docker exec -it test bash
 ```
@@ -200,4 +181,4 @@ tar -xzf uci_housing.tar.gz
 ## Attention
-The images provided by this document are all runtime images, which do not support compilation. If you want to compile from source, refer to [COMPILE](COMPILE.md).
+Runtime images cannot be used for compilation. If you want to compile from source, refer to [COMPILE](COMPILE.md).
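The GPU walkthrough in RUN_IN_DOCKER.md above only shows the `cuda9.0-cudnn7` tag. As a sketch, the same steps with the CUDA 10.0 image listed in DOCKER_IMAGES.md look like this (the container name `test` is kept from the examples above):

```shell
# Pull the cuda10.0-cudnn7 runtime image from the image list and start a container from it.
nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-cuda10.0-cudnn7
nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-cuda10.0-cudnn7
nvidia-docker exec -it test bash
```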
diff --git a/doc/RUN_IN_DOCKER_CN.md b/doc/RUN_IN_DOCKER_CN.md
index b95344923605ade590b8bed509a2dd6f59640433..cc800820c7d602454ce180c7344c092a458bca1b 100644
--- a/doc/RUN_IN_DOCKER_CN.md
+++ b/doc/RUN_IN_DOCKER_CN.md
@@ -12,21 +12,12 @@ Docker(GPU版本需要在GPU机器上安装nvidia-docker)
 ### 获取镜像
-可以通过两种方式获取镜像。
+参考[该文档](DOCKER_IMAGES_CN.md)获取镜像:
-1. 直接拉取镜像
-
-   ```bash
-   docker pull hub.baidubce.com/paddlepaddle/serving:latest
-   ```
-
-2. 基于Dockerfile构建镜像
-
-   建立新目录,复制[Dockerfile](../tools/Dockerfile)内容到该目录下Dockerfile文件。执行
+```shell
+docker pull hub.baidubce.com/paddlepaddle/serving:latest
+```
-   ```bash
-   docker build -t hub.baidubce.com/paddlepaddle/serving:latest .
-   ```
 ### 创建容器并进入
@@ -102,26 +93,16 @@ GPU版本与CPU版本基本一致,只有部分接口命名的差别(GPU版
 ### 获取镜像
-可以通过两种方式获取镜像。
-
-1. 直接拉取镜像
+参考[该文档](DOCKER_IMAGES_CN.md)获取镜像,这里以 `cuda9.0-cudnn7` 的镜像为例:
-   ```bash
-   nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-gpu
-   ```
-
-2. 基于Dockerfile构建镜像
-
-   建立新目录,复制[Dockerfile.gpu](../tools/Dockerfile.gpu)内容到该目录下Dockerfile文件。执行
-
-   ```bash
-   nvidia-docker build -t hub.baidubce.com/paddlepaddle/serving:latest-gpu .
-   ```
+```shell
+nvidia-docker pull hub.baidubce.com/paddlepaddle/serving:latest-cuda9.0-cudnn7
+```
 ### 创建容器并进入
 ```bash
-nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-gpu
+nvidia-docker run -p 9292:9292 --name test -dit hub.baidubce.com/paddlepaddle/serving:latest-cuda9.0-cudnn7
 nvidia-docker exec -it test bash
 ```
@@ -195,4 +176,4 @@ tar -xzf uci_housing.tar.gz
 ## 注意事项
-该文档提供的镜像均为运行镜像,不支持开发编译。如果想要从源码编译,请查看[如何编译PaddleServing](COMPILE.md)。
+运行时镜像不能用于开发编译。如果想要从源码编译,请查看[如何编译PaddleServing](COMPILE.md)。
diff --git a/tools/Dockerfile.centos6.gpu.devel b/tools/Dockerfile.centos6.cuda9.0-cudnn7.devel
similarity index 100%
rename from tools/Dockerfile.centos6.gpu.devel
rename to tools/Dockerfile.centos6.cuda9.0-cudnn7.devel
diff --git a/tools/Dockerfile.cuda10.0-cudnn7 b/tools/Dockerfile.cuda10.0-cudnn7
new file mode 100644
index 0000000000000000000000000000000000000000..d2a5b2c93a3e78b807c7828c984a5fc29f50fd2d
--- /dev/null
+++ b/tools/Dockerfile.cuda10.0-cudnn7
@@ -0,0 +1,23 @@
+FROM nvidia/cuda:10.0-cudnn7-devel-centos7 as builder
+
+FROM nvidia/cuda:10.0-cudnn7-runtime-centos7
+RUN yum -y install wget && \
+    yum -y install epel-release && yum -y install patchelf && \
+    yum -y install gcc gcc-c++ make python-devel && \
+    yum -y install libSM-1.2.2-2.el7.x86_64 --setopt=protected_multilib=false && \
+    yum -y install libXrender-0.9.10-1.el7.x86_64 --setopt=protected_multilib=false && \
+    yum -y install libXext-1.3.3-3.el7.x86_64 --setopt=protected_multilib=false && \
+    yum -y install python3 python3-devel && \
+    yum clean all
+
+RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && \
+    python get-pip.py && rm get-pip.py
+
+RUN ln -s /usr/local/cuda-10.0/lib64/libcublas.so.10.0 /usr/local/cuda-10.0/lib64/libcublas.so && \
+    echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> /root/.bashrc && \
+    ln -s /usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7 /usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so && \
+    echo 'export LD_LIBRARY_PATH=/usr/local/cuda-10.0/targets/x86_64-linux/lib:$LD_LIBRARY_PATH' >> /root/.bashrc && \
+    echo "export LANG=en_US.utf8" >> /root/.bashrc && \
+    mkdir -p /usr/local/cuda/extras
+
+COPY --from=builder /usr/local/cuda/extras/CUPTI /usr/local/cuda/extras/CUPTI
diff --git a/tools/Dockerfile.cuda10.0-cudnn7.devel b/tools/Dockerfile.cuda10.0-cudnn7.devel
new file mode 100644
index 0000000000000000000000000000000000000000..8021ef31f05622cec6fb3aff681feb5107d2be2c
--- /dev/null
+++ b/tools/Dockerfile.cuda10.0-cudnn7.devel
@@ -0,0 +1,35 @@
+FROM nvidia/cuda:10.0-cudnn7-runtime-centos7
+
+RUN yum -y install wget >/dev/null \
+    && yum -y install gcc gcc-c++ make glibc-static which \
+    && yum -y install git openssl-devel curl-devel bzip2-devel python-devel \
+    && yum -y install libSM-1.2.2-2.el7.x86_64 --setopt=protected_multilib=false \
+    && yum -y install libXrender-0.9.10-1.el7.x86_64 --setopt=protected_multilib=false \
+    && yum -y install libXext-1.3.3-3.el7.x86_64 --setopt=protected_multilib=false
+
+RUN wget https://cmake.org/files/v3.2/cmake-3.2.0-Linux-x86_64.tar.gz >/dev/null \
+    && tar xzf cmake-3.2.0-Linux-x86_64.tar.gz \
+    && mv cmake-3.2.0-Linux-x86_64 /usr/local/cmake3.2.0 \
+    && echo 'export PATH=/usr/local/cmake3.2.0/bin:$PATH' >> /root/.bashrc \
+    && rm cmake-3.2.0-Linux-x86_64.tar.gz
+
+RUN wget https://dl.google.com/go/go1.14.linux-amd64.tar.gz >/dev/null \
+    && tar xzf go1.14.linux-amd64.tar.gz \
+    && mv go /usr/local/go \
+    && echo 'export GOROOT=/usr/local/go' >> /root/.bashrc \
+    && echo 'export PATH=/usr/local/go/bin:$PATH' >> /root/.bashrc \
+    && rm go1.14.linux-amd64.tar.gz
+
+RUN yum -y install python-devel sqlite-devel \
+    && curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py >/dev/null \
+    && python get-pip.py >/dev/null \
+    && pip install google protobuf setuptools wheel flask >/dev/null \
+    && rm get-pip.py
+
+RUN yum install -y python3 python3-devel \
+    && pip3 install google protobuf setuptools wheel flask \
+    && yum -y install epel-release && yum -y install patchelf libXext libSM libXrender\
+    && yum clean all
+
+RUN localedef -c -i en_US -f UTF-8 en_US.UTF-8 \
+    && echo "export LANG=en_US.utf8" >> /root/.bashrc
diff --git a/tools/Dockerfile.gpu b/tools/Dockerfile.cuda9.0-cudnn7
similarity index 100%
rename from tools/Dockerfile.gpu
rename to tools/Dockerfile.cuda9.0-cudnn7
diff --git a/tools/Dockerfile.gpu.devel b/tools/Dockerfile.cuda9.0-cudnn7.devel
similarity index 100%
rename from tools/Dockerfile.gpu.devel
rename to tools/Dockerfile.cuda9.0-cudnn7.devel
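As a usage sketch for the renamed and newly added Dockerfiles, local images can be built from the repository root with `docker build -f`; the image names on the left are illustrative, not tags published by the project:

```shell
# Build local runtime/development images from the Dockerfiles touched by this patch.
docker build -t serving-runtime:cuda9.0-cudnn7  -f tools/Dockerfile.cuda9.0-cudnn7 .
docker build -t serving-runtime:cuda10.0-cudnn7 -f tools/Dockerfile.cuda10.0-cudnn7 .
docker build -t serving-devel:cuda10.0-cudnn7   -f tools/Dockerfile.cuda10.0-cudnn7.devel .
```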