Commit 5262c449 authored by Luo Tao

Merge branch 'develop' into stride

...@@ -221,3 +221,7 @@ ENDIF(PYTHONLIBS_FOUND AND PYTHONINTERP_FOUND)
INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_DIR})
INCLUDE_DIRECTORIES(${PYTHON_NUMPY_INCLUDE_DIR})
IF(NOT WITH_PYTHON)
SET(PYTHON_LIBRARIES "")
ENDIF()
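This guard clears :code:`PYTHON_LIBRARIES` when Python support is disabled, so targets that link the variable do not pull in a Python library. As a hedged sketch (the out-of-source build directory and the plain :code:`make` step are assumptions, not part of this change), a Python-free configure might look like:
.. code-block:: bash
# Hypothetical configure run; WITH_PYTHON is the option guarded above
mkdir -p build && cd build
cmake .. -DWITH_PYTHON=OFF
make -j"$(nproc)"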
...@@ -4,42 +4,76 @@ Using PaddlePaddle with Docker containers
Running in a Docker container is currently the only officially supported way to run PaddlePaddle, because Docker runs on all major operating systems (Linux, Mac OS X, and Windows). Please note that you need to change the `Docker settings <https://github.com/PaddlePaddle/Paddle/issues/627>`_ to make full use of your hardware resources on Mac OS X and Windows.
Usage of the released PaddlePaddle Docker images
------------------------------------------------
For each PaddlePaddle release, we publish two kinds of Docker images: a development image and production images. The production images include a CPU-only version, a GPU version, and their corresponding non-AVX variants.
We publish the latest Docker images on `dockerhub.com <https://hub.docker.com/r/paddledev/paddle/>`_; the most recent Paddle image versions can be found under the "tags" tab.
1. Development image: :code:`paddlepaddle/paddle:<version>-dev`
This image contains the Paddle development tools as well as the build and runtime environment. Instead of configuring a local environment, users can use the development image to do development, builds, releases,
documentation writing, and so on. Since different Paddle versions may require different dependencies and tools, take the version into account if you set up a development environment yourself.
The development image contains the following tools:
- gcc/clang
- nvcc
- Python
- sphinx
- woboq
- sshd
Many developers work on remote servers equipped with GPUs. They can log into such a server via ssh and run :code:`docker exec` to enter the development container and start working,
or they can start an SSHD service inside the development container so that they can log into the container directly:
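For example (the container name :code:`paddle-dev` below is only a hypothetical name that would have been given via :code:`--name` when the container was started):
.. code-block:: bash
# 'paddle-dev' is a hypothetical name of an already running development container
docker exec -it paddle-dev /bin/bash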
To run the development image as an interactive container:
.. code-block:: bash
docker run -it --rm paddledev/paddle:<version>-dev /bin/bash
Or run the container as a background daemon:
.. code-block:: bash
docker run -d -p 2202:22 -p 8888:8888 paddledev/paddle:<version>-dev
Then SSH into the container using the password :code:`root`:
.. code-block:: bash
ssh -p 2202 root@localhost
One advantage of the SSH approach is that we can connect to the container from more than one terminal. For example, one terminal can run vi while another runs the Python interpreter. Another advantage is that we can run the PaddlePaddle container on a remote server and SSH into it from a laptop.
2. Production images: four variants are provided, distinguished by CPU/GPU and AVX support:
- GPU/AVX: :code:`paddlepaddle/paddle:<version>-gpu`
- GPU/no-AVX: :code:`paddlepaddle/paddle:<version>-gpu-noavx`
- CPU/AVX: :code:`paddlepaddle/paddle:<version>`
- CPU/no-AVX: :code:`paddlepaddle/paddle:<version>-noavx`
Both the CPU-only and the GPU images use the AVX instruction set, but older machines produced before 2008 do not support AVX. The following command checks whether a Linux machine supports AVX:
.. code-block:: bash
if cat /proc/cpuinfo | grep -i avx; then echo Yes; else echo No; fi
If the output is No, choose one of the no-AVX images.
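For example, a minimal sketch of running the CPU no-AVX production image (using the same :code:`<version>` placeholder as above):
.. code-block:: bash
docker run -it --rm paddlepaddle/paddle:<version>-noavx /bin/bash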
Note: when running the GPU image, do not forget to install the CUDA driver on the host and make it visible to Docker:
.. code-block:: bash
export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:<version>-gpu
3. Use the production image to release your AI application
Suppose you have finished an AI training Python program :code:`a.py`, developed on your development machine with the development image. You can then test-run it on the development machine with:
.. code-block:: bash
docker run -it -v $PWD:/work paddle /work/a.py
This assumes that all dependencies of :code:`a.py` are already available in the Paddle production container. If you need more dependencies, or want to publish an image of your application, you can write a :code:`Dockerfile` based on :code:`FROM paddledev/paddle:<version>`
to build and publish your own AI application image, as sketched below.
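A minimal, hypothetical sketch of such packaging (the image name :code:`my-paddle-app` and the :code:`python` entry command are assumptions, not a documented interface):
.. code-block:: bash
# Write a hypothetical Dockerfile layered on top of the production image
cat > Dockerfile <<'EOF'
FROM paddledev/paddle:<version>
COPY a.py /work/a.py
CMD ["python", "/work/a.py"]
EOF
docker build -t my-paddle-app .
docker run -it my-paddle-app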
Running the PaddlePaddle Book
-----------------------------
...@@ -49,11 +83,11 @@ Jupyter Notebook is an open-source web application with which you can create and share
The PaddlePaddle Book is an interactive Jupyter Notebook for users and developers.
If you want to dig deeper into deep learning, the PaddlePaddle Book is definitely your best choice.
We provide a Docker image that can run the PaddlePaddle Book directly; simply run:
.. code-block:: bash
docker run -p 8888:8888 paddlepaddle/book
Then open the following address in your browser:
...@@ -63,46 +97,25 @@ The PaddlePaddle Book is an interactive Jupyter Notebook
That's all. Enjoy your journey!
Non-AVX images
--------------
Both the CPU-only and the GPU images use the AVX instruction set, but old computers produced before 2008 do not support AVX. The following command checks whether a Linux machine supports AVX:
.. code-block:: bash
if cat /proc/cpuinfo | grep -i avx; then echo Yes; else echo No; fi
If the output is No, we need to build a non-AVX image manually from source:
.. code-block:: bash
cd ~
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
docker build --build-arg WITH_AVX=OFF -t paddle:cpu-noavx -f paddle/scripts/docker/Dockerfile .
docker build --build-arg WITH_AVX=OFF -t paddle:gpu-noavx -f paddle/scripts/docker/Dockerfile.gpu .
Developing PaddlePaddle in Docker containers
--------------------------------------------
Developers can develop PaddlePaddle inside the Docker development image. This lets them work in a consistent way across different platforms - Linux, Mac OS X, and Windows.
1. Build the development image
.. code-block:: bash
git clone --recursive https://github.com/PaddlePaddle/Paddle
cd Paddle
docker build -t paddle:dev .
Please note that by default :code:`docker build` does not import the source tree into the image and build it. If you want to do that, build the development image first and then run:
.. code-block:: bash
docker run -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "TEST=OFF" paddle:dev
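Assuming the build script in the development image honors the same environment variables (a hedged assumption based on the flags shown above), a GPU-enabled build would only change the variables:
.. code-block:: bash
# Hypothetical GPU build; relies on the nvcc toolchain listed among the development image's tools
docker run -v $PWD:/paddle -e "WITH_GPU=ON" -e "WITH_AVX=ON" -e "TEST=OFF" paddle:dev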
2. Run the development environment
...@@ -111,11 +124,11 @@ The PaddlePaddle Book is an interactive Jupyter Notebook
.. code-block:: bash
docker run -d -p 2202:22 -v $PWD:/paddle paddle:dev sshd
The command above starts a Docker container with the PaddlePaddle development environment and mounts the source tree at :code:`/paddle`.
The :code:`docker run` command above actually starts an SSHD server listening on port 2202, so we can SSH into the development container:
.. code-block:: bash
...@@ -140,14 +153,14 @@ The PaddlePaddle Book is an interactive Jupyter Notebook
Documentation
-------------
The Paddle Docker development image ships with an HTML version of the C++ source code, generated with the `woboq code browser
<https://github.com/woboq/woboq_codebrowser>`_, which makes it easy to browse the C++ sources.
As long as you give the PaddlePaddle container a name when starting it, you can run another Nginx Docker container to serve the HTML code:
.. code-block:: bash
docker run -d --name paddle-cpu-doc paddle:<version>-dev
docker run -d --volumes-from paddle-cpu-doc -p 8088:80 nginx
Then we can open a browser at http://localhost:8088/paddle/ to browse the code.
...@@ -12,44 +12,91 @@ of your hardware resource on Mac OS X and Windows.
Usage of CPU-only and GPU Images
----------------------------------
For each version of PaddlePaddle, we release two types of Docker images: a development
image and production images. The production images include a CPU-only version, a CUDA
GPU version, and their no-AVX variants. We put the Docker images on
`dockerhub.com <https://hub.docker.com/r/paddledev/paddle/>`_. You can find the
latest versions under the "tags" tab at dockerhub.com.
1. Development image: :code:`paddlepaddle/paddle:<version>-dev`
This image packs the relevant development tools and the runtime environment. Users and
developers can use this image instead of their own local computer to accomplish
development, builds, releases, documentation writing, and so on. Since different versions of
Paddle may depend on different versions of libraries and tools, pay attention to the
versions if you want to set up a local environment yourself.
The development image contains:
- gcc/clang
- nvcc
- Python
- sphinx
- woboq
- sshd
Many developers work on servers equipped with GPUs; they can SSH into the server
and run :code:`docker exec` to enter the Docker container and start working.
Alternatively, they can start the development image with an SSHD service, so that they can log into
the container directly.
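For example (the container name :code:`paddle-dev` below is only a hypothetical name given via :code:`--name` when the container was started):
.. code-block:: bash
# 'paddle-dev' is a hypothetical name of an already running development container
docker exec -it paddle-dev /bin/bash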
To run the CPU-only image as an interactive container:
.. code-block:: bash
docker run -it --rm paddledev/paddle:<version> /bin/bash
or, we can run it as a daemon container
.. code-block:: bash
docker run -d -p 2202:22 -p 8888:8888 paddledev/paddle:<version>
and SSH to this container using password :code:`root`:
.. code-block:: bash
ssh -p 2202 root@localhost
An advantage of using SSH is that we can connect to PaddlePaddle from
more than one terminal. For example, one terminal can run vi and
another the Python interpreter. Another advantage is that we
can run the PaddlePaddle container on a remote server and SSH to it
from a laptop.
2. Production images: there are multiple variants:
- GPU/AVX: :code:`paddlepaddle/paddle:<version>-gpu`
- GPU/no-AVX: :code:`paddlepaddle/paddle:<version>-gpu-noavx`
- CPU/AVX: :code:`paddlepaddle/paddle:<version>`
- CPU/no-AVX: :code:`paddlepaddle/paddle:<version>-noavx`
Please be aware that the CPU-only and the GPU images both use the AVX
instruction set, but old computers produced before 2008 do not support
AVX. The following command checks if your Linux computer supports
AVX:
.. code-block:: bash
if cat /proc/cpuinfo | grep -i avx; then echo Yes; else echo No; fi
If it doesn't, use the non-AVX images.
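For example, a minimal sketch of running the CPU no-AVX production image (same :code:`<version>` placeholder as above):
.. code-block:: bash
docker run -it --rm paddlepaddle/paddle:<version>-noavx /bin/bash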
Notice: when running the GPU image, please don't forget
to install the CUDA driver and let Docker know about it:
.. code-block:: bash
export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:<version>-gpu
3. Use the production image to release your AI application
Suppose that we have a simple application program in :code:`a.py`; we can test and run it using the production image:
.. code-block:: bash
docker run -it -v $PWD:/work paddle /work/a.py
But this works only if all dependencies of :code:`a.py` are in the production image. If this is not the case, we need to build a new Docker image from the production image with the additional dependencies installed, as sketched below.
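A minimal, hypothetical sketch (the image name :code:`my-paddle-app` and the :code:`python` entry command are assumptions, not a documented interface):
.. code-block:: bash
# Write a hypothetical Dockerfile layered on top of the production image
cat > Dockerfile <<'EOF'
FROM paddledev/paddle:<version>
COPY a.py /work/a.py
CMD ["python", "/work/a.py"]
EOF
docker build -t my-paddle-app .
docker run -it my-paddle-app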
PaddlePaddle Book
...@@ -63,11 +110,11 @@ PaddlePaddle Book is an interactive Jupyter Notebook for users and developers.
We already exposed port 8888 for this book. If you want to
dig deeper into deep learning, PaddlePaddle Book definitely is your best choice.
We provide a packaged book image; simply issue the command:
.. code-block:: bash
docker run -p 8888:8888 paddlepaddle/book
Then, copy and paste the address into your local browser:
...@@ -77,32 +124,6 @@ Then, you would back and paste the address into the local browser:
That's all. Enjoy your journey!
Non-AVX Images
--------------
Please be aware that the CPU-only and the GPU images both use the AVX
instruction set, but old computers produced before 2008 do not support
AVX. The following command checks if your Linux computer supports
AVX:
.. code-block:: bash
if cat /proc/cpuinfo | grep -i avx; then echo Yes; else echo No; fi
If it doesn't, we will need to build non-AVX images manually from
source code:
.. code-block:: bash
cd ~
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
docker build --build-arg WITH_AVX=OFF -t paddle:cpu-noavx -f paddle/scripts/docker/Dockerfile .
docker build --build-arg WITH_AVX=OFF -t paddle:gpu-noavx -f paddle/scripts/docker/Dockerfile.gpu .
Development Using Docker
------------------------
...@@ -110,22 +131,21 @@ Developers can work on PaddlePaddle using Docker. This allows
developers to work on different platforms -- Linux, Mac OS X, and
Windows -- in a consistent way.
1. Build the Development Docker Image
.. code-block:: bash
git clone --recursive https://github.com/PaddlePaddle/Paddle
cd Paddle
docker build -t paddle:dev .
Note that by default :code:`docker build` won't import the source
tree into the image and build it. If we want to do that, we need to build the
development Docker image first and then run the following command:
.. code-block:: bash
docker run -v $PWD:/paddle -e "WITH_GPU=OFF" -e "WITH_AVX=ON" -e "TEST=OFF" paddle:dev
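Assuming the build script honors the same environment variables (a hedged assumption based on the flags above), a GPU-enabled build would only change the variables:
.. code-block:: bash
# Hypothetical GPU build; relies on the nvcc toolchain included in the development image
docker run -v $PWD:/paddle -e "WITH_GPU=ON" -e "WITH_AVX=ON" -e "TEST=OFF" paddle:dev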
2. Run the Development Environment
...@@ -136,14 +156,13 @@ Windows -- in a consistent way.
.. code-block:: bash
docker run -d -p 2202:22 -p 8888:8888 -v $PWD:/paddle paddle:dev sshd
This runs a container of the development environment Docker image
with the local source tree mounted to :code:`/paddle` of the
container.
The above :code:`docker run` command actually starts
an SSHD server listening on port 2202. This allows us to log into
this container with:
...@@ -191,7 +210,7 @@ container:
.. code-block:: bash
docker run -d --name paddle-cpu-doc paddle:<version>
docker run -d --volumes-from paddle-cpu-doc -p 8088:80 nginx
......
...@@ -46,7 +46,6 @@ PaddlePaddle provides deb packages for Ubuntu 14.04.
with_double: OFF
with_python: ON
with_rdma: OFF
with_metric_learning:
with_timer: OFF
with_predict_sdk:
......
...@@ -76,8 +76,6 @@ SWIG_LINK_LIBRARIES(swig_paddle
${CMAKE_DL_LIBS}
${EXTERNAL_LIBS}
${CMAKE_THREAD_LIBS_INIT}
${RDMA_LD_FLAGS}
${RDMA_LIBS}
${START_END}
)
......
...@@ -21,7 +21,6 @@ function version(){
echo " with_double: @WITH_DOUBLE@"
echo " with_python: @WITH_PYTHON@"
echo " with_rdma: @WITH_RDMA@"
echo " with_metric_learning: @WITH_METRIC@"
echo " with_timer: @WITH_TIMER@" echo " with_timer: @WITH_TIMER@"
echo " with_predict_sdk: @WITH_PREDICT_SDK@" echo " with_predict_sdk: @WITH_PREDICT_SDK@"
} }
......