We release PaddlePaddle in the form of `Docker
<https://www.docker.com/>`_ images on `dockerhub.com
<https://hub.docker.com/r/paddledev/paddle/>`_.  Running in Docker
containers is currently the only officially-supported way to run
PaddlePaddle.  This is reasonable, as Docker now runs on all major
operating systems, including Linux, Mac OS X, and Windows.  Please be
aware that you will need to change `Docker's settings
<https://github.com/PaddlePaddle/Paddle/issues/627>`_ to make full use
of your hardware resources on Mac OS X and Windows.
Run Docker images
-----------------
For each version of PaddlePaddle, we release 4 variants of Docker images:
+-----------------+-------------+-------+
| variant         | CPU AVX     | GPU   |
+=================+=============+=======+
| cpu             | yes         | no    |
+-----------------+-------------+-------+
| cpu-noavx       | no          | no    |
+-----------------+-------------+-------+
| gpu             | yes         | yes   |
+-----------------+-------------+-------+
| gpu-noavx       | no          | yes   |
+-----------------+-------------+-------+
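As a quick illustration, assuming every variant is published under a
:code:`<variant>-latest` tag in the same way as the :code:`cpu-latest`
and :code:`gpu-latest` tags used later in this document (the
:code:`noavx` tag names here are an assumption), pulling a particular
variant would look like:

.. code-block:: bash

   # Pull the AVX-enabled, CPU-only variant of the most recent release.
   docker pull paddledev/paddle:cpu-latest

   # Pull the GPU variant for machines without AVX support (assumed tag name).
   docker pull paddledev/paddle:gpu-noavx-latest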
The :code:`cpu` and :code:`gpu` variants use the AVX instruction set,
but old computers produced before 2008 do not support AVX.  The
following command checks if your Linux computer supports AVX:

.. code-block:: bash

   if cat /proc/cpuinfo | grep -i avx; then echo Yes; else echo No; fi

On Mac OS X, we need to run

.. code-block:: bash

   sysctl -a | grep machdep.cpu.leaf7_features

If your computer does not support AVX, please use one of the
:code:`noavx` variants instead.
Once we determine the proper variant, we can compose the Docker image
tag name by appending the version number.  For example, the following
command runs the AVX-enabled, CPU-only image of the most recent version
as an interactive container:

.. code-block:: bash

   docker run -it --rm paddledev/paddle:cpu-latest /bin/bash

or, we can run it as a daemon container:

.. code-block:: bash

   docker run -d -p 2202:22 paddledev/paddle:cpu-latest
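Before connecting, standard Docker commands can confirm that the daemon
container is up:

.. code-block:: bash

   # List running containers; the paddledev/paddle one should appear here.
   docker ps

   # If it does not, inspect its logs (replace CONTAINER_ID with the ID shown by docker ps).
   docker logs CONTAINER_ID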
The default entry point of all our Docker images starts the OpenSSH
server.  The daemon container above maps the container's SSH port 22 to
port 2202 on the host computer, so we can log in to it using username
:code:`root` and password :code:`root`:

.. code-block:: bash

   ssh -p 2202 root@localhost
An advantage of using SSH is that we can connect to PaddlePaddle from
more than one terminal.  For example, one terminal can run vi while
another runs the Python interpreter.  Another advantage is that we can
run the PaddlePaddle container on a remote server and SSH to it from a
laptop.
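For instance, if the daemon container runs on a remote server, the
connection from a laptop looks the same, with the server's address in
place of :code:`localhost` (the hostname below is only a placeholder):

.. code-block:: bash

   # Connect to the container's SSH port published on the remote host.
   ssh -p 2202 root@your-server.example.com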
The above methods work with the GPU images too; just please don't
forget to install the CUDA driver and let Docker know about it:

.. code-block:: bash

   docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest
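The command above assumes that the shell variables :code:`CUDA_SO` and
:code:`DEVICES` have already been set so that the CUDA libraries and
the NVIDIA device files on the host are made visible inside the
container.  A minimal sketch of one way to define them follows; the
library path is an assumption and may differ on your distribution:

.. code-block:: bash

   # Mount every CUDA/NVIDIA shared library found on the host into the container
   # (adjust /usr/lib64 to wherever libcuda*/libnvidia* live on your system).
   export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"

   # Expose every /dev/nvidia* device file to the container.
   export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')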
Build Docker images
-------------------
Developers might want to build Docker images from their local commit or
from a tagged version.  Suppose that your local repo is at
:code:`~/Paddle`; the following steps generate a Dockerfile for each
variant from your current work, ready to be built with `docker build`:
.. code-block:: bash

   cd ~/Paddle
   ./paddle/scripts/docker/generates.sh  # Use m4 to generate Dockerfiles for each variant.
As a release engineer, you might want to build Docker images for a
certain version and publish them to dockerhub.com.  You can do this by
switching to the right Git tag, or creating a new tag, before running
`docker build`.  For example, the following commands generate the
Dockerfiles for v0.9.0:
.. code-block:: bash

   cd ~/Paddle
   git checkout tags/v0.9.0
   ./paddle/scripts/docker/generates.sh  # Use m4 to generate Dockerfiles for each variant.
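The generated Dockerfiles are what `docker build` consumes in the next
step.  The file path, tag name, and push destination below are
assumptions for illustration only; adjust them to wherever the script
actually writes the Dockerfiles and to the repository you publish to:

.. code-block:: bash

   # Build the cpu variant from its generated Dockerfile (assumed location and tag).
   docker build -t paddledev/paddle:cpu-0.9.0 -f paddle/scripts/docker/Dockerfile .

   # Publish the image to dockerhub.com (requires push access to the repository).
   docker push paddledev/paddle:cpu-0.9.0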