Please be aware that you will need to change `Docker's settings
<https://github.com/PaddlePaddle/Paddle/issues/627>`_ to make full use
of your hardware resources on Mac OS X and Windows.

Working With Docker
-------------------
Docker is simple as long as we understand a few basic concepts:

- *image*: A Docker image is a pack of software. It can contain one or more programs and all of their dependencies. For example, the PaddlePaddle Docker image includes pre-built PaddlePaddle, Python, and many Python packages, so we can run the image directly instead of installing all of this software ourselves. We can type

  .. code-block:: bash

     docker images

  to list all images in the system, and we can run

  .. code-block:: bash

     docker pull paddlepaddle/paddle:0.10.0rc2

  to download a Docker image, paddlepaddle/paddle in this example,
  from Dockerhub.com.
- *container*: If we consider a Docker image a program, a container is a
  "process" that runs the image. Indeed, a container is exactly an
  operating system process, but with a virtualized filesystem, network
  port space, and other virtualized environment. We can type

  .. code-block:: bash

     docker run paddlepaddle/paddle:0.10.0rc2

  to start a container running a Docker image, paddlepaddle/paddle in this example.
- *volume*: By default, a Docker container has an isolated file system
  namespace, so we cannot see files on the host file system from inside
  it. By mounting a *volume*, files on the host become visible inside
  the container. The following command mounts the current directory
  into :code:`/data` inside a container started from the debian image,
  and runs :code:`ls /data` in it:

  .. code-block:: bash

     docker run --rm -v $(pwd):/data debian ls /data
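The :code:`-v` flag in the command above maps a host path to a container path, and getting the :code:`host:container` pair right is a common stumbling block. As a small illustration, the following hypothetical helper (not part of Docker or PaddlePaddle) composes that argument:

.. code-block:: python

   import os

   def volume_flag(host_dir, container_dir):
       """Build the -v argument for `docker run`.

       Docker requires an absolute host path, so expand `~` and
       relative paths before joining with the container path.
       """
       host = os.path.abspath(os.path.expanduser(host_dir))
       return ["-v", "{}:{}".format(host, container_dir)]

   # Recreate the command from the volume example above.
   cmd = (["docker", "run", "--rm"]
          + volume_flag(".", "/data")
          + ["debian", "ls", "/data"])

Composing the command as a list rather than a single string also makes it safe to pass to :code:`subprocess.run` without shell quoting issues.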
Usage of CPU-only and GPU Images
--------------------------------

For each version of PaddlePaddle, we release two types of Docker images:
a development image and production images. The production images include
a CPU-only version, a CUDA GPU version, and their no-AVX variants. We put
the Docker images on `dockerhub.com
<https://hub.docker.com/r/paddledev/paddle/>`_. You can find the
latest versions under the "tags" tab at dockerhub.com.

1. development image :code:`paddlepaddle/paddle:<version>-dev`
   This image packs the development tools and the runtime environment.
   Users and developers can use this image instead of their own local
   computers to accomplish development, building, releasing, documentation
   writing, etc. Since different versions of PaddlePaddle may depend on
   different versions of libraries and tools, if you want to set up a
   local environment you must pay attention to those versions.
   The development image contains:

   - gcc/clang
   - nvcc
   - Python
   - sphinx
   - woboq
   - sshd
   Many developers work on servers with GPUs. They can use ssh to log in
   to such a server and run :code:`docker exec` to enter the Docker
   container and start working. They can also start a development Docker
   image with the SSHD service, so that they can log in to the container
   directly. An advantage of using SSH is that we can connect to the
   container from more than one terminal: for example, one terminal
   running vi and another running the Python interpreter. Another
   advantage is that we can run the PaddlePaddle container on a remote
   server and SSH to it from a laptop.
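The SSH workflow above can be sketched as follows; the host port (2202) and the idea of publishing the container's port 22 with :code:`-p` are assumptions for illustration, not something the PaddlePaddle images mandate:

.. code-block:: python

   def sshd_run_command(version, host_port=2202):
       """Compose a `docker run` command that starts the development
       image in the background with its SSH daemon published on a host
       port, so one can `ssh -p <host_port>` into the container."""
       image = "paddlepaddle/paddle:{}-dev".format(version)
       return ["docker", "run", "-d",
               "-p", "{}:22".format(host_port), image]

   print(" ".join(sshd_run_command("0.10.0rc2")))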
2. production image, which might have multiple variants

   To run the CPU-only image as an interactive container:

   .. code-block:: bash

      docker run -it --rm paddledev/paddle:<version> /bin/bash

   To run the GPU image, we can expose the CUDA devices manually:

   .. code-block:: bash

      docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:<version>-gpu

   The recommended way to use the GPU image, however, is `nvidia-docker
   <https://github.com/NVIDIA/nvidia-docker>`_. Please install
   nvidia-docker first following this `tutorial
   <https://github.com/NVIDIA/nvidia-docker#quick-start>`_. Then you can
   run a GPU image with:

   .. code-block:: bash

      nvidia-docker run -it --rm paddlepaddle/paddle:0.10.0rc2-gpu /bin/bash

3. Use a production image to release your AI application

   Suppose that we have a simple application program :code:`a.py`; we can test and run it using the production image:

   .. code-block:: bash

      docker run -it -v $PWD:/work paddle /work/a.py

   But this works only if all dependencies of :code:`a.py` are in the production image. If this is not the case, we need to build a new Docker image from the production image, with the additional dependencies installed.
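One way to add the missing dependencies is to derive a new image from the production image. The following Dockerfile is a minimal sketch; the package names and the :code:`/work` layout are assumptions for illustration, not PaddlePaddle conventions:

.. code-block:: dockerfile

   # Hypothetical Dockerfile: extend the production image with the
   # extra Python packages that a.py needs.
   FROM paddlepaddle/paddle:0.10.0rc2
   RUN pip install scipy pillow

   # Bake the application into the image; alternatively, keep mounting
   # it with `-v $PWD:/work` at run time.
   COPY a.py /work/a.py
   CMD ["python", "/work/a.py"]

Build it with :code:`docker build -t my-paddle-app .` in the directory containing :code:`a.py` and the Dockerfile, then run :code:`docker run --rm my-paddle-app`.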
A suggested workflow is to edit programs on the host with your favorite editor:

.. code-block:: bash

   emacs ~/workspace/example.py

and run them using Docker:

.. code-block:: bash

   docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2 python /workspace/example.py

Or, if you are using a GPU for training:

.. code-block:: bash

   nvidia-docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2-gpu python /workspace/example.py

The above commands start a Docker container that runs :code:`python
/workspace/example.py`; the container stops once :code:`python
/workspace/example.py` finishes.

Another way is to tell Docker to start a :code:`/bin/bash` session and
run the PaddlePaddle program interactively:

.. code-block:: bash

   docker run -it -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2 /bin/bash
   # now we are inside the docker container
   cd /workspace
   python example.py

Running with a GPU is identical:

.. code-block:: bash

   nvidia-docker run -it -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2-gpu /bin/bash
   # now we are inside the docker container
   cd /workspace
   python example.py

PaddlePaddle Book
-----------------

The Jupyter Notebook is an open-source web application that allows
you to create and share documents that contain live code, equations,
visualizations and explanatory text in a single browser.

PaddlePaddle Book is an interactive Jupyter Notebook for users and
developers. We already expose port 8888 for this book. If you want to
dig deeper into deep learning, PaddlePaddle Book is definitely your
best choice.

We provide a packaged book image; simply issue the command:

.. code-block:: bash

   docker run -p 8888:8888 paddlepaddle/book

Then, copy and paste the printed address into your local browser:
Note that by default :code:`docker build` would not import the source
tree into the image and build it. If we want to do that, we need to use
the development Docker image and run the build commands inside it.

Develop PaddlePaddle or Train Model Using C++ API
-------------------------------------------------

We will be using PaddlePaddle development image since it contains all