.. code-block:: bash

   docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:<version>-gpu

3. Use the production image to release your AI application

   Suppose that we have a simple application program in :code:`a.py`; we can test and run it using the production image:

   .. code-block:: bash

      docker run -it -v $PWD:/work paddle /work/a.py

   But this works only if all dependencies of :code:`a.py` are in the production image. If this is not the case, we need to build a new Docker image from the production image, with the additional dependencies installed.
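As a minimal sketch of that last step (the extra package :code:`requests`, the image tag, and the :code:`my-paddle-app` name are illustrative assumptions, not from the docs):

```bash
# Write a Dockerfile that starts from the production image and adds
# one extra dependency; "requests" is an assumed example package.
cat > Dockerfile <<'EOF'
FROM paddledev/paddle:<version>
RUN pip install requests
EOF

# Build and run (shown as comments; they need a Docker daemon):
#   docker build -t my-paddle-app .
#   docker run -it -v $PWD:/work my-paddle-app /work/a.py
```

The resulting image behaves like the production image but carries the application's dependencies with it.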
PaddlePaddle Book
-----------------
The Jupyter Notebook is an open-source web application that allows
you to create and share documents that contain live code, equations,
visualizations, and explanatory text in a single browser.

PaddlePaddle Book is an interactive Jupyter Notebook for users and developers.
We already exposed port 8888 for this book. If you want to
dig deeper into deep learning, PaddlePaddle Book is definitely your best choice.

We provide a packaged Book image; simply issue the command:
.. code-block:: bash

   docker run -p 8888:8888 paddlepaddle/book
Then copy and paste the printed address into your local browser:

.. code-block:: text

   http://localhost:8888/
That's all. Enjoy your journey!
Non-AVX Images
--------------
Please be aware that both the CPU-only and the GPU images use the AVX
instruction set, but CPUs produced before roughly 2011 do not support
AVX. The following command checks whether your Linux computer supports
AVX:
.. code-block:: bash

   if cat /proc/cpuinfo | grep -i avx; then echo Yes; else echo No; fi
If it doesn't, we will need to build the non-AVX images manually from
source code.

We put the Docker images (the CPU version, the GPU version, and their
no-AVX variants) on `dockerhub.com <https://hub.docker.com/r/paddledev/paddle/>`_,
where the latest images, such as :code:`paddledev/paddle:0.10.0rc1-cpu`
and :code:`paddledev/paddle:0.10.0rc1-gpu`, are generated automatically.
You can find the latest versions under the "tags" tab at dockerhub.com.
1. Development image: :code:`paddlepaddle/paddle:<version>-dev`

   This image packs the related development tools and the runtime
   environment. Users and developers can use this image instead of
   their own local computers to accomplish development, building,
   releasing, document writing, etc. Since different versions of
   PaddlePaddle may depend on different versions of libraries and
   tools, you must pay attention to these versions if you want to
   set up a local environment instead.

   The development image contains:

   - gcc/clang
   - nvcc
   - Python
   - sphinx
   - woboq
   - sshd

   Many developers use servers with GPUs; they can use SSH to log in
   to the server and run :code:`docker exec` to enter the Docker
   container and start their work. They can also start a development
   Docker image with the SSHD service, so they can log in to the
   container and start working.
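The :code:`docker exec` workflow above might look like the following sketch; the container name :code:`paddle-dev` and the :code:`sleep infinity` keep-alive are assumptions, so the block only composes and prints the commands instead of requiring a running Docker daemon:

```bash
# Compose the docker-exec workflow described above; "paddle-dev" is
# an assumed container name. Commands are printed, not executed.
START_CMD='docker run -d --name paddle-dev paddledev/paddle:<version>-dev sleep infinity'
ENTER_CMD='docker exec -it paddle-dev /bin/bash'
printf '%s\n%s\n' "$START_CMD" "$ENTER_CMD"
```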
   To run the CPU-only image as an interactive container:

   .. code-block:: bash

      docker run -it --rm paddledev/paddle:<version> /bin/bash

   or, we can run it as a daemon container:

   .. code-block:: bash

      docker run -d -p 2202:22 -p 8888:8888 paddledev/paddle:<version>

   and SSH into this container using the password :code:`root`:
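Given the port mapping above (host 2202 to the container's 22), the login would plausibly look as follows; the exact command line is an assumption, while the port and the :code:`root` password come from the text. The block only prints the command:

```bash
# SSH login for the daemon container above: host port 2202 forwards
# to sshd on port 22 inside the container; the password is "root".
SSH_LOGIN='ssh root@localhost -p 2202'
echo "$SSH_LOGIN"   # printed for reference; needs the container running
```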
   An advantage of SSH is that we can connect to the container from
   more than one terminal. For example, one terminal can run vi and
   another the Python interpreter. Another advantage is that we can
   run the PaddlePaddle container on a remote server and SSH to it
   from a laptop.
   The methods above work with the GPU image too; just please don't
   forget to pass the CUDA libraries and devices to :code:`docker run`,
   as in the GPU command below.

2. Production images; these might have multiple variants:

   .. code-block:: bash

      docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:<version>-gpu
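The variables :code:`${CUDA_SO}` and :code:`${DEVICES}` are not defined in this excerpt; a plausible sketch (an assumption consistent with the flags the command expects) collects the host's CUDA libraries and NVIDIA device nodes:

```bash
# Collect host CUDA libraries as -v mounts and NVIDIA device nodes as
# --device flags; both stay empty on machines without a CUDA driver.
CUDA_SO=$(ls /usr/lib64/libcuda* 2>/dev/null | xargs -I{} echo "-v {}:{}" | tr '\n' ' ')
DEVICES=$(ls /dev/nvidia* 2>/dev/null | xargs -I{} echo "--device {}:{}" | tr '\n' ' ')
CMD="docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:<version>-gpu"
echo "$CMD"   # printed for reference; running it needs Docker and a GPU
```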
Note that by default :code:`docker build` wouldn't import the source
tree into the image and build it. If we want to do that, we need to
use the development Docker image and then run the following command:
Docker containers are currently the only officially supported way to
run PaddlePaddle, because Docker runs on all major operating systems,
including Linux, Mac OS X, and Windows. Please note that you need to
change the `Docker settings <https://github.com/PaddlePaddle/Paddle/issues/627>`_
to make full use of hardware resources on Mac OS X and Windows.