Commit 6708c2cb authored by Travis CI

Deploy to GitHub Pages: 8c1ea31e

Parent 2fb6677f
......@@ -23,7 +23,7 @@ Docker is simple as long as we understand a few basic concepts:
.. code-block:: bash
docker pull paddlepaddle/paddle:0.10.0rc2
docker pull paddlepaddle/paddle:0.10.0
to download a Docker image, paddlepaddle/paddle in this example,
from Dockerhub.com.
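To confirm the download, one can list the local copies of the image (generic Docker commands, not specific to PaddlePaddle; the tag shown is illustrative):

.. code-block:: bash

   # list the locally available tags of the paddlepaddle/paddle repository
   docker images paddlepaddle/paddle

   # inspect the metadata of the tag pulled above
   docker inspect paddlepaddle/paddle:0.10.0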
......@@ -35,7 +35,7 @@ Docker is simple as long as we understand a few basic concepts:
.. code-block:: bash
docker run paddlepaddle/paddle:0.10.0rc2
docker run paddlepaddle/paddle:0.10.0
to start a container to run a Docker image, paddlepaddle/paddle in this example.
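A container keeps running only as long as its main process does. Generic Docker commands (again, not specific to PaddlePaddle) can be used to check on containers:

.. code-block:: bash

   # list running containers
   docker ps

   # list all containers, including stopped ones
   docker ps -a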
......@@ -62,7 +62,7 @@ of PaddlePaddle, we release both of them. The production image includes
a CPU-only version and a CUDA GPU version, plus their no-AVX variants.
We put the docker images on `dockerhub.com
<https://hub.docker.com/r/paddledev/paddle/>`_. You can find the
<https://hub.docker.com/r/paddlepaddle/paddle/tags/>`_. You can find the
latest versions under the "tags" tab at dockerhub.com. If you are in
China, you can use our Docker image registry mirror to speed up the
download process. To use it, please replace all paddlepaddle/paddle in
......@@ -89,7 +89,7 @@ the commands to docker.paddlepaddle.org/paddle.
.. code-block:: bash
docker run -it --rm paddlepaddle/paddle:0.10.0rc2 /bin/bash
docker run -it --rm paddlepaddle/paddle:0.10.0 /bin/bash
The above method works with the GPU image too -- the recommended way is
using `nvidia-docker <https://github.com/NVIDIA/nvidia-docker>`_.
......@@ -101,7 +101,7 @@ the commands to docker.paddlepaddle.org/paddle.
.. code-block:: bash
nvidia-docker run -it --rm paddlepaddle/paddle:0.10.0rc2-gpu /bin/bash
nvidia-docker run -it --rm paddlepaddle/paddle:0.10.0-gpu /bin/bash
2. development image :code:`paddlepaddle/paddle:<version>-dev`
......@@ -149,13 +149,13 @@ Run the program using docker:
.. code-block:: bash
docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2 python /workspace/example.py
docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0 python /workspace/example.py
Or, if you are using a GPU for training:
.. code-block:: bash
nvidia-docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2-gpu python /workspace/example.py
nvidia-docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0-gpu python /workspace/example.py
The above commands will start a Docker container by running :code:`python
/workspace/example.py`. It will stop once :code:`python
......@@ -166,7 +166,7 @@ run PaddlePaddle program interactively:
.. code-block:: bash
docker run -it -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2 /bin/bash
docker run -it -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0 /bin/bash
# now we are inside docker container
cd /workspace
python example.py
......@@ -175,7 +175,7 @@ Running with GPU is identical:
.. code-block:: bash
nvidia-docker run -it -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2-gpu /bin/bash
nvidia-docker run -it -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0-gpu /bin/bash
# now we are inside docker container
cd /workspace
python example.py
......
......@@ -199,7 +199,7 @@ of your hardware resource on Mac OS X and Windows.</p>
</pre></div>
</div>
<p>to list all images in the system. We can also run</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker pull paddlepaddle/paddle:0.10.0rc2
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker pull paddlepaddle/paddle:0.10.0
</pre></div>
</div>
<p>to download a Docker image, paddlepaddle/paddle in this example,
......@@ -209,7 +209,7 @@ from Dockerhub.com.</p>
&#8220;process&#8221; that runs the image. Indeed, a container is exactly an
operating system process, but with a virtualized filesystem, network
port space, and other virtualized environment. We can type</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run paddlepaddle/paddle:0.10.0rc2
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run paddlepaddle/paddle:0.10.0
</pre></div>
</div>
<p>to start a container to run a Docker image, paddlepaddle/paddle in this example.</p>
......@@ -235,7 +235,7 @@ Docker image as well, called the production image, which contains the complete
runtime environment that PaddlePaddle needs. For each version
of PaddlePaddle, we release both of them. The production image includes
a CPU-only version and a CUDA GPU version, plus their no-AVX variants.</p>
<p>We put the docker images on <a class="reference external" href="https://hub.docker.com/r/paddledev/paddle/">dockerhub.com</a>. You can find the
<p>We put the docker images on <a class="reference external" href="https://hub.docker.com/r/paddlepaddle/paddle/tags/">dockerhub.com</a>. You can find the
latest versions under the &#8220;tags&#8221; tab at dockerhub.com. If you are in
China, you can use our Docker image registry mirror to speed up the
download process. To use it, please replace all paddlepaddle/paddle in
......@@ -256,14 +256,14 @@ supports AVX:</p>
</pre></div>
</div>
<p>To run the CPU-only image as an interactive container:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it --rm paddlepaddle/paddle:0.10.0rc2 /bin/bash
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it --rm paddlepaddle/paddle:0.10.0 /bin/bash
</pre></div>
</div>
<p>The above method works with the GPU image too &#8211; the recommended way is
using <a class="reference external" href="https://github.com/NVIDIA/nvidia-docker">nvidia-docker</a>.</p>
<p>Please install nvidia-docker first following this <a class="reference external" href="https://github.com/NVIDIA/nvidia-docker#quick-start">tutorial</a>.</p>
<p>Now you can run a GPU image:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run -it --rm paddlepaddle/paddle:0.10.0rc2-gpu /bin/bash
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run -it --rm paddlepaddle/paddle:0.10.0-gpu /bin/bash
</pre></div>
</div>
</li>
......@@ -304,11 +304,11 @@ programs. The typical workflow will be as follows:</p>
</pre></div>
</div>
<p>Run the program using docker:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2 python /workspace/example.py
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0 python /workspace/example.py
</pre></div>
</div>
<p>Or, if you are using a GPU for training:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2-gpu python /workspace/example.py
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0-gpu python /workspace/example.py
</pre></div>
</div>
<p>The above commands will start a Docker container by running <code class="code docutils literal"><span class="pre">python</span>
......@@ -316,14 +316,14 @@ programs. The typical workflow will be as follows:</p>
<span class="pre">/workspace/example.py</span></code> finishes.</p>
<p>Another way is to tell docker to start a <code class="code docutils literal"><span class="pre">/bin/bash</span></code> session and
run a PaddlePaddle program interactively:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2 /bin/bash
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0 /bin/bash
<span class="c1"># now we are inside docker container</span>
<span class="nb">cd</span> /workspace
python example.py
</pre></div>
</div>
<p>Running with GPU is identical:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run -it -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0rc2-gpu /bin/bash
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run -it -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0-gpu /bin/bash
<span class="c1"># now we are inside docker container</span>
<span class="nb">cd</span> /workspace
python example.py
......
This source diff is too large to display; you can view the blob instead.
......@@ -12,13 +12,13 @@ all the build tools that PaddlePaddle needs. The compiled PaddlePaddle is also packaged
into an image, called the production image, which contains the complete environment
needed to run PaddlePaddle. Every time a new version of PaddlePaddle is released, the
corresponding production image and development image are released as well. The
production image comes in a CPU-only version and a GPU version, plus their no-AVX
variants. We publish the latest Docker images on
`dockerhub.com <https://hub.docker.com/r/paddledev/paddle/>`_ and the latest
`dockerhub.com <https://hub.docker.com/r/paddlepaddle/paddle/tags/>`_ and the latest
Paddle image versions can be found under the "tags" tab. To make it easier for
developers in China to download Docker images, we provide a registry mirror inside
China. If you are in China, please replace paddlepaddle/paddle in the commands in this
document with docker.paddlepaddle.org/paddle.
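For example, a pull through the mirror would look like the following sketch, using the substitution described above and the 0.10.0 tag used elsewhere in this document:

.. code-block:: bash

   docker pull docker.paddlepaddle.org/paddle:0.10.0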
1. Development image: :code:`paddlepaddle/paddle:<version>-dev`
1. Development image: :code:`paddlepaddle/paddle:0.10.0-dev`
This image contains the Paddle development tools together with the build and runtime environment. Instead of configuring a local environment, users can work inside the development image to develop, build, release, and write documentation.
Since different Paddle versions may require different dependencies and tools, the version needs to be taken into account when setting up a development environment by hand.
......@@ -37,13 +37,13 @@ docker.paddlepaddle.org/paddle。
.. code-block:: bash
docker run -it --rm paddlepaddle/paddle:<version>-dev /bin/bash
docker run -it --rm paddlepaddle/paddle:0.10.0-dev /bin/bash
Alternatively, you can run the container as a background process:
.. code-block:: bash
docker run -d -p 2202:22 -p 8888:8888 paddledev/paddle:<version>-dev
docker run -d -p 2202:22 -p 8888:8888 paddledev/paddle:0.10.0-dev
Then SSH into the container with the password :code:`root`:
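A sketch of such a login, assuming the 2202:22 port mapping from the command above:

.. code-block:: bash

   ssh -p 2202 root@localhost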
......@@ -73,7 +73,7 @@ docker.paddlepaddle.org/paddle。
.. code-block:: bash
nvidia-docker run -it --rm paddledev/paddle:0.10.0rc1-gpu /bin/bash
nvidia-docker run -it --rm paddledev/paddle:0.10.0-gpu /bin/bash
Note: if you run into problems with nvidia-docker, you can also try the older method below, but we do not recommend it:
......@@ -81,7 +81,7 @@ docker.paddlepaddle.org/paddle。
export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:<version>-gpu
docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:0.10.0-gpu
3. Run and publish your AI programs
......@@ -98,7 +98,7 @@ docker.paddlepaddle.org/paddle。
nvidia-docker run -it -v $PWD:/work paddle /work/a.py
Here we assume that every dependency of `a.py` is already available inside the Paddle runtime container. If you need extra dependencies, or want to publish an image of your application, you can write a `Dockerfile` that starts with `FROM paddledev/paddle:<version>`
Here we assume that every dependency of `a.py` is already available inside the Paddle runtime container. If you need extra dependencies, or want to publish an image of your application, you can write a `Dockerfile` that starts with `FROM paddledev/paddle:0.10.0`
to build and publish your own AI program image.
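A minimal sketch of such a Dockerfile and build, assuming `a.py` sits in the current directory; the image name `my-paddle-app` is only illustrative:

.. code-block:: bash

   # write a Dockerfile based on the Paddle production image
   cat > Dockerfile <<'EOF'
   FROM paddledev/paddle:0.10.0
   COPY a.py /work/a.py
   CMD ["python", "/work/a.py"]
   EOF

   # build (and, if desired, publish) your own AI program image
   docker build -t my-paddle-app .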
Running the PaddlePaddle Book
......@@ -177,7 +177,7 @@ The Paddle Docker development image ships with an HTML version of the C++ source code generated by `woboq code browser
.. code-block:: bash
docker run -d --name paddle-cpu-doc paddle:<version>-dev
docker run -d --name paddle-cpu-doc paddle:0.10.0-dev
docker run -d --volumes-from paddle-cpu-doc -p 8088:80 nginx
Then we can open a browser at http://localhost:8088/paddle/ to browse the code.
......@@ -200,13 +200,13 @@ all the build tools that PaddlePaddle needs. The compiled PaddlePaddle is also packaged
into an image, called the production image, which contains the complete environment
needed to run PaddlePaddle. Every time a new version of PaddlePaddle is released, the
corresponding production image and development image are released as well. The
production image comes in a CPU-only version and a GPU version, plus their no-AVX
variants. We publish the latest Docker images on
<a class="reference external" href="https://hub.docker.com/r/paddledev/paddle/">dockerhub.com</a> and the latest
<a class="reference external" href="https://hub.docker.com/r/paddlepaddle/paddle/tags/">dockerhub.com</a> and the latest
Paddle image versions can be found under the &#8220;tags&#8221; tab. To make it easier for
developers in China to download Docker images, we provide a registry mirror inside
China. If you are in China, please replace paddlepaddle/paddle in the commands in this
document with docker.paddlepaddle.org/paddle.</p>
<ol class="arabic">
<li><p class="first">开发镜像:<code class="code docutils literal"><span class="pre">paddlepaddle/paddle:&lt;version&gt;-dev</span></code></p>
<li><p class="first">开发镜像:<code class="code docutils literal"><span class="pre">paddlepaddle/paddle:0.10.0-dev</span></code></p>
<p>这个镜像包含了Paddle相关的开发工具以及编译和运行环境。用户可以使用开发镜像代替配置本地环境,完成开发,编译,发布,
文档编写等工作。由于不同的Paddle的版本可能需要不同的依赖和工具,所以如果需要自行配置开发环境需要考虑版本的因素。
开发镜像包含了以下工具:</p>
......@@ -221,11 +221,11 @@ docker.paddlepaddle.org/paddle。</p>
<p>Many developers work on remote servers with GPUs. They can SSH into such a server and run :code:`docker exec` to enter the development image and start working,
or they can start an SSHD service inside the development image so that they can log into the image directly for development:</p>
<p>Run the development image as an interactive container:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it --rm paddlepaddle/paddle:&lt;version&gt;-dev /bin/bash
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it --rm paddlepaddle/paddle:0.10.0-dev /bin/bash
</pre></div>
</div>
<p>Alternatively, you can run the container as a background process:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -d -p <span class="m">2202</span>:22 -p <span class="m">8888</span>:8888 paddledev/paddle:&lt;version&gt;-dev
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -d -p <span class="m">2202</span>:22 -p <span class="m">8888</span>:8888 paddledev/paddle:0.10.0-dev
</pre></div>
</div>
<p>Then SSH into the container with the password <code class="code docutils literal"><span class="pre">root</span></code>:</p>
......@@ -248,13 +248,13 @@ docker.paddlepaddle.org/paddle。</p>
<p>If the output is No, you need to use the no-AVX image.</p>
<p>The method above also works with the GPU image; just remember to install the latest GPU driver on the host machine beforehand.
To make sure the GPU driver works correctly inside the image, we recommend running the image with [nvidia-docker](<a class="reference external" href="https://github.com/NVIDIA/nvidia-docker">https://github.com/NVIDIA/nvidia-docker</a>).</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run -it --rm paddledev/paddle:0.10.0rc1-gpu /bin/bash
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run -it --rm paddledev/paddle:0.10.0-gpu /bin/bash
</pre></div>
</div>
<p>Note: if you run into problems with nvidia-docker, you can also try the older method below, but we do not recommend it:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span><span class="nb">export</span> <span class="nv">CUDA_SO</span><span class="o">=</span><span class="s2">&quot;</span><span class="k">$(</span><span class="se">\l</span>s /usr/lib64/libcuda* <span class="p">|</span> xargs -I<span class="o">{}</span> <span class="nb">echo</span> <span class="s1">&#39;-v {}:{}&#39;</span><span class="k">)</span><span class="s2"> </span><span class="k">$(</span><span class="se">\l</span>s /usr/lib64/libnvidia* <span class="p">|</span> xargs -I<span class="o">{}</span> <span class="nb">echo</span> <span class="s1">&#39;-v {}:{}&#39;</span><span class="k">)</span><span class="s2">&quot;</span>
<span class="nb">export</span> <span class="nv">DEVICES</span><span class="o">=</span><span class="k">$(</span><span class="se">\l</span>s /dev/nvidia* <span class="p">|</span> xargs -I<span class="o">{}</span> <span class="nb">echo</span> <span class="s1">&#39;--device {}:{}&#39;</span><span class="k">)</span>
docker run <span class="si">${</span><span class="nv">CUDA_SO</span><span class="si">}</span> <span class="si">${</span><span class="nv">DEVICES</span><span class="si">}</span> -it paddledev/paddle:&lt;version&gt;-gpu
docker run <span class="si">${</span><span class="nv">CUDA_SO</span><span class="si">}</span> <span class="si">${</span><span class="nv">DEVICES</span><span class="si">}</span> -it paddledev/paddle:0.10.0-gpu
</pre></div>
</div>
</li>
......@@ -267,7 +267,7 @@ docker run <span class="si">${</span><span class="nv">CUDA_SO</span><span class=
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run -it -v <span class="nv">$PWD</span>:/work paddle /work/a.py
</pre></div>
</div>
<p>Here we assume that every dependency of `a.py` is already available inside the Paddle runtime container. If you need extra dependencies, or want to publish an image of your application, you can write a `Dockerfile` that starts with `FROM paddledev/paddle:&lt;version&gt;`
<p>Here we assume that every dependency of `a.py` is already available inside the Paddle runtime container. If you need extra dependencies, or want to publish an image of your application, you can write a `Dockerfile` that starts with `FROM paddledev/paddle:0.10.0`
to build and publish your own AI program image.</p>
</li>
</ol>
......@@ -325,7 +325,7 @@ docker build -t paddle:dev .
<h2>Documentation<a class="headerlink" href="#id4" title="Permalink to this headline"></a></h2>
<p>The Paddle Docker development image ships with an HTML version of the C++ source code, generated with <a class="reference external" href="https://github.com/woboq/woboq_codebrowser">woboq code browser</a>, which makes it easy to browse the C++ sources.</p>
<p>As long as you give the PaddlePaddle container a name when you start it in Docker, you can run another Nginx Docker image to serve that HTML code:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -d --name paddle-cpu-doc paddle:&lt;version&gt;-dev
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -d --name paddle-cpu-doc paddle:0.10.0-dev
docker run -d --volumes-from paddle-cpu-doc -p <span class="m">8088</span>:80 nginx
</pre></div>
</div>
......
This source diff is too large to display; you can view the blob instead.