<p>to download a Docker image, paddlepaddle/paddle in this example, from Dockerhub.com.</p>
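<p>For reference, a minimal pull of this image (the 0.10.0 tag matches the docker run commands below) looks like:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span># download the CPU-only production image from Docker Hub
docker pull paddlepaddle/paddle:0.10.0
</pre></div>
</div>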
...
“process” that runs the image. Indeed, a container is exactly an
operating system process, but with a virtualized filesystem, network
port space, and other virtualized resources. We can type</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run paddlepaddle/paddle:0.10.0
</pre></div>
</div>
<p>to start a container to run a Docker image, paddlepaddle/paddle in this example.</p>
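<p>To see the “container is a process” point concretely, one can start the same image in the background and inspect it from the host; the container name below is arbitrary:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span># start a throw-away container that just sleeps for a minute
docker run -d --name paddle_demo paddlepaddle/paddle:0.10.0 sleep 60
docker top paddle_demo   # shows the host-side process backing the container
docker rm -f paddle_demo # clean up
</pre></div>
</div>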
...
Docker image as well, called the production image; it contains the full
runtime environment needed to run PaddlePaddle. For each version of
PaddlePaddle we release both images. The production image comes in a
CPU-only variant and a CUDA GPU variant, each also available as a no-AVX build.</p>
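<p>For illustration, each variant corresponds to a Docker tag. The CPU and GPU tags below appear in the commands on this page; the no-AVX tag names are an assumption here:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker pull paddlepaddle/paddle:0.10.0             # CPU-only production image
docker pull paddlepaddle/paddle:0.10.0-gpu         # CUDA GPU production image
docker pull paddlepaddle/paddle:0.10.0-noavx       # no-AVX CPU build (tag name assumed)
docker pull paddlepaddle/paddle:0.10.0-gpu-noavx   # no-AVX GPU build (tag name assumed)
</pre></div>
</div>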
<p>We put the Docker images on <a class="reference external" href="https://hub.docker.com/r/paddlepaddle/paddle/tags/">dockerhub.com</a>. You can find the
latest versions under the “tags” tab at dockerhub.com. If you are in
China, you can use our Docker image registry mirror to speed up the
download process. To use it, please replace all paddlepaddle/paddle in
...
</pre></div>
</div>
<p>To run the CPU-only image as an interactive container:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run -it --rm paddlepaddle/paddle:0.10.0 /bin/bash
</pre></div>
</div>
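<p>Once inside, a quick sanity check that the Python package is importable (this assumes the image ships the v2 Python API) is:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span># no output means the import succeeded
python -c 'import paddle.v2 as paddle'
</pre></div>
</div>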
<p>The method above works with the GPU image too; the recommended way is
to use <a class="reference external" href="https://github.com/NVIDIA/nvidia-docker">nvidia-docker</a>.</p>
<p>Please install nvidia-docker first by following this <a class="reference external" href="https://github.com/NVIDIA/nvidia-docker#quick-start">tutorial</a>.</p>
<p>Now you can run a GPU image:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run -it --rm paddlepaddle/paddle:0.10.0-gpu /bin/bash
</pre></div>
</div>
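<p>Inside that container, <code class="code docutils literal"><span class="pre">nvidia-smi</span></code> is a quick way to confirm the GPUs are visible; nvidia-docker makes the host driver utilities available to the container:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span># lists the GPUs the container can see
nvidia-smi
</pre></div>
</div>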
</li>
...
</pre></div>
</div>
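<p>The commands below mount a local <code class="code docutils literal"><span class="pre">~/workspace</span></code> directory that holds the training script; a minimal way to set one up, with a placeholder <code class="code docutils literal"><span class="pre">example.py</span></code>, might be:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>mkdir -p ~/workspace
# a trivial placeholder; replace it with a real PaddlePaddle training script
echo 'print("hello from example.py")' > ~/workspace/example.py
</pre></div>
</div>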
<p>Run the program using docker:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0 python /workspace/example.py
</pre></div>
</div>
<p>Or, if you are using a GPU for training:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run --rm -v ~/workspace:/workspace paddlepaddle/paddle:0.10.0-gpu python /workspace/example.py
</pre></div>
</div>
<p>The commands above start a Docker container by running <code class="code docutils literal"><span class="pre">python</span>
...
docker run <span class="si">${</span><span class="nv">CUDA_SO</span><span class="si">}</span> <span class="si">${</span><span class="nv">DEVICES</span><span class="si">}</span> -it paddlepaddle/paddle:0.10.0-gpu
</pre></div>
</div>
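<p>This assumes <code class="code docutils literal"><span class="pre">CUDA_SO</span></code> and <code class="code docutils literal"><span class="pre">DEVICES</span></code> were exported beforehand; a sketch of how they are typically built (library paths vary across distributions) is:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span># mount every host-side CUDA/NVIDIA library into the container
export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
# expose every /dev/nvidia* device node to the container
export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
</pre></div>
</div>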
</li>
...
<div class="highlight-bash"><div class="highlight"><pre><span></span>nvidia-docker run -it -v <span class="nv">$PWD</span>:/work paddle /work/a.py