docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest
docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:0.10.0rc1-gpu
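Here ${CUDA_SO} and ${DEVICES} are expected to hold the -v and --device flags that expose the host's CUDA libraries and GPU device nodes to the container. A minimal sketch of how they are commonly populated (the library paths are an assumption and depend on the host's CUDA installation):

export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"   # mount the host's CUDA/driver libraries into the container
export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')   # expose every /dev/nvidia* device node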
...
...
@@ -359,7 +357,7 @@ for users to browse and understand the C++ source code.</p>
As long as we give the Paddle Docker container a name, we can run an
additional Nginx Docker container to serve the volume from the Paddle
container:
<divclass="highlight-bash"><divclass="highlight"><pre><span></span>docker run -d --name paddle-cpu-doc paddle:cpu
<divclass="highlight-bash"><divclass="highlight"><pre><span></span>docker run -d --name paddle-cpu-doc paddle:0.10.0rc1-cpu
docker run -d --volumes-from paddle-cpu-doc -p <spanclass="m">8088</span>:80 nginx
docker run <spanclass="si">${</span><spanclass="nv">CUDA_SO</span><spanclass="si">}</span><spanclass="si">${</span><spanclass="nv">DEVICES</span><spanclass="si">}</span> -it paddledev/paddle:gpu-latest
docker run <spanclass="si">${</span><spanclass="nv">CUDA_SO</span><spanclass="si">}</span><spanclass="si">${</span><spanclass="nv">DEVICES</span><spanclass="si">}</span> -it paddledev/paddle:0.10.0rc1-gpu
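Given the -p 8088:80 mapping above, Nginx serves the documentation volume on host port 8088. A quick smoke test, using the same host port as above:

curl -I http://localhost:8088/   # should return an HTTP response from Nginx if the docs volume is mounted and served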