Commit 93693194 authored by Travis CI

Deploy to GitHub Pages: 0f868819

Parent: 3114d1e0
...@@ -68,7 +68,7 @@ As a simple example, consider the following:
1. **BLAS Dependencies (optional)**
CMake will search BLAS libraries from the system. If not found, OpenBLAS will be downloaded, built and installed automatically.
To utilize a preinstalled BLAS, you can simply specify MKL, OpenBLAS or ATLAS via `MKL_ROOT`, `OPENBLAS_ROOT` or `ATLAS_ROOT`.
```bash
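# A hedged sketch of how a preinstalled BLAS path is passed to CMake: the MKL
# line also appears in the rendered HTML further down; the OpenBLAS line is an
# assumption that follows the same pattern with the OPENBLAS_ROOT option named above.
# specify MKL
cmake .. -DMKL_ROOT=<mkl_path>
# or specify OpenBLAS
cmake .. -DOPENBLAS_ROOT=<openblas_path>
```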
...@@ -131,9 +131,9 @@ As a simple example, consider the following:
To build the GPU version, you will need the following installed:
1. a CUDA-capable GPU
2. A supported version of Linux with a GCC compiler and toolchain
3. NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads)
4. NVIDIA cuDNN Library (available at https://developer.nvidia.com/cudnn)
The CUDA development environment relies on tight integration with the host development environment,
including the host compiler and C runtime libraries, and is therefore only supported on
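For the GPU requirements listed above, a minimal sketch of how the GPU build might be switched on at configure time; the `WITH_GPU` and `CUDNN_ROOT` option names are assumptions, not taken from this excerpt:

```bash
# Assumed option names (not confirmed by this excerpt): enable the GPU build
# and point CMake at the cuDNN installation.
cmake .. -DWITH_GPU=ON -DCUDNN_ROOT=<cudnn_install_path>
```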
...@@ -172,6 +172,7 @@ export PATH=<path to install>/bin:$PATH
# install PaddlePaddle Python modules.
sudo pip install <path to install>/opt/paddle/share/wheels/*.whl
```
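As a quick sanity check after installing the wheels (assuming the wheel provides a `paddle` Python package, which this excerpt does not show):

```bash
# Hypothetical check that the installed module is importable; the package
# name `paddle` is an assumption.
python -c "import paddle; print(paddle.__file__)"
```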
## <span id="centos">Build on CentOS 7</span>
### Install Dependencies
...@@ -192,9 +193,9 @@ sudo pip install <path to install>/opt/paddle/share/wheels/*.whl
To build the GPU version, you will need the following installed:
1. a CUDA-capable GPU
2. A supported version of Linux with a GCC compiler and toolchain
3. NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads)
4. NVIDIA cuDNN Library (available at https://developer.nvidia.com/cudnn)
The CUDA development environment relies on tight integration with the host development environment,
including the host compiler and C runtime libraries, and is therefore only supported on
...@@ -222,7 +223,7 @@ mkdir build && cd build
```
Finally, you can build and install PaddlePaddle:
```bash
# you can add build options here, such as:
cmake3 .. -DCMAKE_INSTALL_PREFIX=<path to install>
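# A hedged sketch of the usual continuation (assumption, not shown in this
# excerpt): compile with the standard Makefile targets generated by CMake,
# then install into the configured prefix; add sudo if <path to install>
# is not writable by the current user.
make -j"$(nproc)"
make install
```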
......
...@@ -252,7 +252,7 @@ For CUDA 8.0, GCC versions later than 5.3 are not supported!</p>
<p>As a simple example, consider the following:</p>
<ol>
<li><p class="first"><strong>BLAS Dependencies (optional)</strong></p>
<p>CMake will search BLAS libraries from the system. If not found, OpenBLAS will be downloaded, built and installed automatically.
To utilize a preinstalled BLAS, you can simply specify MKL, OpenBLAS or ATLAS via <code class="docutils literal">MKL_ROOT</code>, <code class="docutils literal">OPENBLAS_ROOT</code> or <code class="docutils literal">ATLAS_ROOT</code>.</p>
<div class="highlight-bash"><div class="highlight"><pre># specify MKL
cmake .. -DMKL_ROOT=&lt;mkl_path&gt;
...@@ -313,9 +313,9 @@ curl -sSL https://cmake.org/files/v3.4/cmake-3.4.1.tar.gz |
<li><p class="first"><strong>GPU Dependencies (optional)</strong></p>
<p>To build the GPU version, you will need the following installed:</p>
<div class="highlight-default"><div class="highlight"><pre> 1. a CUDA-capable GPU
 2. A supported version of Linux with a GCC compiler and toolchain
 3. NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads)
 4. NVIDIA cuDNN Library (available at https://developer.nvidia.com/cudnn)
</pre></div>
</div>
<p>The CUDA development environment relies on tight integration with the host development environment,
...@@ -371,9 +371,9 @@ sudo pip install &#39;protobuf&gt;=3.0.0&#39;
<li><p class="first"><strong>GPU Dependencies (optional)</strong></p>
<p>To build the GPU version, you will need the following installed:</p>
<div class="highlight-default"><div class="highlight"><pre> 1. a CUDA-capable GPU
 2. A supported version of Linux with a GCC compiler and toolchain
 3. NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads)
 4. NVIDIA cuDNN Library (available at https://developer.nvidia.com/cudnn)
</pre></div>
</div>
<p>The CUDA development environment relies on tight integration with the host development environment,
......