<li><aclass="reference internal"href="#paddlepaddle-whl-is-not-a-supported-wheel-on-this-platform"id="id26">7. paddlepaddle*.whl is not a supported wheel on this platform.</a></li>
<li><aclass="reference internal"href="#docker-gpu-cuda-driver-version-is-insufficient"id="id28">9. 运行Docker GPU镜像出现 “CUDA driver version is insufficient”</a></li>
<li><aclass="reference internal"href="#a-protocol-message-was-rejected-because-it-was-too-big"id="id31">12. A protocol message was rejected because it was too big</a></li>
<li><aclass="reference internal"href="#import-paddle-v2-as-paddle-importerror-no-module-named-v2"id="id34">15. 编译安装后执行 import paddle.v2 as paddle 报ImportError: No module named v2</a></li>
<li><aclass="reference internal"href="#paddlepaddle-whl-is-not-a-supported-wheel-on-this-platform"id="id25">7. paddlepaddle*.whl is not a supported wheel on this platform.</a></li>
<li><aclass="reference internal"href="#docker-gpu-cuda-driver-version-is-insufficient"id="id27">9. 运行Docker GPU镜像出现 “CUDA driver version is insufficient”</a></li>
<li><aclass="reference internal"href="#a-protocol-message-was-rejected-because-it-was-too-big"id="id30">12. A protocol message was rejected because it was too big</a></li>
<li><aclass="reference internal"href="#import-paddle-v2-as-paddle-importerror-no-module-named-v2"id="id33">15. 编译安装后执行 import paddle.v2 as paddle 报ImportError: No module named v2</a></li>
<h2><aclass="toc-backref"href="#id26">7. paddlepaddle*.whl is not a supported wheel on this platform.</a><aclass="headerlink"href="#paddlepaddle-whl-is-not-a-supported-wheel-on-this-platform"title="永久链接至标题">¶</a></h2>
<h2><aclass="toc-backref"href="#id25">7. paddlepaddle*.whl is not a supported wheel on this platform.</a><aclass="headerlink"href="#paddlepaddle-whl-is-not-a-supported-wheel-on-this-platform"title="永久链接至标题">¶</a></h2>
<h2><aclass="toc-backref"href="#id28">9. 运行Docker GPU镜像出现 “CUDA driver version is insufficient”</a><aclass="headerlink"href="#docker-gpu-cuda-driver-version-is-insufficient"title="永久链接至标题">¶</a></h2>
<h2><aclass="toc-backref"href="#id27">9. 运行Docker GPU镜像出现 “CUDA driver version is insufficient”</a><aclass="headerlink"href="#docker-gpu-cuda-driver-version-is-insufficient"title="永久链接至标题">¶</a></h2>
<p>When running the PaddlePaddle GPU Docker image, users often see <cite>Cuda Error: CUDA driver version is insufficient for CUDA runtime version</cite>. The cause is that the CUDA driver and libraries on the host machine have not been mapped into the container.</p>
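<p>The following is a minimal sketch of two ways to expose the host driver to the container. The library path <cite>/usr/lib64</cite>, the device nodes under <cite>/dev/nvidia*</cite>, and the image tag <cite>paddlepaddle/paddle:latest-gpu</cite> are assumptions that may differ on your system; if <cite>nvidia-docker</cite> is installed, it performs this mapping automatically.</p>
<div class="highlight-bash"><div class="highlight"><pre># Sketch only: adjust library paths, device nodes, and the image tag to your system.
# Option 1: let nvidia-docker map the driver and GPU devices automatically.
nvidia-docker run -it paddlepaddle/paddle:latest-gpu /bin/bash

# Option 2: mount the driver libraries and GPU devices by hand.
export CUDA_SO="$(ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
export DEVICES="$(ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')"
docker run ${CUDA_SO} ${DEVICES} -it paddlepaddle/paddle:latest-gpu /bin/bash
</pre></div></div>
<p>Either way, the container sees the same driver version as the host, which is what the CUDA runtime checks when it raises this error.</p>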
<divclass="highlight-bash"><divclass="highlight"><pre><span></span>CMake Warning at cmake/version.cmake:20 <spanclass="o">(</span>message<spanclass="o">)</span>:
<h2><aclass="toc-backref"href="#id31">12. A protocol message was rejected because it was too big</a><aclass="headerlink"href="#a-protocol-message-was-rejected-because-it-was-too-big"title="永久链接至标题">¶</a></h2>
<h2><aclass="toc-backref"href="#id30">12. A protocol message was rejected because it was too big</a><aclass="headerlink"href="#a-protocol-message-was-rejected-because-it-was-too-big"title="永久链接至标题">¶</a></h2>
<p>If the following error appears while training NLP-related models:</p>
<divclass="highlight-bash"><divclass="highlight"><pre><span></span><spanclass="o">[</span>libprotobuf ERROR google/protobuf/io/coded_stream.cc:171<spanclass="o">]</span> A protocol message was rejected because it was too big <spanclass="o">(</span>more than <spanclass="m">67108864</span> bytes<spanclass="o">)</span>. To increase the limit <spanclass="o">(</span>or to disable these warnings<spanclass="o">)</span>, see CodedInputStream::SetTotalBytesLimit<spanclass="o">()</span> in google/protobuf/io/coded_stream.h.
<h2><aclass="toc-backref"href="#id34">15. 编译安装后执行 import paddle.v2 as paddle 报ImportError: No module named v2</a><aclass="headerlink"href="#import-paddle-v2-as-paddle-importerror-no-module-named-v2"title="永久链接至标题">¶</a></h2>
<h2><aclass="toc-backref"href="#id33">15. 编译安装后执行 import paddle.v2 as paddle 报ImportError: No module named v2</a><aclass="headerlink"href="#import-paddle-v2-as-paddle-importerror-no-module-named-v2"title="永久链接至标题">¶</a></h2>