Commit 22e51819 authored by T Travis CI

Deploy to GitHub Pages: be8e3b57

Parent fa4c0087
......@@ -1035,7 +1035,7 @@ tuple.</p>
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>word_idx</strong> (<em>dict</em>) &#8211; word dictionary</li>
<li><strong>n</strong> (<em>int</em>) &#8211; sliding window size if type is ngram, otherwise max length of sequence</li>
<li><strong>data_type</strong> (<em>member variable of DataType</em><em> (</em><em>NGRAM</em><em> or </em><em>SEQ</em><em>)</em><em></em>) &#8211; data type (ngram or sequence)</li>
<li><strong>data_type</strong> (<em>member variable of DataType</em><em> (</em><em>NGRAM</em><em> or </em><em>SEQ</em><em>)</em>) &#8211; data type (ngram or sequence)</li>
</ul>
</td>
</tr>
......@@ -1062,7 +1062,7 @@ tuple.</p>
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>word_idx</strong> (<em>dict</em>) &#8211; word dictionary</li>
<li><strong>n</strong> (<em>int</em>) &#8211; sliding window size if type is ngram, otherwise max length of sequence</li>
<li><strong>data_type</strong> (<em>member variable of DataType</em><em> (</em><em>NGRAM</em><em> or </em><em>SEQ</em><em>)</em><em></em>) &#8211; data type (ngram or sequence)</li>
<li><strong>data_type</strong> (<em>member variable of DataType</em><em> (</em><em>NGRAM</em><em> or </em><em>SEQ</em><em>)</em>) &#8211; data type (ngram or sequence)</li>
</ul>
</td>
</tr>
......
......@@ -406,7 +406,7 @@ in the path of cost layer.</li>
<li><strong>reader</strong> (<em>collections.Iterable</em>) &#8211; A reader that reads and yields data items. Usually we use a
batched reader to do mini-batch training.</li>
<li><strong>num_passes</strong> &#8211; The total number of training passes.</li>
<li><strong>event_handler</strong> (<em></em><em>(</em><em>BaseEvent</em><em>) </em><em>=&gt; None</em>) &#8211; Event handler. A method that will be invoked when an event
<li><strong>event_handler</strong> (<em>(</em><em>BaseEvent</em><em>) </em><em>=&gt; None</em>) &#8211; Event handler. A method that will be invoked when an event
occurs.</li>
<li><strong>feeding</strong> (<em>dict|list</em>) &#8211; Feeding is a map of neural network input name and array
index that reader returns.</li>
......
......@@ -232,7 +232,7 @@ PaddlePaddle parameters use the name :code:`name` as the parameter ID; parameters with the same name
The user needs to specify the local Python paths: ``<exc_path>``, ``<lib_path>``, ``<inc_path>``
10. A protocol message was rejected because it was too big
11. Compiling from source with CMake gives Paddle version 0.0.0
----------------------------------------------------------------
If running :code:`paddle version` prints :code:`PaddlePaddle 0.0.0`, or running :code:`cmake ..` prints
.. code-block:: bash
CMake Warning at cmake/version.cmake:20 (message):
Cannot add paddle version from git tag
then the user needs to fetch all remote branches to the local machine with :code:`git fetch upstream` and re-run cmake.
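The fix works because the build derives its version from git tags. A minimal sketch with a throwaway repository (hypothetical tag name, not the Paddle tree itself) shows the mechanism a `version.cmake`-style script relies on:

```shell
# Hypothetical stand-in repo: version scripts typically read the nearest
# reachable git tag, so a clone without tags cannot produce a version.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag v0.10.0          # normally arrives via: git fetch upstream
git describe --tags      # prints v0.10.0, the string the build picks up
```

Once the tags are present locally, `git describe` resolves, and re-running cmake picks up the real version instead of 0.0.0.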
12. A protocol message was rejected because it was too big
----------------------------------------------------------
If the following error occurs while training NLP-related models:
......@@ -270,7 +282,7 @@ PaddlePaddle parameters use the name :code:`name` as the parameter ID; parameters with the same name
For the complete source code, refer to the `seqToseq <https://github.com/PaddlePaddle/Paddle/tree/develop/demo/seqToseq>`_ example.
11. How to specify GPU devices
13. How to specify GPU devices
------------------------------
For example, if a machine has 4 GPUs numbered from 0 and you want to use GPUs 2 and 3:
......@@ -288,7 +300,7 @@ PaddlePaddle parameters use the name :code:`name` as the parameter ID; parameters with the same name
paddle train --use_gpu=true --trainer_count=2 --gpu_id=2
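The environment-variable route can be sketched as follows; the `paddle train` invocation is left as a comment since it assumes a working Paddle install:

```shell
# Mask the devices: only GPUs 2 and 3 are visible to the process,
# and inside it they are renumbered as devices 0 and 1.
export CUDA_VISIBLE_DEVICES=2,3
echo "$CUDA_VISIBLE_DEVICES"    # prints 2,3
# With the mask in place, no gpu_id flag is needed:
#   paddle train --use_gpu=true --trainer_count=2
```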
12. What to do when a :code:`Floating point exception` occurs during training and training exits?
14. What to do when a :code:`Floating point exception` occurs during training and training exits?
---------------------------------------------------------------------------------------------------
The Paddle binary traps floating-point exceptions at runtime: as soon as one occurs (i.e. a NaN or Inf appears during training), it exits immediately. Floating-point exceptions are usually caused by overflow, division by zero, and similar problems.
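As a framework-agnostic sketch (plain Python floats, not Paddle internals), the NaN/Inf values that trigger this exception typically arise from overflow or ill-defined operations, and can be guarded against before they propagate:

```python
import math

overflowed = 1e308 * 10                      # exceeds double range -> inf
ill_defined = float("inf") - float("inf")    # inf - inf -> nan

print(math.isinf(overflowed), math.isnan(ill_defined))  # True True

# A common mitigation (hypothetical helper, not a Paddle API): replace
# non-finite values before they spread through the computation.
def finite_or_zero(x):
    return x if math.isfinite(x) else 0.0

print(finite_or_zero(overflowed))            # 0.0
```

In practice the same checks are applied to learning rates, gradients, or inputs, which are the usual entry points for overflow and division by zero.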
......
......@@ -1042,7 +1042,7 @@ tuple.</p>
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>word_idx</strong> (<em>dict</em>) &#8211; word dictionary</li>
<li><strong>n</strong> (<em>int</em>) &#8211; sliding window size if type is ngram, otherwise max length of sequence</li>
<li><strong>data_type</strong> (<em>member variable of DataType</em><em> (</em><em>NGRAM</em><em> or </em><em>SEQ</em><em>)</em><em></em>) &#8211; data type (ngram or sequence)</li>
<li><strong>data_type</strong> (<em>member variable of DataType</em><em> (</em><em>NGRAM</em><em> or </em><em>SEQ</em><em>)</em>) &#8211; data type (ngram or sequence)</li>
</ul>
</td>
</tr>
......@@ -1069,7 +1069,7 @@ tuple.</p>
<tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple">
<li><strong>word_idx</strong> (<em>dict</em>) &#8211; word dictionary</li>
<li><strong>n</strong> (<em>int</em>) &#8211; sliding window size if type is ngram, otherwise max length of sequence</li>
<li><strong>data_type</strong> (<em>member variable of DataType</em><em> (</em><em>NGRAM</em><em> or </em><em>SEQ</em><em>)</em><em></em>) &#8211; data type (ngram or sequence)</li>
<li><strong>data_type</strong> (<em>member variable of DataType</em><em> (</em><em>NGRAM</em><em> or </em><em>SEQ</em><em>)</em>) &#8211; data type (ngram or sequence)</li>
</ul>
</td>
</tr>
......
......@@ -413,7 +413,7 @@ in the path of cost layer.</li>
<li><strong>reader</strong> (<em>collections.Iterable</em>) &#8211; A reader that reads and yields data items. Usually we use a
batched reader to do mini-batch training.</li>
<li><strong>num_passes</strong> &#8211; The total number of training passes.</li>
<li><strong>event_handler</strong> (<em></em><em>(</em><em>BaseEvent</em><em>) </em><em>=&gt; None</em>) &#8211; Event handler. A method that will be invoked when an event
<li><strong>event_handler</strong> (<em>(</em><em>BaseEvent</em><em>) </em><em>=&gt; None</em>) &#8211; Event handler. A method that will be invoked when an event
occurs.</li>
<li><strong>feeding</strong> (<em>dict|list</em>) &#8211; Feeding is a map of neural network input name and array
index that reader returns.</li>
......
......@@ -210,9 +210,10 @@
<li><a class="reference internal" href="#python" id="id23">8. None of the Python-related unit tests pass</a></li>
<li><a class="reference internal" href="#docker-gpu-cuda-driver-version-is-insufficient" id="id24">9. Running the Docker GPU image fails with &#8220;CUDA driver version is insufficient&#8221;</a></li>
<li><a class="reference internal" href="#cmake-pythonlibspythoninterp" id="id25">10. Compiling from source with CMake: the detected PythonLibs and PythonInterp versions do not match</a></li>
<li><a class="reference internal" href="#a-protocol-message-was-rejected-because-it-was-too-big" id="id26">10. A protocol message was rejected because it was too big</a></li>
<li><a class="reference internal" href="#gpu" id="id27">11. How to specify GPU devices</a></li>
<li><a class="reference internal" href="#floating-point-exception" id="id28">12. What to do when a <code class="code docutils literal"><span class="pre">Floating</span> <span class="pre">point</span> <span class="pre">exception</span></code> occurs during training and training exits?</a></li>
<li><a class="reference internal" href="#cmake-paddle0-0-0" id="id26">11. Compiling from source with CMake gives Paddle version 0.0.0</a></li>
<li><a class="reference internal" href="#a-protocol-message-was-rejected-because-it-was-too-big" id="id27">12. A protocol message was rejected because it was too big</a></li>
<li><a class="reference internal" href="#gpu" id="id28">13. How to specify GPU devices</a></li>
<li><a class="reference internal" href="#floating-point-exception" id="id29">14. What to do when a <code class="code docutils literal"><span class="pre">Floating</span> <span class="pre">point</span> <span class="pre">exception</span></code> occurs during training and training exits?</a></li>
</ul>
</li>
</ul>
......@@ -467,8 +468,17 @@ $ docker run <span class="si">${</span><span class="nv">CUDA_SO</span><span clas
</div></blockquote>
<p>The user needs to specify the local Python paths: <code class="docutils literal"><span class="pre">&lt;exc_path&gt;</span></code>, <code class="docutils literal"><span class="pre">&lt;lib_path&gt;</span></code>, <code class="docutils literal"><span class="pre">&lt;inc_path&gt;</span></code></p>
</div>
<div class="section" id="cmake-paddle0-0-0">
<h2><a class="toc-backref" href="#id26">11. Compiling from source with CMake gives Paddle version 0.0.0</a><a class="headerlink" href="#cmake-paddle0-0-0" title="Permalink to this headline"></a></h2>
<p>If running <code class="code docutils literal"><span class="pre">paddle</span> <span class="pre">version</span></code> prints <code class="code docutils literal"><span class="pre">PaddlePaddle</span> <span class="pre">0.0.0</span></code>, or running <code class="code docutils literal"><span class="pre">cmake</span> <span class="pre">..</span></code> prints</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span>CMake Warning at cmake/version.cmake:20 <span class="o">(</span>message<span class="o">)</span>:
Cannot add paddle version from git tag
</pre></div>
</div>
<p>then the user needs to fetch all remote branches to the local machine with <code class="code docutils literal"><span class="pre">git</span> <span class="pre">fetch</span> <span class="pre">upstream</span></code> and re-run cmake.</p>
</div>
<div class="section" id="a-protocol-message-was-rejected-because-it-was-too-big">
<h2><a class="toc-backref" href="#id26">10. A protocol message was rejected because it was too big</a><a class="headerlink" href="#a-protocol-message-was-rejected-because-it-was-too-big" title="Permalink to this headline"></a></h2>
<h2><a class="toc-backref" href="#id27">12. A protocol message was rejected because it was too big</a><a class="headerlink" href="#a-protocol-message-was-rejected-because-it-was-too-big" title="Permalink to this headline"></a></h2>
<p>If the following error occurs while training NLP-related models:</p>
<div class="highlight-bash"><div class="highlight"><pre><span></span><span class="o">[</span>libprotobuf ERROR google/protobuf/io/coded_stream.cc:171<span class="o">]</span> A protocol message was rejected because it was too big <span class="o">(</span>more than <span class="m">67108864</span> bytes<span class="o">)</span>. To increase the limit <span class="o">(</span>or to disable these warnings<span class="o">)</span>, see CodedInputStream::SetTotalBytesLimit<span class="o">()</span> in google/protobuf/io/coded_stream.h.
F1205 <span class="m">14</span>:59:50.295174 <span class="m">14703</span> TrainerConfigHelper.cpp:59<span class="o">]</span> Check failed: m-&gt;conf.ParseFromString<span class="o">(</span>configProtoStr<span class="o">)</span>
......@@ -499,7 +509,7 @@ F1205 <span class="m">14</span>:59:50.295174 <span class="m">14703</span> Traine
<p>For the complete source code, refer to the <a class="reference external" href="https://github.com/PaddlePaddle/Paddle/tree/develop/demo/seqToseq">seqToseq</a> example.</p>
</div>
<div class="section" id="gpu">
<h2><a class="toc-backref" href="#id27">11. How to specify GPU devices</a><a class="headerlink" href="#gpu" title="Permalink to this headline"></a></h2>
<h2><a class="toc-backref" href="#id28">13. How to specify GPU devices</a><a class="headerlink" href="#gpu" title="Permalink to this headline"></a></h2>
<p>For example, if a machine has 4 GPUs numbered from 0 and you want to use GPUs 2 and 3:</p>
<ul class="simple">
<li>Way 1: specify particular GPUs via the <a class="reference external" href="http://www.acceleware.com/blog/cudavisibledevices-masking-gpus">CUDA_VISIBLE_DEVICES</a> environment variable.</li>
......@@ -515,7 +525,7 @@ F1205 <span class="m">14</span>:59:50.295174 <span class="m">14703</span> Traine
</div>
</div>
<div class="section" id="floating-point-exception">
<h2><a class="toc-backref" href="#id28">12. What to do when a <code class="code docutils literal"><span class="pre">Floating</span> <span class="pre">point</span> <span class="pre">exception</span></code> occurs during training and training exits?</a><a class="headerlink" href="#floating-point-exception" title="Permalink to this headline"></a></h2>
<h2><a class="toc-backref" href="#id29">14. What to do when a <code class="code docutils literal"><span class="pre">Floating</span> <span class="pre">point</span> <span class="pre">exception</span></code> occurs during training and training exits?</a><a class="headerlink" href="#floating-point-exception" title="Permalink to this headline"></a></h2>
<p>The Paddle binary traps floating-point exceptions at runtime: as soon as one occurs (i.e. a NaN or Inf appears during training), it exits immediately. Floating-point exceptions are usually caused by overflow, division by zero, and similar problems.
The main causes fall into two areas:</p>
<ul class="simple">
......