Commit fa597b30 authored by Travis CI

Deploy to GitHub Pages: 0f8dd956

Parent 51acd478
@@ -95,11 +95,11 @@ paddle.init(
 Parameter Description
 - use_gpu: **optional, default False**, set to "True" to enable GPU training.
-- trainer_count: **required, default 1**, total count of trainers in the training job.
+- trainer_count: **required, default 1**, number of threads in the current trainer.
 - port: **required, default 7164**, port to connect to the parameter server.
 - ports_num: **required, default 1**, number of ports for communication.
 - ports_num_for_sparse: **required, default 0**, number of ports for sparse-type calculation.
-- num_gradient_servers: **required, default 1**, total number of gradient servers.
+- num_gradient_servers: **required, default 1**, number of trainers in the current job.
 - trainer_id: **required, default 0**, unique ID of each trainer, starting from 0.
 - pservers: **required, default 127.0.0.1**, list of parameter server IPs, separated by ",".
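For concreteness, here is a minimal sketch of how these flags could be passed to `paddle.init()` on one trainer of a two-trainer job. The `paddle.v2` import, the IP addresses, and the concrete values are illustrative assumptions, not part of this commit:

```python
# Minimal sketch: initialize trainer 0 of a hypothetical 2-trainer job.
# The import path, IPs, and ports below are assumptions for illustration.
import paddle.v2 as paddle

paddle.init(
    use_gpu=False,            # CPU training
    trainer_count=1,          # number of threads in this trainer process
    port=7164,                # base port for connecting to the parameter servers
    ports_num=1,              # ports used for dense parameter communication
    ports_num_for_sparse=1,   # ports used for sparse parameter communication
    num_gradient_servers=2,   # total number of trainers in the job
    trainer_id=0,             # unique ID of this trainer, starting from 0
    pservers="192.168.0.2,192.168.0.3",  # comma-separated pserver IPs
)
```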
@@ -301,11 +301,11 @@ python train.py
 <p>Parameter Description</p>
 <ul class="simple">
 <li>use_gpu: <strong>optional, default False</strong>, set to &#8220;True&#8221; to enable GPU training.</li>
-<li>trainer_count: <strong>required, default 1</strong>, total count of trainers in the training job.</li>
+<li>trainer_count: <strong>required, default 1</strong>, number of threads in the current trainer.</li>
 <li>port: <strong>required, default 7164</strong>, port to connect to the parameter server.</li>
 <li>ports_num: <strong>required, default 1</strong>, number of ports for communication.</li>
 <li>ports_num_for_sparse: <strong>required, default 0</strong>, number of ports for sparse-type calculation.</li>
-<li>num_gradient_servers: <strong>required, default 1</strong>, total number of gradient servers.</li>
+<li>num_gradient_servers: <strong>required, default 1</strong>, number of trainers in the current job.</li>
 <li>trainer_id: <strong>required, default 0</strong>, unique ID of each trainer, starting from 0.</li>
 <li>pservers: <strong>required, default 127.0.0.1</strong>, list of parameter server IPs, separated by &#8221;,&#8221;.</li>
 </ul>
The source diff for this file is too large to display; you can view the blob instead.
@@ -92,11 +92,11 @@ paddle.init(
 Parameter Description
 - use_gpu: **optional, default False**, whether to enable GPU training
-- trainer_count: **required, default 1**, total number of trainers in the current training job
+- trainer_count: **required, default 1**, number of threads in the current trainer
 - port: **required, default 7164**, port used to connect to the pserver
 - ports_num: **required, default 1**, number of ports used to connect to the pserver
 - ports_num_for_sparse: **required, default 0**, number of ports used for sparse-parameter communication with the pserver
-- num_gradient_servers: **required, default 1**, total number of pservers in the current training job
+- num_gradient_servers: **required, default 1**, total number of trainers in the current training job
 - trainer_id: **required, default 0**, unique ID of each trainer, an integer starting from 0
 - pservers: **required, default 127.0.0.1**, list of IPs of the pservers started for the current training job, multiple IPs separated by ","
@@ -319,11 +319,11 @@ PaddlePaddle <span class="m">0</span>.10.0, compiled with
 <p>Parameter Description</p>
 <ul class="simple">
 <li>use_gpu: <strong>optional, default False</strong>, whether to enable GPU training</li>
-<li>trainer_count: <strong>required, default 1</strong>, total number of trainers in the current training job</li>
+<li>trainer_count: <strong>required, default 1</strong>, number of threads in the current trainer</li>
 <li>port: <strong>required, default 7164</strong>, port used to connect to the pserver</li>
 <li>ports_num: <strong>required, default 1</strong>, number of ports used to connect to the pserver</li>
 <li>ports_num_for_sparse: <strong>required, default 0</strong>, number of ports used for sparse-parameter communication with the pserver</li>
-<li>num_gradient_servers: <strong>required, default 1</strong>, total number of pservers in the current training job</li>
+<li>num_gradient_servers: <strong>required, default 1</strong>, total number of trainers in the current training job</li>
 <li>trainer_id: <strong>required, default 0</strong>, unique ID of each trainer, an integer starting from 0</li>
 <li>pservers: <strong>required, default 127.0.0.1</strong>, list of IPs of the pservers started for the current training job, multiple IPs separated by ","</li>
 </ul>
This diff is collapsed.