From ada82a3e24cd8205c61940f3567ab1324ad36aea Mon Sep 17 00:00:00 2001
From: Yancey
Date: Tue, 13 Mar 2018 10:46:32 +0800
Subject: [PATCH] Add is_local parameter description (#8893)

* add is_local parameter description

* update

* update by comment
---
 doc/v2/howto/cluster/cmd_argument_cn.md | 7 +++++++
 doc/v2/howto/cluster/cmd_argument_en.md | 8 ++++++++
 2 files changed, 15 insertions(+)

diff --git a/doc/v2/howto/cluster/cmd_argument_cn.md b/doc/v2/howto/cluster/cmd_argument_cn.md
index 40e1dde485..c0ba093cbf 100644
--- a/doc/v2/howto/cluster/cmd_argument_cn.md
+++ b/doc/v2/howto/cluster/cmd_argument_cn.md
@@ -71,6 +71,13 @@ paddle.init(
 - trainer_id:**必选,默认0**,每个trainer的唯一ID,从0开始的整数
 - pservers:**必选,默认127.0.0.1**,当前训练任务启动的pserver的IP列表,多个IP使用“,”隔开
 
+```python
+trainer = paddle.trainer.SGD(..., is_local=False)
+```
+
+参数说明
+
+- is_local: **必选, 默认True**, 是否在本地更新参数,设置为False时通过PServer更新参数
 
 ## 准备数据集
 
diff --git a/doc/v2/howto/cluster/cmd_argument_en.md b/doc/v2/howto/cluster/cmd_argument_en.md
index 40179c28f8..df1381a00f 100644
--- a/doc/v2/howto/cluster/cmd_argument_en.md
+++ b/doc/v2/howto/cluster/cmd_argument_en.md
@@ -73,6 +73,14 @@ Parameter Description
 - trainer_id: **required, default 0**, ID for every trainer, start from 0.
 - pservers: **required, default 127.0.0.1**, list of IPs of parameter servers, separated by ",".
 
+```python
+trainer = paddle.trainer.SGD(..., is_local=False)
+```
+
+Parameter Description
+
+- is_local: **required, default True**, whether to update parameters locally; set it to False to update parameters through the parameter servers.
+
 ## Prepare Training Dataset
 
 Here is some example code, [prepare.py](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/howto/usage/cluster/src/word2vec/prepare.py); it downloads the public `imikolov` dataset and splits it into multiple files according to job parallelism (trainer count). Modify `SPLIT_COUNT` at the beginning of `prepare.py` to change the number of output files.
-- 
GitLab
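
For reference, a minimal sketch of how the arguments documented by this patch might fit together in a PaddlePaddle v2 trainer script. The toy linear-regression network, the optimizer choice, and the concrete `paddle.init` values below are placeholders for illustration only; just `is_local`, `trainer_id`, and `pservers` correspond to the options described in `cmd_argument_*.md`.

```python
import paddle.v2 as paddle

# Runtime settings; in a real cluster job these values are usually
# injected by the job launcher (see the argument descriptions above).
paddle.init(
    use_gpu=False,
    trainer_count=1,
    port=7164,
    ports_num=1,
    ports_num_for_sparse=1,
    num_gradient_servers=1,
    trainer_id=0,            # unique ID of this trainer, starting from 0
    pservers="127.0.0.1")    # comma-separated list of pserver IPs

# Toy linear-regression network, used only to make the sketch self-contained.
x = paddle.layer.data(name="x", type=paddle.data_type.dense_vector(13))
y = paddle.layer.data(name="y", type=paddle.data_type.dense_vector(1))
y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
cost = paddle.layer.square_error_cost(input=y_predict, label=y)

parameters = paddle.parameters.create(cost)
optimizer = paddle.optimizer.Momentum(momentum=0)

# is_local=False: parameters are updated through the pservers listed in
# paddle.init(); the default (True) keeps all updates local to this process.
trainer = paddle.trainer.SGD(
    cost=cost,
    parameters=parameters,
    update_equation=optimizer,
    is_local=False)
```

For single-machine training the same script would keep `is_local=True` (the default) and would not need the pserver-related `paddle.init` arguments.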