diff --git a/develop/doc/_sources/howto/usage/cluster/cluster_train_en.md.txt b/develop/doc/_sources/howto/usage/cluster/cluster_train_en.md.txt
index 915405ca5b446981515e301ca4b7ee065a82a9ff..28cd1fa7903e559e33a7fc2f00172fdfbe2fdc97 100644
--- a/develop/doc/_sources/howto/usage/cluster/cluster_train_en.md.txt
+++ b/develop/doc/_sources/howto/usage/cluster/cluster_train_en.md.txt
@@ -52,7 +52,7 @@ Parameter Description
 
 - port: **required, default 7164**, port which parameter server will listen on. If ports_num greater than 1, parameter server will listen on multiple ports for more network throughput.
 - ports_num: **required, default 1**, total number of ports will listen on.
-- ports_num_for_sparse: **required, default 1**, number of ports which serves sparse parameter update.
+- ports_num_for_sparse: **required, default 0**, number of ports which serves sparse parameter update.
 - num_gradient_servers: **required, default 1**, total number of gradient servers.
 
 ### Starting trainer
@@ -98,7 +98,7 @@ Parameter Description
 - trainer_count: **required, default 1**, total count of trainers in the training job.
 - port: **required, default 7164**, port to connect to parameter server.
 - ports_num: **required, default 1**, number of ports for communication.
-- ports_num_for_sparse: **required, default 1**, number of ports for sparse type caculation.
+- ports_num_for_sparse: **required, default 0**, number of ports for sparse type caculation.
 - num_gradient_servers: **required, default 1**, total number of gradient server.
 - trainer_id: **required, default 0**, ID for every trainer, start from 0.
 - pservers: **required, default 127.0.0.1**, list of IPs of parameter servers, separated by ",".
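The pserver settings above compose into a port layout: `ports_num` ports for dense parameter updates plus `ports_num_for_sparse` ports for sparse updates. A minimal plain-Python sketch of that layout (an illustration of the convention, not PaddlePaddle code, assuming the server listens on consecutive ports starting at `--port`) shows why the new default of 0 means no extra sparse-update ports are opened:

```python
def pserver_ports(port=7164, ports_num=1, ports_num_for_sparse=0):
    """Return (dense_ports, sparse_ports) a parameter server would listen on,
    assuming consecutive ports starting at `port`: first `ports_num` ports
    for dense updates, then `ports_num_for_sparse` for sparse updates."""
    dense = list(range(port, port + ports_num))
    sparse = list(range(port + ports_num,
                        port + ports_num + ports_num_for_sparse))
    return dense, sparse

# New default: only the dense port is opened.
print(pserver_ports())                        # ([7164], [])
# Old default (ports_num_for_sparse=1) opened one extra port.
print(pserver_ports(ports_num_for_sparse=1))  # ([7164], [7165])
```

With the old default of 1, every job paid for a sparse-update port even when it trained no sparse parameters; opting in explicitly is the point of the 1 → 0 change.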
diff --git a/develop/doc/howto/usage/cluster/cluster_train_en.html b/develop/doc/howto/usage/cluster/cluster_train_en.html
index e65bce1e95c7775ed197bc0d37c4e619abd768b7..6a9e68b486a7ad155f1dbaf01b0bab09f4a83af6 100644
--- a/develop/doc/howto/usage/cluster/cluster_train_en.html
+++ b/develop/doc/howto/usage/cluster/cluster_train_en.html
@@ -262,7 +262,7 @@ PaddlePaddle 0.10.0rc, compiled with
@@ -303,7 +303,7 @@ python train.py
 trainer_count: required, default 1, total count of trainers in the training job.
 port: required, default 7164, port to connect to parameter server.
 ports_num: required, default 1, number of ports for communication.
-ports_num_for_sparse: required, default 1, number of ports for sparse type caculation.
+ports_num_for_sparse: required, default 0, number of ports for sparse type caculation.
 num_gradient_servers: required, default 1, total number of gradient server.
 trainer_id: required, default 0, ID for every trainer, start from 0.
 pservers: required, default 127.0.0.1, list of IPs of parameter servers, separated by ",".
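On the trainer side, the `pservers` list combines with the port settings to give each trainer its full set of parameter-server endpoints. A sketch of that expansion (a hypothetical helper, not the PaddlePaddle API, assuming each trainer connects to every listed IP on every communication port, dense and sparse):

```python
def pserver_endpoints(pservers="127.0.0.1", port=7164,
                      ports_num=1, ports_num_for_sparse=0):
    """Expand the comma-separated pserver IP list into (ip, port) endpoints:
    one endpoint per IP per communication port."""
    ips = pservers.split(",")
    total_ports = ports_num + ports_num_for_sparse
    return [(ip, port + i) for ip in ips for i in range(total_ports)]

print(pserver_endpoints("192.168.0.1,192.168.0.2", ports_num_for_sparse=1))
# [('192.168.0.1', 7164), ('192.168.0.1', 7165),
#  ('192.168.0.2', 7164), ('192.168.0.2', 7165)]
```

This is why the same `port`, `ports_num`, and `ports_num_for_sparse` values appear in both the pserver and trainer parameter lists: the trainer derives the endpoints from them rather than being told explicitly.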
diff --git a/develop/doc_cn/_sources/howto/usage/cluster/cluster_train_cn.md.txt b/develop/doc_cn/_sources/howto/usage/cluster/cluster_train_cn.md.txt
index 9328b4cca45f77516e45d2f6e0bd7cb6873aa95d..c2fc86687d7106aac7c74d6dd16bc229353cb7c1 100644
--- a/develop/doc_cn/_sources/howto/usage/cluster/cluster_train_cn.md.txt
+++ b/develop/doc_cn/_sources/howto/usage/cluster/cluster_train_cn.md.txt
@@ -51,7 +51,7 @@ $ stdbuf -oL /usr/bin/nohup paddle pserver --port=7164 --ports_num=1 --ports_num
 
 - port: **required, default 7164**, starting port the pserver listens on; the total number of ports is determined by ports_num, and multiple ports starting from this one are listened on for communication
 - ports_num: **required, default 1**, number of ports to listen on
-- ports_num_for_sparse: **required, default 1**, number of ports used for communicating sparse parameters
+- ports_num_for_sparse: **required, default 0**, number of ports used for communicating sparse parameters
 - num_gradient_servers: **required, default 1**, total number of pservers in the current training job
 
 ### Starting compute nodes
@@ -95,7 +95,7 @@ paddle.init(
 - trainer_count: **required, default 1**, total number of trainers in the current training job
 - port: **required, default 7164**, port used to connect to the pserver
 - ports_num: **required, default 1**, number of ports used to connect to the pserver
-- ports_num_for_sparse: **required, default 1**, number of ports used for communicating sparse parameters with the pserver
+- ports_num_for_sparse: **required, default 0**, number of ports used for communicating sparse parameters with the pserver
 - num_gradient_servers: **required, default 1**, total number of pservers in the current training job
 - trainer_id: **required, default 0**, unique ID of each trainer, an integer starting from 0
 - pservers: **required, default 127.0.0.1**, list of IPs of the pservers started for the current training job; multiple IPs are separated by ","
diff --git a/develop/doc_cn/howto/usage/cluster/cluster_train_cn.html b/develop/doc_cn/howto/usage/cluster/cluster_train_cn.html
index 36e61c712ba628f2e445fc25943f77fcd2d79e2f..bb31a7c310788760a12a42e6b35f9948722c5e2a 100644
--- a/develop/doc_cn/howto/usage/cluster/cluster_train_cn.html
+++ b/develop/doc_cn/howto/usage/cluster/cluster_train_cn.html
@@ -275,7 +275,7 @@ PaddlePaddle 0.10.0, compiled with
@@ -315,7 +315,7 @@ PaddlePaddle 0.10.0, compiled with
 trainer_count: required, default 1, total number of trainers in the current training job
 port: required, default 7164, port used to connect to the pserver
 ports_num: required, default 1, number of ports used to connect to the pserver
-ports_num_for_sparse: required, default 1, number of ports used for communicating sparse parameters with the pserver
+ports_num_for_sparse: required, default 0, number of ports used for communicating sparse parameters with the pserver
 num_gradient_servers: required, default 1, total number of pservers in the current training job
 trainer_id: required, default 0, unique ID of each trainer, an integer starting from 0
 pservers: required, default 127.0.0.1, list of IPs of the pservers started for the current training job; multiple IPs are separated by ","
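All four files change the same default in lockstep because the pserver and trainer processes must agree on the port layout; a `ports_num_for_sparse` that differs between the two sides would open sparse-update ports on one side only. A small consistency-check sketch (a hypothetical helper, not part of PaddlePaddle, assuming the shared settings listed in the docs above must match exactly):

```python
SHARED_KEYS = ("port", "ports_num", "ports_num_for_sparse", "num_gradient_servers")

def check_port_config(pserver_conf, trainer_conf):
    """Raise ValueError if pserver and trainer disagree on any shared
    port setting; both sides derive the same port layout from them."""
    for key in SHARED_KEYS:
        if pserver_conf[key] != trainer_conf[key]:
            raise ValueError(
                f"{key} mismatch: pserver={pserver_conf[key]} "
                f"trainer={trainer_conf[key]}")

defaults = {"port": 7164, "ports_num": 1,
            "ports_num_for_sparse": 0, "num_gradient_servers": 1}
check_port_config(defaults, defaults)  # the new defaults agree on both sides
```

A job started before this change (trainer still assuming the old default of 1) against a pserver using the new default of 0 is exactly the mismatch such a check would catch.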