diff --git a/doc/howto/usage/cluster/cluster_train_cn.md b/doc/howto/usage/cluster/cluster_train_cn.md
index 9328b4cca45f77516e45d2f6e0bd7cb6873aa95d..c2fc86687d7106aac7c74d6dd16bc229353cb7c1 100644
--- a/doc/howto/usage/cluster/cluster_train_cn.md
+++ b/doc/howto/usage/cluster/cluster_train_cn.md
@@ -51,7 +51,7 @@ $ stdbuf -oL /usr/bin/nohup paddle pserver --port=7164 --ports_num=1 --ports_num
 
 - port: **required, default 7164**, starting port the pserver listens on; the total number of ports is determined by ports_num, and the pserver listens on multiple consecutive ports from this one for communication
 - ports_num: **required, default 1**, number of ports to listen on
-- ports_num_for_sparse: **required, default 1**, number of ports used for communication of sparse-type parameters
+- ports_num_for_sparse: **required, default 0**, number of ports used for communication of sparse-type parameters
 - num_gradient_servers: **required, default 1**, total number of pservers in the current training job
 
 ### Starting compute nodes
@@ -95,7 +95,7 @@ paddle.init(
 - trainer_count: **required, default 1**, total number of trainers in the current training job
 - port: **required, default 7164**, port used to connect to the pserver
 - ports_num: **required, default 1**, number of ports used to connect to the pserver
-- ports_num_for_sparse: **required, default 1**, number of ports used for sparse-type parameter communication with the pserver
+- ports_num_for_sparse: **required, default 0**, number of ports used for sparse-type parameter communication with the pserver
 - num_gradient_servers: **required, default 1**, total number of pservers in the current training job
 - trainer_id: **required, default 0**, unique ID of each trainer, an integer starting from 0
 - pservers: **required, default 127.0.0.1**, list of IPs of the pservers started for the current training job, multiple IPs separated by ","
diff --git a/doc/howto/usage/cluster/cluster_train_en.md b/doc/howto/usage/cluster/cluster_train_en.md
index 915405ca5b446981515e301ca4b7ee065a82a9ff..28cd1fa7903e559e33a7fc2f00172fdfbe2fdc97 100644
--- a/doc/howto/usage/cluster/cluster_train_en.md
+++ b/doc/howto/usage/cluster/cluster_train_en.md
@@ -52,7 +52,7 @@ Parameter Description
 
 - port: **required, default 7164**, port the parameter server will listen on. If ports_num is greater than 1, the parameter server will listen on multiple ports for higher network throughput.
 - ports_num: **required, default 1**, total number of ports to listen on.
-- ports_num_for_sparse: **required, default 1**, number of ports that serve sparse parameter updates.
+- ports_num_for_sparse: **required, default 0**, number of ports that serve sparse parameter updates.
 - num_gradient_servers: **required, default 1**, total number of gradient servers.
 
 ### Starting trainer
@@ -98,7 +98,7 @@ Parameter Description
 - trainer_count: **required, default 1**, total count of trainers in the training job.
 - port: **required, default 7164**, port to connect to the parameter server.
 - ports_num: **required, default 1**, number of ports for communication.
-- ports_num_for_sparse: **required, default 1**, number of ports used for sparse parameter updates.
+- ports_num_for_sparse: **required, default 0**, number of ports used for sparse parameter updates.
 - num_gradient_servers: **required, default 1**, total number of gradient servers.
 - trainer_id: **required, default 0**, ID of each trainer, an integer starting from 0.
 - pservers: **required, default 127.0.0.1**, list of IPs of parameter servers, separated by ",".
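
Note: as a quick illustration of the parameters documented in the hunks above, below is a minimal sketch of the trainer-side call, assuming the v2-style `import paddle.v2 as paddle` entry point. The IP list and `trainer_id` are placeholder values, and `ports_num_for_sparse=0` simply mirrors the updated default from this change.

```python
import paddle.v2 as paddle

# Minimal sketch of trainer-side initialization for cluster training.
# All values are placeholders taken from the defaults documented above.
paddle.init(
    trainer_count=1,           # total number of trainers in this job
    port=7164,                 # starting port used to reach the pserver
    ports_num=1,               # number of ports used for dense parameter updates
    ports_num_for_sparse=0,    # updated default: no dedicated sparse ports
    num_gradient_servers=1,    # total number of gradient servers
    trainer_id=0,              # unique ID of this trainer, starting from 0
    pservers="127.0.0.1")      # comma-separated list of pserver IPs
```

A pserver launched with matching `--port`, `--ports_num`, and `--ports_num_for_sparse` values, as in the command shown in the first hunk header, would pair with this configuration.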