Unverified commit 11700e23, authored by tangwei12, committed by GitHub

【paddle.distributed】 add Fleet/RoleMaker API doc (#2661)

* link fix
Parent a03d2256
@@ -204,8 +204,8 @@ Difference between save_vars, save_params, save_persistables and save_inference_model
path = "./models"
startup_prog = fluid.default_startup_program()
exe.run(startup_prog)
fluid.io.load_persistables(exe, path, startup_prog)
main_prog = fluid.default_main_program()
fluid.io.load_persistables(exe, path, main_prog)
exe.run(main_prog)
In the example above, by calling the :code:`fluid.io.load_persistables` function, PaddlePaddle Fluid finds the persistable variables among all model variables in the default :code:`fluid.Program`, i.e. :code:`main_prog`, and loads each of them from the directory specified by :code:`path` to continue training.
@@ -270,12 +270,12 @@ Difference between save_vars, save_params, save_persistables and save_inference_model
pserver_endpoints = "127.0.0.1:1001,127.0.0.1:1002"
trainers = 4
training_role = "PSERVER"
current_endpoint = "127.0.0.1:1002"
config = fluid.DistributeTranspilerConfig()
t = fluid.DistributeTranspiler(config=config)
t.transpile(trainer_id, pservers=pserver_endpoints, trainers=trainers, sync_mode=True, current_endpoint=current_endpoint)
if training_role == "PSERVER":
current_endpoint = "127.0.0.1:1001"
pserver_prog = t.get_pserver_program(current_endpoint)
pserver_startup = t.get_startup_program(current_endpoint, pserver_prog)
@@ -284,7 +284,7 @@ Difference between save_vars, save_params, save_persistables and save_inference_model
exe.run(pserver_prog)
if training_role == "TRAINER":
main_program = t.get_trainer_program()
exe.run(main_program)
In the example above, each PServer fetches the parameters saved by trainer 0 via HDFS commands and obtains the PServer's :code:`fluid.Program` from the configuration. PaddlePaddle Fluid finds the persistable variables among all model variables of this :code:`fluid.Program`, i.e. :code:`pserver_startup`, and loads each of them from the directory specified by :code:`path`.
@@ -27,23 +27,6 @@ How to save model variables
The model variables we need to save differ depending on the application. For example, if we just want to save the model for future predictions, saving the model parameters is enough. But if we need to save a checkpoint from which the current training can later be recovered, we should save all the persistable variables, and even record the current epoch and step id, because some model variables, while not parameters, are still essential for model training.
Difference between save_vars, save_params, save_persistables and save_inference_model
######################################################################################
1. :code:`save_inference_model` prunes the inference model based on :code:`feeded_var_names` and :code:`target_vars`, then saves the ``__model__`` file of the pruned program together with the persistable variables in the program.
2. :code:`save_persistables` does not save the model; it saves all the persistable variables in the program.
3. :code:`save_params` does not save the model; it saves all the parameters in the program.
4. :code:`save_vars` does not save the model; it saves the variables specified by the user.
:code:`save_persistables` is useful for incremental training or checkpointing: it comprehensively saves the persistable variables in the program, such as parameter variables and optimizer variables. If you need incremental training or checkpointing, choose this one.
:code:`save_inference_model` is useful for inference: it saves the persistable variables and the pruned program. If you need the program and variables for follow-up high-performance inference, choose this one.
:code:`save_vars` and :code:`save_params` are only needed in particular cases. We assume you already know the purpose of these APIs; they are not recommended for normal use. A comparison of the common cases is sketched after this list.
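A minimal sketch contrasting the three commonly used save APIs follows. The toy network, the variable names (``image``, ``label``, ``prediction``), and the output directories are hypothetical placeholders added here for illustration, not part of the original documentation:

.. code-block:: python

    import paddle.fluid as fluid

    # A toy network; every name here is an illustrative placeholder.
    image = fluid.data(name='image', shape=[None, 784], dtype='float32')
    label = fluid.data(name='label', shape=[None, 1], dtype='int64')
    prediction = fluid.layers.fc(input=image, size=10, act='softmax')
    loss = fluid.layers.mean(fluid.layers.cross_entropy(input=prediction, label=label))
    fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)

    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(fluid.default_startup_program())

    # Checkpoint for incremental training: parameters plus optimizer state.
    fluid.io.save_persistables(exe, "./checkpoint", fluid.default_main_program())

    # Pruned program plus persistable variables, ready for inference deployment.
    fluid.io.save_inference_model("./inference_model", ['image'], [prediction], exe)

    # Parameters only (no optimizer state); rarely needed directly.
    fluid.io.save_params(exe, "./params", fluid.default_main_program())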
Save the model to make predictions for new samples
=====================================================
@@ -143,8 +126,8 @@ In the above example, by calling the :code:`fluid.io.save_persistables` function
path = "./models"
startup_prog = fluid.default_startup_program()
exe.run(startup_prog)
fluid.io.load_persistables(exe, path, startup_prog)
main_prog = fluid.default_main_program()
fluid.io.load_persistables(exe, path, main_prog)
exe.run(main_prog)
In the above example, by calling the :code:`fluid.io.load_persistables` function, PaddlePaddle Fluid will find persistable variables among all model variables in the default :code:`fluid.Program`, i.e. :code:`main_prog`, and load them one by one from the specified :code:`path` directory to continue training.
@@ -196,23 +179,23 @@ For the PServer to be loaded with parameters during training, for example:
exe = fluid.Executor(fluid.CPUPlace())
path = "./models"
pserver_endpoints = "127.0.0.1:1001,127.0.0.1:1002"
trainers = 4
training_role = "PSERVER"
config = fluid.DistributeTranspilerConfig()
t = fluid.DistributeTranspiler(config=config)
t.transpile(trainer_id, pservers=pserver_endpoints, trainers=trainers, sync_mode=True)
if training_role == "PSERVER":
current_endpoint = "127.0.0.1:1001"
pserver_prog = t.get_pserver_program(current_endpoint)
pserver_startup = t.get_startup_program(current_endpoint, pserver_prog)
exe.run(pserver_startup)
fluid.io.load_persistables(exe, path, pserver_startup)
exe.run(pserver_prog)
if training_role == "TRAINER":
main_program = t.get_trainer_program()
exe.run(main_program)
pserver_endpoints = "127.0.0.1:1001,127.0.0.1:1002"
trainers = 4
training_role = "PSERVER"
current_endpoint = "127.0.0.1:1002"
config = fluid.DistributeTranspilerConfig()
t = fluid.DistributeTranspiler(config=config)
t.transpile(trainer_id, pservers=pserver_endpoints, trainers=trainers, sync_mode=True, current_endpoint=current_endpoint)
if training_role == "PSERVER":
pserver_prog = t.get_pserver_program(current_endpoint)
pserver_startup = t.get_startup_program(current_endpoint, pserver_prog)
exe.run(pserver_startup)
fluid.io.load_persistables(exe, path, pserver_startup)
exe.run(pserver_prog)
if training_role == "TRAINER":
main_program = t.get_trainer_program()
exe.run(main_program)
In the above example, each PServer obtains the parameters saved by trainer 0 by calling HDFS commands, and obtains the PServer's :code:`fluid.Program` from the configuration. PaddlePaddle Fluid will find all persistable variables among the model variables of this :code:`fluid.Program`, i.e. :code:`pserver_startup`, and load them from the specified :code:`path` directory.
@@ -6,62 +6,359 @@ Fleet
.. py:class:: paddle.distributed.fleet.Fleet
Fleet is the unified API for PaddlePaddle distributed training; after importing fleet and performing a simple initialization, you can quickly start large-scale distributed training with PaddlePaddle.
.. py:method:: init(role_maker=None, is_collective=False)
Initialize fleet using a RoleMaker or other configuration.
Parameters:
role_maker (RoleMakerBase) an already initialized PaddleCloudRoleMaker or UserDefinedRoleMaker
is_collective (bool) if role_maker is not specified, the init method can create a RoleMaker by itself: is_collective=True creates it in Collective mode, while is_collective=False creates it in ParameterServer mode
Returns: None
**Code Example 1**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
**Code Example 2**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init(is_collective=True)
**Code Example 3**
.. code-block:: python
import paddle.distributed.fleet as fleet
role = fleet.PaddleCloudRoleMaker()
fleet.init(role)
.. py:method:: is_first_worker()
Returns whether the current node is the first worker node, by checking whether the current worker_index is 0: returns True if it is 0, otherwise False.
Returns: True/False
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.is_first_worker()
.. py:method:: worker_index()
Returns the index of the current node; each worker node is assigned a unique ID within [0, worker_num-1].
Returns: int
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.worker_index()
.. py:method:: worker_num()
Returns the number of worker nodes among all current training nodes.
Returns: int
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.worker_num()
.. py:method:: is_worker()
Returns whether the current node is a worker node.
Returns: True/False
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.is_worker()
.. py:method:: worker_endpoints(to_string=False)
Returns the IP and port information of all worker nodes, as a list by default or as a single string when to_string=True (see the sketch after the example below).
Returns: list/string
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.worker_endpoints()
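A minimal sketch of the ``to_string`` parameter, under the assumption that ``to_string=True`` joins the endpoints into a single comma-separated string; the example addresses are hypothetical:

.. code-block:: python

    import paddle.distributed.fleet as fleet

    fleet.init()
    # Default: a list such as ["127.0.0.1:6170", "127.0.0.1:6171"]
    endpoints_list = fleet.worker_endpoints()
    # With to_string=True: a string such as "127.0.0.1:6170,127.0.0.1:6171"
    endpoints_str = fleet.worker_endpoints(to_string=True)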
.. py:method:: server_num()
**Note:**
**This method takes effect only in ParameterServer mode.**
Returns the number of all current server nodes.
Returns: int
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.server_num()
.. py:method:: server_index()
**Note:**
**This method takes effect only in ParameterServer mode.**
Returns the index of the current node; each server node is assigned a unique ID within [0, server_num-1].
Returns: int
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.server_index()
.. py:method:: server_endpoints(to_string=False)
**Note:**
**This method takes effect only in ParameterServer mode.**
Returns the IP and port information of all server nodes.
Returns: list/string
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.server_endpoints()
.. py:method:: is_server()
**Note:**
**This method takes effect only in ParameterServer mode.**
Returns whether the current node is a server node.
Returns: True/False
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.is_server()
.. py:method:: barrier_worker()
Invokes collective communication to force all workers to wait for each other once at this point.
Returns: None
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.barrier_worker()
.. py:method:: init_worker()
Initialization of the worker node before training, including the communication module, parameter synchronization, etc.
Returns: None
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.init_worker()
.. py:method:: init_server(*args, **kwargs)
Initialization of the server node, including server-side parameter initialization, model loading, etc.
Returns: None
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.init_server()
.. py:method:: run_server()
Runs the server node; this command starts the ParameterServer process and keeps it resident until training ends.
Returns: None
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.init_server()
fleet.run_server()
.. py:method:: stop_worker()
Stops the currently running worker node.
Returns: None
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
fleet.init()
fleet.init_worker()
"..."
fleet.stop_worker()
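Taken together, the methods above form the typical ParameterServer training skeleton sketched below. The network construction and training loop are elided; this is a sketch of the usual call order under those assumptions, not a complete program:

.. code-block:: python

    import paddle.distributed.fleet as fleet

    fleet.init()  # ParameterServer mode (is_collective=False)

    # ... build the network and wrap the optimizer with
    # fleet.distributed_optimizer(...) here ...

    if fleet.is_server():
        fleet.init_server()
        fleet.run_server()   # starts the server process; stays resident until training ends
    elif fleet.is_worker():
        fleet.init_worker()
        # ... run the training loop ...
        fleet.stop_worker()  # shut this worker down once training is done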
.. py:method:: save_inference_model(executor, dirname, feeded_var_names, target_vars, main_program=None, export_for_deployment=True)
Prunes the given ``main_program`` to build an ``Inference Program`` dedicated to prediction (see :ref:`api_guide_Program` for the meaning of ``Program``). The resulting ``Inference Program`` and all of its corresponding parameters are saved to the directory specified by ``dirname``.
Parameters:
- **executor** (Executor) the ``executor`` used to save the inference model; see :ref:`api_guide_executor`
- **dirname** (str) the directory in which the inference model structure and parameters are saved
- **feeded_var_names** (list[str]) a list of strings containing the names of all variables whose data must be fed to the Inference Program at prediction time (i.e. the names of all input variables)
- **target_vars** (list[Variable]) a list of ``Variable`` (see :ref:`api_guide_Program`) containing all output variables of the model; the prediction results are obtained from these output variables
- **main_program** (Program, optional) the ``main_program`` from which the dedicated ``Inference Program`` is built; if None, the global default ``_main_program_`` is used. Default: None
- **export_for_deployment** (bool, optional) if True, the Program specified by ``main_program`` is modified into one that only supports direct inference deployment; otherwise, more information is stored to facilitate optimization and retraining. Currently only True is supported, and the default value is True
Returns: None
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
import paddle.fluid as fluid
fleet.init()
# build net
# fleet.distributed_optimizer(...)
exe = fluid.Executor(fluid.CPUPlace())
fleet.save_inference_model(exe, "dirname", ["feednames1"], [acc, loss], fluid.default_main_program())
.. py:method:: save_persistables(executor, dirname, main_program=None)
Saves the full set of model parameters (persistable variables).
Parameters:
- **executor** (Executor) the ``executor`` used to save the persistable variables; see :ref:`api_guide_executor`
- **dirname** (str) the directory in which the persistable variables are stored
- **main_program** (Program, optional) the Program whose persistable variables are to be saved (see :ref:`api_guide_Program` for the meaning of ``Program``); if None, default_main_program is used. Default: None
Returns: None
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
import paddle.fluid as fluid
fleet.init()
# build net
# fleet.distributed_optimizer(...)
exe = fluid.Executor(fluid.CPUPlace())
fleet.save_persistables(exe, "dirname", fluid.default_main_program())
.. py:method:: distributed_optimizer(optimizer, strategy=None)
Splits and optimizes the model according to the distributed parallel strategy.
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
role = fleet.role_maker.PaddleCloudRoleMaker(is_collective=True)
fleet.init(role)
strategy = fleet.DistributedStrategy()
optimizer = paddle.optimizer.SGD(learning_rate=0.001)
optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
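The returned object is then used like an ordinary optimizer. A hedged continuation of the example above, where ``avg_cost`` stands for a loss variable assumed to have been built elsewhere by the user's network definition:

.. code-block:: python

    # Continuation of the example above; `avg_cost` is a hypothetical loss
    # variable produced elsewhere by the user's network definition.
    optimizer.minimize(avg_cost)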
.. py:method:: distributed_model(model)
@@ -5,6 +5,41 @@ PaddleCloudRoleMaker
.. py:class:: paddle.distributed.fleet.PaddleCloudRoleMaker
PaddleCloudRoleMaker is an interface that initializes the distributed configuration from distributed-training information read from environment variables.
It automatically initializes the distributed training environment according to the user's configuration in environment variables; currently PaddleCloudRoleMaker supports initialization for both ParameterServer and Collective distributed training modes.
**Code Example**
.. code-block:: python
import os
import paddle.distributed.fleet as fleet
os.environ["PADDLE_PSERVER_NUMS"] = "2"
os.environ["PADDLE_TRAINERS_NUM"] = "2"
os.environ["POD_IP"] = "127.0.0.1"
os.environ["PADDLE_PORT"] = "36001"
os.environ["TRAINING_ROLE"] = "PSERVER"
os.environ["PADDLE_PSERVERS_IP_PORT_LIST"] = \
"127.0.0.1:36001,127.0.0.2:36001"
os.environ["PADDLE_TRAINER_ID"] = "0"
fleet.PaddleCloudRoleMaker(is_collective=False)
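For Collective training the configuration is likewise read from environment variables. A minimal sketch, under the assumption that the collective-mode variables below (normally exported by the distributed launch utility rather than set by hand) are the ones consulted:

.. code-block:: python

    import os
    import paddle.distributed.fleet as fleet

    # Normally exported by the distributed launcher, not set by hand.
    os.environ["PADDLE_TRAINER_ID"] = "0"
    os.environ["PADDLE_TRAINER_ENDPOINTS"] = "127.0.0.1:36001,127.0.0.2:36001"
    os.environ["PADDLE_CURRENT_ENDPOINT"] = "127.0.0.1:36001"

    fleet.PaddleCloudRoleMaker(is_collective=True)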
.. py:method:: to_string()
Outputs the current environment configuration as a string.
Returns: string
**Code Example**:
.. code-block:: python
import paddle.distributed.fleet as fleet
role = fleet.PaddleCloudRoleMaker(is_collective=False)
role.to_string()
@@ -5,6 +5,45 @@ UserDefinedRoleMaker
.. py:class:: paddle.distributed.fleet.UserDefinedRoleMaker
UserDefinedRoleMaker is an interface that initializes the distributed configuration from distributed-training information given in user-defined parameters.
It automatically initializes the distributed training environment according to the user's custom configuration; currently UserDefinedRoleMaker supports initialization for both ParameterServer and Collective distributed training modes.
**Code Example**
.. code-block:: python
import paddle.distributed.fleet as fleet
from paddle.distributed.fleet.base.role_maker import Role
fleet.UserDefinedRoleMaker(
current_id=0,
role=Role.SERVER,
worker_num=2,
server_endpoints=["127.0.0.1:36011", "127.0.0.1:36012"])
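A sketch of the corresponding worker-side configuration, assuming ``Role.WORKER`` is the worker counterpart of ``Role.SERVER`` in the same enum:

.. code-block:: python

    import paddle.distributed.fleet as fleet
    from paddle.distributed.fleet.base.role_maker import Role

    # Same cluster description as above, but this process acts as worker 0.
    fleet.UserDefinedRoleMaker(
        current_id=0,
        role=Role.WORKER,
        worker_num=2,
        server_endpoints=["127.0.0.1:36011", "127.0.0.1:36012"])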
.. py:method:: to_string()
Outputs the current role configuration as a string.
Returns: string
**Code Example**:
.. code-block:: python
import paddle.distributed.fleet as fleet
from paddle.distributed.fleet.base.role_maker import Role
role = fleet.UserDefinedRoleMaker(
current_id=0,
role=Role.SERVER,
worker_num=2,
server_endpoints=["127.0.0.1:36011", "127.0.0.1:36012"])
role.to_string()