Unverified    Commit 7c6f5dd8    Authored by: Dong Daxiang    Committed by: GitHub

Update fleet_api_howto_cn.rst

test=document_preview

Parent: 85055bad
.. _fleet_api_howto_cn:

Distributed Training with the Fleet API
========================================

Fleet API Design Notes
----------------------

Fleet is PaddlePaddle's high-level API for distributed training. The name Fleet derives from PaddlePaddle, evoking the many double-paddled boats of a fleet working together. Fleet's design balances ease of use against algorithmic extensibility: a user can switch from a single-machine training program to a distributed one by adding only a few lines of code, and distributed training algorithms can also be defined flexibly through the Fleet API. The design rationale is described in the `Fleet API design document <https://github.com/PaddlePaddle/Fleet/blob/develop/README.md>`_. The Fleet API currently lives under paddle.fluid.incubate; once its functionality is complete it will move to paddle.fluid, so stay tuned.

Fleet API Quick Start Examples
------------------------------

Below, a single model is used to demonstrate the two most common Fleet API usage scenarios, so that users have a template for getting started quickly. The source code of the quick-start examples can be found at `Fleet Quick Start <https://github.com/PaddlePaddle/Fleet/tree/develop/examples/quick-start>`_.

Suppose we define an MLP network as follows:

.. code-block:: python

    import paddle.fluid as fluid

    def mlp(input_x, input_y, hid_dim=128, label_dim=2):
        fc_1 = fluid.layers.fc(input=input_x, size=hid_dim, act='tanh')
        fc_2 = fluid.layers.fc(input=fc_1, size=hid_dim, act='tanh')
        prediction = fluid.layers.fc(input=[fc_2], size=label_dim, act='softmax')
        cost = fluid.layers.cross_entropy(input=prediction, label=input_y)
        avg_cost = fluid.layers.mean(x=cost)
        return avg_cost

Define a reader that generates data in memory as follows:

.. code-block:: python

    import numpy as np

    def gen_data():
        return {"x": np.random.random(size=(128, 32)).astype('float32'),
                "y": np.random.randint(2, size=(128, 1)).astype('int64')}

Single-machine Trainer
^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

    import paddle.fluid as fluid
    from nets import mlp
    from utils import gen_data

    input_x = fluid.layers.data(name="x", shape=[32], dtype='float32')
    input_y = fluid.layers.data(name="y", shape=[1], dtype='int64')

    cost = mlp(input_x, input_y)
    optimizer = fluid.optimizer.SGD(learning_rate=0.01)
    optimizer.minimize(cost)
    place = fluid.CUDAPlace(0)

    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())
    step = 1001
    for i in range(step):
        cost_val = exe.run(feed=gen_data(), fetch_list=[cost.name])
        print("step%d cost=%f" % (i, cost_val[0]))

Parameter Server Training
^^^^^^^^^^^^^^^^^^^^^^^^^^

The parameter server approach is well suited to parallel training of simple models on large-scale data. Based on the single-machine model definition above, an example of training with a Parameter Server looks like this:

.. code-block:: python

    import paddle.fluid as fluid
    from nets import mlp
    from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import fleet
    from paddle.fluid.incubate.fleet.base import role_maker
    from utils import gen_data

    input_x = fluid.layers.data(name="x", shape=[32], dtype='float32')
    input_y = fluid.layers.data(name="y", shape=[1], dtype='int64')

    cost = mlp(input_x, input_y)
    optimizer = fluid.optimizer.SGD(learning_rate=0.01)

    role = role_maker.PaddleCloudRoleMaker()
    fleet.init(role)
    optimizer = fleet.distributed_optimizer(optimizer)
    optimizer.minimize(cost)

    if fleet.is_server():
        fleet.init_server()
        fleet.run_server()
    elif fleet.is_worker():
        place = fluid.CPUPlace()
        exe = fluid.Executor(place)
        exe.run(fluid.default_startup_program())
        step = 1001
        for i in range(step):
            cost_val = exe.run(
                program=fluid.default_main_program(),
                feed=gen_data(),
                fetch_list=[cost.name])
            print("worker_index: %d, step%d cost = %f" %
                  (fleet.worker_index(), i, cost_val[0]))

Collective Training
^^^^^^^^^^^^^^^^^^^

Collective training is typically used for multi-machine, multi-GPU training and is most common when training complex models. Based on the single-machine model definition above, an example of distributed training with the collective method looks like this:

.. code-block:: python

    import paddle.fluid as fluid
    from nets import mlp
    from paddle.fluid.incubate.fleet.collective import fleet
    from paddle.fluid.incubate.fleet.base import role_maker
    from utils import gen_data

    input_x = fluid.layers.data(name="x", shape=[32], dtype='float32')
    input_y = fluid.layers.data(name="y", shape=[1], dtype='int64')

    cost = mlp(input_x, input_y)
    optimizer = fluid.optimizer.SGD(learning_rate=0.01)

    role = role_maker.PaddleCloudRoleMaker(is_collective=True)
    fleet.init(role)
    optimizer = fleet.distributed_optimizer(optimizer)
    optimizer.minimize(cost)
    place = fluid.CUDAPlace(0)

    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())
    step = 1001
    for i in range(step):
        cost_val = exe.run(
            program=fluid.default_main_program(),
            feed=gen_data(),
            fetch_list=[cost.name])
        print("worker_index: %d, step%d cost = %f" %
              (fleet.worker_index(), i, cost_val[0]))

More Examples
-------------

`Click-through rate prediction <https://github.com/PaddlePaddle/Fleet/tree/develop/examples/ctr>`_

`Semantic matching <https://github.com/PaddlePaddle/Fleet/tree/develop/examples/semantic_matching>`_

`Word vector learning <https://github.com/PaddlePaddle/Fleet/tree/develop/examples/word2vec>`_

`Image classification with ResNet50 <https://github.com/PaddlePaddle/Fleet/tree/develop/examples/resnet50>`_

`Machine translation with Transformer <https://github.com/PaddlePaddle/Fleet/tree/develop/examples/transformer>`_

`Semantic representation learning with BERT <https://github.com/PaddlePaddle/Fleet/tree/develop/examples/bert>`_

Fleet API Interface Reference
-----------------------------

Fleet API Interfaces
^^^^^^^^^^^^^^^^^^^^

* init(role_maker=None)

  * Initializes fleet. It must be called before any other fleet interface and defines the multi-machine environment configuration.

* is_worker()

  * Used in Parameter Server training. Returns True if the current node is a worker node, otherwise False.

* is_server(model_dir=None)

  * Used in Parameter Server training. Returns True if the current node is a server node, otherwise False.

* init_server()

  * In Parameter Server training, fleet loads the model parameters saved in model_dir to initialize the parameter server.

* run_server()

  * Used in Parameter Server training to start the server-side service.

* init_worker()

  * Used in Parameter Server training to start the worker-side service.

* stop_worker()

  * Stops the worker after training finishes.

* distributed_optimizer(optimizer, strategy=None)

  * Decorator for distributed optimization algorithms. The user passes in a single-machine optimizer and a distributed training strategy, and a distributed optimizer is returned.
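
To show how these interfaces fit together on the Parameter Server side, here is a minimal sketch of the typical call sequence, including init_worker() and stop_worker(), which the quick-start example above omits. It is an illustration rather than the canonical example: the tiny fc network, the random numpy batches, and the step count are placeholders, and it assumes the process is launched by a script that sets the cluster environment variables PaddleCloudRoleMaker expects.

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid
    from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import fleet
    from paddle.fluid.incubate.fleet.base import role_maker

    # A minimal stand-in network, similar in spirit to the quick-start mlp().
    input_x = fluid.layers.data(name="x", shape=[32], dtype='float32')
    input_y = fluid.layers.data(name="y", shape=[1], dtype='int64')
    prediction = fluid.layers.fc(input=input_x, size=2, act='softmax')
    cost = fluid.layers.mean(fluid.layers.cross_entropy(input=prediction, label=input_y))

    role = role_maker.PaddleCloudRoleMaker()   # reads cluster info from the launch environment
    fleet.init(role)                           # must be called before other fleet interfaces

    optimizer = fleet.distributed_optimizer(fluid.optimizer.SGD(learning_rate=0.01))
    optimizer.minimize(cost)

    if fleet.is_server():
        fleet.init_server()    # a model_dir could be passed here to warm-start parameters
        fleet.run_server()
    elif fleet.is_worker():
        fleet.init_worker()    # start the worker-side service before training
        exe = fluid.Executor(fluid.CPUPlace())
        exe.run(fluid.default_startup_program())
        for i in range(100):
            exe.run(program=fluid.default_main_program(),
                    feed={"x": np.random.random((128, 32)).astype('float32'),
                          "y": np.random.randint(2, size=(128, 1)).astype('int64')},
                    fetch_list=[cost.name])
        fleet.stop_worker()    # stop the worker once training finishes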

RoleMaker
^^^^^^^^^

* MPISymetricRoleMaker

  * Description: MPISymetricRoleMaker assumes that each node launches two processes, one worker plus one pserver. This RoleMaker requires an MPI environment on the user's cluster.
  * Example:

.. code-block:: python

    from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import fleet
    from paddle.fluid.incubate.fleet.base import role_maker

    role = role_maker.MPISymetricRoleMaker()
    fleet.init(role)

* How to launch:

.. code-block:: shell

    mpirun -np 2 python trainer.py

* PaddleCloudRoleMaker

  * Description: PaddleCloudRoleMaker is a high-level wrapper that supports launching with the paddle.distributed.launch or paddle.distributed.launch_ps scripts.
  * Parameter Server training example:

.. code-block:: python

    from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import fleet
    from paddle.fluid.incubate.fleet.base import role_maker

    role = role_maker.PaddleCloudRoleMaker()
    fleet.init(role)

* How to launch:

.. code-block:: shell

    python -m paddle.distributed.launch_ps --worker_num 2 --server_num 2 trainer.py

* Collective training example:

.. code-block:: python

    from paddle.fluid.incubate.fleet.collective import fleet
    from paddle.fluid.incubate.fleet.base import role_maker

    role = role_maker.PaddleCloudRoleMaker(is_collective=True)
    fleet.init(role)

* How to launch:

.. code-block:: shell

    python -m paddle.distributed.launch trainer.py

* UserDefinedRoleMaker

  * Description: The user defines each node's role, IP, and port information directly.
  * Example:

.. code-block:: python

    from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import fleet
    from paddle.fluid.incubate.fleet.base import role_maker

    # The constructor arguments below are illustrative values; the role, worker count,
    # and server endpoints should match your own cluster.
    role = role_maker.UserDefinedRoleMaker(
        current_id=0,
        role=role_maker.Role.WORKER,
        worker_num=2,
        server_endpoints=["127.0.0.1:36011", "127.0.0.1:36012"])
    fleet.init(role)

Strategy
^^^^^^^^

* Parameter Server Training

  * Sync_mode

* Collective Training

  * LocalSGD
  * ReduceGrad
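
As an illustration of how a strategy object is passed to distributed_optimizer, here is a minimal sketch for the Parameter Server case. It assumes that DistributeTranspilerConfig from paddle.fluid.transpiler serves as the strategy object and that its sync_mode flag selects synchronous versus asynchronous updates; check the Fleet repository for the strategy options that apply to your Paddle version.

.. code-block:: python

    import paddle.fluid as fluid
    from paddle.fluid.transpiler import DistributeTranspilerConfig
    from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import fleet
    from paddle.fluid.incubate.fleet.base import role_maker

    fleet.init(role_maker.PaddleCloudRoleMaker())

    # Assumption: DistributeTranspilerConfig acts as the Parameter Server strategy object,
    # with sync_mode selecting synchronous (True) or asynchronous (False) parameter updates.
    strategy = DistributeTranspilerConfig()
    strategy.sync_mode = False

    optimizer = fluid.optimizer.SGD(learning_rate=0.01)
    optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
    # optimizer.minimize(cost) would follow, exactly as in the quick-start example.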

Fleet Mode
^^^^^^^^^^

* Parameter Server Training

.. code-block:: python

    from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import fleet

* Collective Training

.. code-block:: python

    from paddle.fluid.incubate.fleet.collective import fleet