Created by: guru4elephant
PR types: New features

PR changes: APIs

Describe:
Unify distributed_strategy in paddle.fleet: all distributed training strategies should be configured through this class.
Propose to formalize the paddle.fleet API as follows (a short usage sketch follows the layout below):
- base
  - distributed_strategy.py
    - all distributed strategies are implemented here, including various kinds of distributed optimizers
    - the distributed strategy can be serialized into a protobuf file
    - distributed job info is defined in distributed_strategy.py, including distributed runtime information such as endpoints, node num, etc.
    - a distributed training job can be configured through the serialized distributed strategy and distributed job info
  - fleet_base.py
    - the only entry point for distributed training in Paddle
    - singleton instance
    - a Fleet() object is initialized through a RoleMaker
    - Util and DistributedOptimizer (collective, parameter server, or another distributed optimizer) can be initialized by the RoleMaker
  - obj_creator.py
    - obj_creator helps to create Util and DistributedOptimizer
  - role_maker.py
    - PaddleCloudRoleMaker and UserDefinedRoleMaker are currently defined
    - different kinds of RoleMaker are specified through the arguments of `__init__(self, role_maker)`
  - util_base.py
    - implements all Util classes; the Util base class can be inherited so that a third-party Util class can be used
- collective
  - collective_distributed_optimizer.py
    - defines collective training
- parameter_server
  - ps_distributed_optimizer.py
    - defines parameter server training
- metrics
  - metric.py
    - distributed metrics
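To make the proposed layout concrete, here is a minimal usage sketch of how the pieces above (RoleMaker, Fleet, DistributedStrategy) could fit together. The import paths, the fleet.init signature, and the serialization helper name are assumptions made for illustration, not a confirmed API.

```python
# Hypothetical usage sketch of the proposed paddle.fleet layout.
# Import paths, fleet.init, and save_to_prototxt are assumptions, not a final API.
import paddle.fleet as fleet
from paddle.fleet.base.role_maker import PaddleCloudRoleMaker
from paddle.fleet.base.distributed_strategy import DistributedStrategy

# The RoleMaker decides what this node does in the cluster
# (e.g. worker or pserver), here by reading PaddleCloud environment variables.
role = PaddleCloudRoleMaker()

# Fleet is the single entry point for distributed training; it is initialized
# through the RoleMaker, which also creates Util and the DistributedOptimizer.
fleet.init(role)

# All distributed training configuration lives in DistributedStrategy and,
# together with the distributed job info, can be serialized to protobuf.
strategy = DistributedStrategy()
strategy.save_to_prototxt("dist_strategy.prototxt")  # assumed helper name
```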
What is paddle.fleet?
- paddle.fleet is the official entry point for distributed training in Paddle. The motivation of paddle.fleet is to make distributed training efficient and easy to configure.
- RoleMaker, DistributedStrategy, and DistributedOptimizer are the key concepts to understand in paddle.fleet:
  - RoleMaker: the role of a node in a distributed cluster, such as worker or pserver, is specified by the RoleMaker.
  - DistributedStrategy: paddle.fleet supports many distributed training algorithms and scalable configurations, all defined in DistributedStrategy, such as hierarchical all-reduce, deep gradient compression, and async parameter server training.
  - DistributedOptimizer: in general, three kinds of optimizers exist in Paddle: 1) commonly used local optimizers, such as SGD and Adam; 2) decorators of a local optimizer, such as RecomputeOptimizer, PipelineOptimizer, and LookAheadOptimizer, which wrap a local optimizer as an inner optimizer; 3) distributed decorators of 1), 2), or a combination of 1) with different 2), such as the collective distributed optimizer and the parameter server distributed optimizer. paddle.fleet provides the third kind of optimizer, which users can define; a usage sketch follows below.
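As a rough illustration of how the three kinds of optimizers compose, the sketch below wraps a local optimizer in a decorator and then in the fleet distributed optimizer. SGD and RecomputeOptimizer are the existing fluid classes; the fleet.DistributedStrategy constructor and the fleet.distributed_optimizer call are assumptions based on this proposal, not a confirmed signature.

```python
# Sketch of the three optimizer kinds; the fleet.* calls are assumptions
# based on this proposal, not a confirmed API.
import paddle.fluid as fluid
import paddle.fleet as fleet

# 1) a commonly used local optimizer
sgd = fluid.optimizer.SGD(learning_rate=0.01)

# 2) a decorator of the local optimizer; it wraps SGD as its inner optimizer
#    (RecomputeOptimizer additionally needs checkpoints set before minimize)
recompute = fluid.optimizer.RecomputeOptimizer(sgd)

# 3) the distributed decorator provided by paddle.fleet: it takes 1), 2), or a
#    combination of them plus a DistributedStrategy, and returns a collective
#    or parameter server distributed optimizer depending on the RoleMaker.
strategy = fleet.DistributedStrategy()
dist_opt = fleet.distributed_optimizer(recompute, strategy=strategy)
# dist_opt.minimize(loss)  # afterwards used like any other optimizer
```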