diff --git a/doc/fluid/advanced_guide/distributed_training/fleet_api_howto_en.rst b/doc/fluid/advanced_guide/distributed_training/fleet_api_howto_en.rst
new file mode 100644
index 0000000000000000000000000000000000000000..e64c81210d825690af0fbb92f9ed96959e129aae
--- /dev/null
+++ b/doc/fluid/advanced_guide/distributed_training/fleet_api_howto_en.rst
@@ -0,0 +1,147 @@
+Fleet
+=====
+
+**Fleet** is the high-level API for distributed training in PaddlePaddle. The name **Fleet** suggests a large crowd of ships working together to finish a large-scale job. Fleet is designed to balance ease of use with algorithmic extensibility, while remaining highly efficient. First, a user can turn single-machine PaddlePaddle code into distributed code **within ten lines of code**. Second, different distributed algorithms can be configured easily through a **distributed strategy** in the Fleet API. Finally, distributed training with Fleet is **extremely fast**.
+
+**Note: all the examples here should be run with the develop branch of Paddle.**
+
+Fleet is Highly Efficient
+-------------------------
+
+Training deep neural networks with the Fleet API is highly efficient in PaddlePaddle. We benchmark several standard models here.
+
+Parameter Server Training
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The parameter server training benchmark covers click-through rate estimation on the `Criteo Dataset `_ and semantic representation learning on the `One-billion word Dataset `_. Details of the hardware and software used in this benchmark can be found in `parameter server benchmark `_.
+

` +:raw-html-m2r:`` + + +.. raw:: html + +

+
+Collective Training
+^^^^^^^^^^^^^^^^^^^
+
+Collective training is usually used for GPU training in PaddlePaddle. The benchmark of collective training with Fleet is as follows. Details of the hardware and software used in this benchmark can be found in `benchmark environment `_.
+

` +:raw-html-m2r:`` + + +.. raw:: html + +

+
+Mixed precision accelerated collective training throughput
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: src/fleet_collective_mixed_precision_training.png
+
+Fleet is Easy To Use
+--------------------
+
+Fleet is easy to use for both collective training and parameter server training. Here is an example of collective training with Fleet.
+
+Local Single GPU Card Training
+
+.. code-block:: python
+
+    import os
+    import paddle.fluid as fluid
+    from utils import gen_data
+    from nets import mlp
+
+    input_x = fluid.layers.data(name="x", shape=[32], dtype='float32')
+    input_y = fluid.layers.data(name="y", shape=[1], dtype='int64')
+
+    cost = mlp(input_x, input_y)
+    optimizer = fluid.optimizer.SGD(learning_rate=0.01)
+    optimizer.minimize(cost, fluid.default_startup_program())
+
+    train_prog = fluid.default_main_program()
+    gpu_id = int(os.environ.get('FLAGS_selected_gpus', 0))
+    use_gpu = True  # set to False to run on CPU
+    place = fluid.CUDAPlace(gpu_id) if use_gpu else fluid.CPUPlace()
+
+    exe = fluid.Executor(place)
+    exe.run(fluid.default_startup_program())
+
+    step = 1001
+    for i in range(step):
+        cost_val = exe.run(program=train_prog, feed=gen_data(), fetch_list=[cost.name])
+
+Local Multiple GPU Cards Training
+
+.. code-block:: python
+
+    import os
+    import paddle.fluid as fluid
+    from utils import gen_data
+    from nets import mlp
+    from paddle.fluid.incubate.fleet.collective import fleet, DistributedStrategy  # new line 1
+    from paddle.fluid.incubate.fleet.base import role_maker  # new line 2
+
+    input_x = fluid.layers.data(name="x", shape=[32], dtype='float32')
+    input_y = fluid.layers.data(name="y", shape=[1], dtype='int64')
+
+    cost = mlp(input_x, input_y)
+    optimizer = fluid.optimizer.SGD(learning_rate=0.01)
+
+    role = role_maker.PaddleCloudRoleMaker(is_collective=True)  # new line 3
+    fleet.init(role)  # new line 4
+
+    optimizer = fleet.distributed_optimizer(optimizer, strategy=DistributedStrategy())  # new line 5
+    optimizer.minimize(cost, fluid.default_startup_program())
+
+    train_prog = fleet.main_program  # change line 1
+    place = fluid.CUDAPlace(int(os.environ['FLAGS_selected_gpus']))  # change line 2
+
+    exe = fluid.Executor(place)
+    exe.run(fluid.default_startup_program())
+
+    step = 1001
+    for i in range(step):
+        cost_val = exe.run(program=train_prog, feed=gen_data(), fetch_list=[cost.name])
+
+Launch command:
+
+.. code-block:: bash
+
+    python -m paddle.distributed.launch --selected_gpus="0,1,2,3" trainer.py
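+
+The snippets above import ``mlp`` and ``gen_data`` from local helper files (``nets.py`` and ``utils.py``) that are not shown in this document. The following is only a minimal sketch of what such helpers might look like; the layer sizes, batch size, and random data are illustrative assumptions rather than the code used in the benchmarks above.
+
+.. code-block:: python
+
+    # nets.py -- a hypothetical two-layer MLP that matches the 32-dim "x" input above
+    import paddle.fluid as fluid
+
+    def mlp(input_x, input_y, hid_dim=128, label_dim=2):
+        fc_1 = fluid.layers.fc(input=input_x, size=hid_dim, act='tanh')
+        fc_2 = fluid.layers.fc(input=fc_1, size=hid_dim, act='tanh')
+        prediction = fluid.layers.fc(input=fc_2, size=label_dim, act='softmax')
+        cost = fluid.layers.cross_entropy(input=prediction, label=input_y)
+        return fluid.layers.mean(x=cost)
+
+    # utils.py -- a hypothetical feed generator that returns one random batch
+    import numpy as np
+
+    def gen_data(batch_size=32):
+        return {"x": np.random.random(size=(batch_size, 32)).astype('float32'),
+                "y": np.random.randint(2, size=(batch_size, 1)).astype('int64')}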
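+
+Fleet follows a similar pattern for parameter server training. The sketch below only illustrates that flow and is not the exact code behind the benchmarks above; it assumes the transpiler-based parameter server module of Fleet and a PaddleCloud role maker, and reuses the hypothetical ``mlp`` and ``gen_data`` helpers.
+
+.. code-block:: python
+
+    import paddle.fluid as fluid
+    from utils import gen_data
+    from nets import mlp
+    # assumed module path for the parameter server flavour of Fleet
+    from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import fleet
+    from paddle.fluid.incubate.fleet.base import role_maker
+
+    input_x = fluid.layers.data(name="x", shape=[32], dtype='float32')
+    input_y = fluid.layers.data(name="y", shape=[1], dtype='int64')
+    cost = mlp(input_x, input_y)
+
+    role = role_maker.PaddleCloudRoleMaker()  # reads server/trainer info from environment variables
+    fleet.init(role)
+
+    optimizer = fluid.optimizer.SGD(learning_rate=0.01)
+    optimizer = fleet.distributed_optimizer(optimizer)
+    optimizer.minimize(cost)
+
+    if fleet.is_server():
+        # parameter server process: hold parameters and apply updates
+        fleet.init_server()
+        fleet.run_server()
+    elif fleet.is_worker():
+        # trainer process: run forward/backward and send gradients to the servers
+        fleet.init_worker()
+        exe = fluid.Executor(fluid.CPUPlace())
+        exe.run(fleet.startup_program)
+        for i in range(1001):
+            exe.run(program=fleet.main_program, feed=gen_data(), fetch_list=[cost.name])
+        fleet.stop_worker()
+
+The environment variables that identify servers and trainers are normally set by the launch platform; see the parameter server examples linked below for complete, runnable versions.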
+
+More Examples
+-------------
+
+* `Click Through Estimation `_
+* `Distribute CTR `_
+* `DeepFM `_
+* `Semantic Matching `_
+* `Word2Vec `_
+* `Resnet50 on Imagenet `_
+* `Transformer on En-De `_
+* `Bert on English Wikipedia `_
diff --git a/doc/fluid/advanced_guide/distributed_training/src/fleet_collective_benchmark_refine3.png b/doc/fluid/advanced_guide/distributed_training/src/fleet_collective_benchmark_refine3.png
new file mode 100644
index 0000000000000000000000000000000000000000..b75d7f53f3820a3b6d4b5c9cbbbd9b891cec5a02
Binary files /dev/null and b/doc/fluid/advanced_guide/distributed_training/src/fleet_collective_benchmark_refine3.png differ
diff --git a/doc/fluid/advanced_guide/distributed_training/src/fleet_collective_mixed_precision_training.png b/doc/fluid/advanced_guide/distributed_training/src/fleet_collective_mixed_precision_training.png
new file mode 100644
index 0000000000000000000000000000000000000000..3fa1c0a217eb85e6b3073997941045b4b1869eaf
Binary files /dev/null and b/doc/fluid/advanced_guide/distributed_training/src/fleet_collective_mixed_precision_training.png differ
diff --git a/doc/fluid/advanced_guide/distributed_training/src/fleet_ps_benchmark_refine.png b/doc/fluid/advanced_guide/distributed_training/src/fleet_ps_benchmark_refine.png
new file mode 100644
index 0000000000000000000000000000000000000000..16f0f5405973fa281a3ae467969cea89ab412afb
Binary files /dev/null and b/doc/fluid/advanced_guide/distributed_training/src/fleet_ps_benchmark_refine.png differ