# Fluid Benchmark

This directory contains several model configurations and tools used to run
Fluid benchmarks for local and distributed training.


## Run the Benchmark

To start, run the following command to get the full help message:

```bash
python fluid_benchmark.py --help
```

Currently supported `--model` arguments include:

* mnist
* resnet
    * you can choose a different dataset using `--data_set cifar10` or
      `--data_set flowers` (see the example after this list).
* vgg
* stacked_dynamic_lstm
* machine_translation
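
For example, a ResNet benchmark on the CIFAR-10 dataset can be launched by combining the flags above (a minimal sketch; it assumes a GPU is available on the node):

```bash
# Benchmark ResNet on CIFAR-10 instead of the default dataset
python fluid_benchmark.py --model resnet --data_set cifar10 --device GPU
```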

* Run the following command to start a benchmark job locally:
    ```bash
      python fluid_benchmark.py --model mnist --device GPU
    ```
    You can choose GPU or CPU training. With GPU training, you can specify
    `--gpus <gpu_num>` to run multi-GPU training.
    You can also run the parameter server in async mode: specify
    `--async_mode` to train the model asynchronously (a combined example follows this list).
* Run distributed training with parameter servers:
    * see [run_fluid_benchmark.sh](https://github.com/PaddlePaddle/Paddle/blob/develop/benchmark/fluid/run_fluid_benchmark.sh) as an example.
    * start parameter servers:
        ```bash
        PADDLE_TRAINING_ROLE=PSERVER PADDLE_PSERVER_PORT=7164 PADDLE_PSERVER_IPS=127.0.0.1 PADDLE_TRAINERS=1 PADDLE_CURRENT_IP=127.0.0.1 PADDLE_TRAINER_ID=0 python fluid_benchmark.py --model mnist  --device GPU --update_method pserver
        sleep 15
        ```
    * start trainers:
        ```bash
        PADDLE_TRAINING_ROLE=TRAINER PADDLE_PSERVER_PORT=7164 PADDLE_PSERVER_IPS=127.0.0.1 PADDLE_TRAINERS=1 PADDLE_CURRENT_IP=127.0.0.1 PADDLE_TRAINER_ID=0 python fluid_benchmark.py --model mnist  --device GPU --update_method pserver
        ```
* Run distributed training using NCCL2
    ```bash
    PADDLE_PSERVER_PORT=7164 PADDLE_TRAINER_IPS=192.168.0.2,192.168.0.3  PADDLE_CURRENT_IP=127.0.0.1 PADDLE_TRAINER_ID=0 python fluid_benchmark.py --model mnist --device GPU --update_method nccl2
    ```
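
As a quick reference, the sketch below combines the flags described above. It assumes the node has 4 GPUs and that `--async_mode` is simply appended to the parameter-server trainer command shown above:

```bash
# Local multi-GPU training on 4 GPUs
python fluid_benchmark.py --model resnet --device GPU --gpus 4

# Parameter-server training with asynchronous updates
PADDLE_TRAINING_ROLE=TRAINER PADDLE_PSERVER_PORT=7164 PADDLE_PSERVER_IPS=127.0.0.1 PADDLE_TRAINERS=1 PADDLE_CURRENT_IP=127.0.0.1 PADDLE_TRAINER_ID=0 python fluid_benchmark.py --model mnist --device GPU --update_method pserver --async_mode
```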

## Prepare the RecordIO file to Achieve Better Performance

Running the following command will generate RecordIO files like "mnist.recordio" under the path
and batch_size you choose. You can use batch_size=1 so that a later reader can change the batch_size
at any time using `fluid.batch`.

```bash
python -c 'from recordio_converter import *; prepare_mnist("data", 1)'
```

## Run Distributed Benchmark on Kubernetes Cluster

You may need to build a Docker image before submitting a cluster job onto Kubernetes, or you will
have to start all those processes manually on each node, which is not recommended.

To build the Docker image, you need to choose a PaddlePaddle "whl" package to run with. You can either
download it from
http://www.paddlepaddle.org/docs/develop/documentation/zh/build_and_install/pip_install_en.html or
build it yourself. Once you've got the "whl" package, put it under the current directory and run:

```bash
docker build -t [your docker image name]:[your docker image tag] .
```

Then push the image to a Docker registry that your Kubernetes cluster can reach.
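
For example (a sketch only; replace the registry host and image name/tag with your own):

```bash
# Tag the image for your registry and push it
docker tag [your docker image name]:[your docker image tag] my-registry.example.com/[your docker image name]:[your docker image tag]
docker push my-registry.example.com/[your docker image name]:[your docker image tag]
```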

We provide a script `kube_gen_job.py` to generate Kubernetes yaml files to submit
distributed benchmark jobs to your cluster. To generate a job yaml, just run:

```bash
python kube_gen_job.py --jobname myjob --pscpu 4 --cpu 8 --gpu 8 --psmemory 20 --memory 40 --pservers 4 --trainers 4 --entry "python fluid_benchmark.py --model mnist --gpus 8 --device GPU --update_method pserver " --disttype pserver
```

The yaml files will then be generated under the directory `myjob`, and you can run:

```bash
kubectl create -f myjob/
```

The job should then start.
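
To verify that the pserver and trainer pods come up, a generic kubectl sketch (substitute an actual pod name from the listing):

```bash
# List the pods created by the job
kubectl get pods
# Follow the log of one trainer pod to watch training progress
kubectl logs -f <a-trainer-pod-name>
```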


## Notes on Running Fluid Distributed Training with NCCL2 and RDMA

Before running NCCL2 distributed jobs, please check whether your node has multiple network
interfaces. If it does, set the environment variable `export NCCL_SOCKET_IFNAME=eth0` (using your actual
network device) so NCCL binds to the right one.
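
For example (a sketch; substitute the interface name reported on your node, e.g. by `ip addr`):

```bash
# Point NCCL at the intended network interface, then launch the NCCL2 job shown above
export NCCL_SOCKET_IFNAME=eth0
PADDLE_PSERVER_PORT=7164 PADDLE_TRAINER_IPS=192.168.0.2,192.168.0.3 PADDLE_CURRENT_IP=127.0.0.1 PADDLE_TRAINER_ID=0 python fluid_benchmark.py --model mnist --device GPU --update_method nccl2
```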

To run high-performance distributed training, your hardware environment must support
RDMA-enabled network communication. Please check out [this](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/howto/cluster/nccl2_rdma_training.md)
note for details.