# Train with DALI

## Preface
[The NVIDIA Data Loading Library](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html) (DALI) is a library for data loading and pre-processing that accelerates deep learning applications. It can be used to build a DataLoader for Paddle.

Since deep learning relies on a large amount of data in the training stage, the data need to be loaded and preprocessed. These operations are usually executed on the CPU, which limits further improvement of the training speed; especially when the batch size is large, data loading can become the speed bottleneck. DALI can use the GPU to accelerate these operations, thereby further improving the training speed.
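Below is a minimal sketch of what such a GPU data pipeline looks like (this is not the pipeline PaddleClas builds internally; the dataset path, batch size, and augmentation parameters are placeholder assumptions):

```python
# Minimal DALI pipeline sketch for Paddle; paths and hyperparameters are placeholders.
from nvidia.dali import pipeline_def
import nvidia.dali.fn as fn
import nvidia.dali.types as types
from nvidia.dali.plugin.paddle import DALIGenericIterator

@pipeline_def(batch_size=32, num_threads=4, device_id=0)
def image_pipeline(data_dir):
    # Read JPEG files and labels from a directory tree (one subdirectory per class)
    jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True, name="Reader")
    # "mixed" decoding starts on the CPU and finishes on the GPU
    images = fn.decoders.image(jpegs, device="mixed")
    images = fn.resize(images, resize_x=224, resize_y=224)
    # Normalize with ImageNet statistics and emit CHW float tensors
    images = fn.crop_mirror_normalize(
        images,
        dtype=types.FLOAT,
        output_layout="CHW",
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
    )
    return images, labels

pipe = image_pipeline("./data/train")  # placeholder dataset path
pipe.build()
# Each iteration yields batches of GPU tensors that can be fed to a Paddle program
loader = DALIGenericIterator(pipe, ["data", "label"], reader_name="Reader")
```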

## Installing DALI
DALI supports Linux x64 only, and requires CUDA 10.2 or later.

* For CUDA 10:

    pip install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali-cuda100

* For CUDA 11.0:

    pip install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali-cuda110

For more information about installing DALI, please refer to [DALI](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/installation.html).
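To quickly verify the installation, you can import DALI and print its version (a minimal check, assuming the wheel matching your CUDA version was installed):

```python
# Verify that the DALI installation is importable
import nvidia.dali as dali
print(dali.__version__)
```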

## Using DALI
PaddleClas supports training with DALI in static graph mode. Since DALI only supports GPU training, `CUDA_VISIBLE_DEVICES` needs to be set, and because DALI occupies GPU memory of its own, some GPU memory must be reserved for it. To train with DALI, set the field `use_dali=True` in the training config, or start the training with the following command:

```shell
# set the GPUs that can be seen
export CUDA_VISIBLE_DEVICES="0"

# set the GPU memory used for neural network training, generally 0.8 or 0.7, and the remaining GPU memory is reserved for DALI
export FLAGS_fraction_of_gpu_memory_to_use=0.80

python tools/static/train.py -c configs/ResNet/ResNet50.yaml -o use_dali=True
```

You can also train with multiple GPUs:

```shell
# set the GPUs that can be seen
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"

# set the GPU memory used for neural network training, generally 0.8 or 0.7, and the remaining GPU memory is reserved for DALI
export FLAGS_fraction_of_gpu_memory_to_use=0.80

python -m paddle.distributed.launch \
    --gpus="0,1,2,3,4,5,6,7" \
    tools/static/train.py \
        -c ./configs/ResNet/ResNet50.yaml \
        -o use_dali=True
```

## Train with FP16

On the basis of the above, using FP16 half-precision can further improve the training speed. Just add the field `-o AMP.use_pure_fp16=True` to the training command:

```shell
python tools/static/train.py -c configs/ResNet/ResNet50.yaml -o use_dali=True -o AMP.use_pure_fp16=True
```