Set `--use_fp16=True` to start Automatic Mixed Precision (AMP) training. During training, the float16 data type is used to speed up computation. You may need the `--scale_loss` parameter to avoid a drop in accuracy, for example `--scale_loss=128.0`.
```bash
python train.py \
    --model=ResNet50 \
    --use_fp16=True \
    --scale_loss=0.8
```
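Why the scaling helps: float16 flushes values below roughly 6e-8 to zero, so very small gradients vanish; multiplying the loss by `scale_loss` before backpropagation keeps the gradients in a representable range, and they are divided by the same factor (in float32) before the weight update. A minimal, framework-free sketch of the effect (illustrative numbers only, not taken from this repo):

```python
import numpy as np

# A tiny fp16 gradient product underflows to zero...
g = np.float16(1e-5) * np.float16(1e-3)
print(g)  # 0.0

# ...but survives when one factor is pre-scaled by 128 and the
# result is unscaled in float32 afterwards.
scale = 128.0
g_scaled = np.float16(1e-5 * scale) * np.float16(1e-3)
print(np.float32(g_scaled) / scale)  # ~1e-8, recovered
```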
After configuring the data path (modify the value of `DATA_DIR` in [scripts/train/ResNet50_fp16.sh](scripts/train/ResNet50_fp16.sh)), you can start ResNet50 AMP training by executing `bash run.sh train ResNet50_fp16`.
Refer to [PaddlePaddle/Fleet](https://github.com/PaddlePaddle/Fleet/tree/develop/benchmark/collective/resnet) for multi-machine, multi-card training.
Running on Tesla V100 GPUs with a single machine with 8 cards, two machines with 16 cards, and four machines with 32 cards, the performance of ResNet50 AMP training (with DALI enabled) is shown below.
```python
def create_data_loader(is_train, args):
    """
    When the mixup process is used in training, it returns 5 results:
    data_loader, image, y_a (label), y_b (label) and lambda; otherwise it
    returns 3 results: data_loader, image and label.

    Args:
        is_train: mode
        args: arguments

    Returns:
        data_loader and the input data of net
    """
    image_shape = args.image_shape
    feed_image = fluid.data(
        ...
```
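As a usage sketch (the `use_mixup` flag and the call site here are assumptions for illustration, not code from this file), the two return shapes would be unpacked like this:

```python
# Hypothetical call site: unpack 5 results under mixup, 3 otherwise.
if args.use_mixup:
    data_loader, image, y_a, y_b, lam = create_data_loader(True, args)
else:
    data_loader, image, label = create_data_loader(True, args)
```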
```python
def print_info(info_mode, ...):  # signature truncated in the excerpt
    """...
        time_info: time information
        info_mode: mode
    """
    # XXX: Use a specific name to choose the pattern, not the length of metrics.
```
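A minimal sketch of what the XXX note suggests, with hypothetical mode names and patterns (none of these identifiers come from the repo): dispatch on an explicit `info_mode` string instead of inferring the format from `len(metrics)`.

```python
# Hypothetical dispatch table keyed by an explicit mode name.
PATTERNS = {
    "batch": "[batch] loss: {:.4f}, acc1: {:.4f}",
    "epoch": "[epoch] avg loss: {:.4f}, avg acc1: {:.4f}",
}

def print_info(info_mode, metrics, time_info=""):
    # Selecting by name fails loudly on an unknown mode, instead of
    # silently picking the wrong pattern when len(metrics) collides.
    pattern = PATTERNS[info_mode]
    print(pattern.format(*metrics), time_info)

print_info("batch", [0.3421, 0.8812], "batch_time: 0.12s")
```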