diff --git a/fluid/PaddleCV/image_classification/README.md b/fluid/PaddleCV/image_classification/README.md
index 1de071c0de37d4f265a8dabab2e97b5cda3ecc4c..c1f62b38f9d75a22737ce872068b337c539196ba 100644
--- a/fluid/PaddleCV/image_classification/README.md
+++ b/fluid/PaddleCV/image_classification/README.md
@@ -6,6 +6,7 @@ Image classification, which is an important field of computer vision, is to clas
 - [Installation](#installation)
 - [Data preparation](#data-preparation)
 - [Training a model with flexible parameters](#training-a-model)
+- [Using Mixed-Precision Training](#using-mixed-precision-training)
 - [Finetuning](#finetuning)
 - [Evaluation](#evaluation)
 - [Inference](#inference)
@@ -112,6 +113,13 @@ The error rate curves of AlexNet, ResNet50 and SE-ResNeXt-50 are shown in the fi
 Training and validation Curves

+
+## Using Mixed-Precision Training
+
+You may add `--fp16 1` to enable mixed-precision training, in which the training process uses float16 while the output model ("master" parameters) is saved as float32. You may also need to pass `--scale_loss` to work around fp16 accuracy issues; usually `--scale_loss 8.0` is sufficient.
+
+Note that `--fp16` currently cannot be used together with `--with_mem_opt`, so pass `--with_mem_opt 0` to disable the memory optimization pass.
+
 ## Finetuning

 Finetuning is to finetune model weights in a specific task by loading pretrained weights. After initializing ```path_to_pretrain_model```, one can finetune a model as:
diff --git a/fluid/PaddleCV/image_classification/README_cn.md b/fluid/PaddleCV/image_classification/README_cn.md
index 3fc3b934e9b3bf1088409b270e2756101c539f18..373443a207914ee54b158b05514b3789c8fa79b5 100644
--- a/fluid/PaddleCV/image_classification/README_cn.md
+++ b/fluid/PaddleCV/image_classification/README_cn.md
@@ -109,6 +109,11 @@ End pass 9, train_loss 3.3745200634, train_acc1 0.303871691227, train_acc5 0.545
 训练集合与验证集合上的错误率曲线

+## Mixed-Precision Training
+
+You can pass `--fp16 1` to enable mixed-precision training, in which the training process uses float16 data while the model parameters ("master" parameters) are saved as float32. You may also need to pass `--scale_loss` to work around fp16 accuracy issues; usually `--scale_loss 8.0` is sufficient.
+
+Note that mixed-precision training currently cannot be used together with memory optimization, so pass `--with_mem_opt 0` to disable the memory optimization pass.
 ## 参数微调
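
As background on why `--scale_loss` is needed: fp16 has a much smaller dynamic range than fp32, so tiny gradient values can underflow to zero during the backward pass. Scaling the loss (and hence every gradient) up before the fp16 cast, then dividing the scale back out in fp32, preserves those values. The following standalone Python sketch illustrates the effect of the cast itself; it is not PaddlePaddle code, just a round-trip through IEEE-754 half precision:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE-754 half precision,
    mimicking what fp16 storage does to a gradient value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

tiny_grad = 1e-8  # below fp16's smallest subnormal (~6e-8)

# Without loss scaling, the value underflows to zero in fp16:
print(to_fp16(tiny_grad))              # 0.0

# With a scale of 8.0 (as in --scale_loss 8.0), the scaled value
# survives the fp16 cast; dividing the scale back out in fp32
# recovers a nonzero gradient:
print(to_fp16(tiny_grad * 8.0) / 8.0)  # small but nonzero
```

In practice the flags above are combined on the training command line, e.g. something like `python train.py --fp16 1 --scale_loss 8.0 --with_mem_opt 0` (assuming the repository's `train.py` entry point and its usual model-selection arguments).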