Unverified commit f57d8a22, authored by R ruri, committed by GitHub

Revert "remove memory optimize usage in scripts, test=develop (#3242)" (#3247)

This reverts commit 5e41760f.
Parent f6b76c1c
......@@ -68,6 +68,7 @@ python train.py \
--class_dim=1000 \
--image_shape=3,224,224 \
--model_save_dir=output/ \
--with_mem_opt=False \
--with_inplace=True \
--lr_strategy=piecewise_decay \
--lr=0.1
......@@ -82,6 +83,7 @@ python train.py \
* **class_dim**: the class number of the classification task. Default: 1000.
* **image_shape**: input size of the network. Default: "3,224,224".
* **model_save_dir**: the directory to save trained model. Default: "output".
* **with_mem_opt**: whether to use memory optimization or not. Default: False.
* **with_inplace**: whether to use inplace memory optimization or not. Default: True.
* **lr_strategy**: learning rate changing strategy. Default: "piecewise_decay".
* **lr**: initial learning rate. Default: 0.1.
......@@ -152,6 +154,8 @@ Note: Add and adjust other parameters according to specific models and tasks.
You may add `--fp16=1` to start training with mixed precision, in which case the training process uses float16 while the output model (the "master" parameters) is saved as float32. You may also need to pass `--scale_loss` to overcome accuracy issues; usually `--scale_loss=8.0` will do.
Note that `--fp16` currently cannot be used together with `--with_mem_opt`, so pass `--with_mem_opt=0` to disable the memory optimization pass.
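As an illustration, a mixed precision variant of the training command above might look like the sketch below. It only combines flags documented in this README; every other option is left at its default, so treat it as a starting point rather than the exact command used for the released models.

python train.py \
--class_dim=1000 \
--image_shape=3,224,224 \
--model_save_dir=output/ \
--with_mem_opt=0 \
--with_inplace=True \
--lr_strategy=piecewise_decay \
--lr=0.1 \
--fp16=1 \
--scale_loss=8.0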
### CE
CE is only for internal testing; there is no need to set it.
......
......@@ -64,6 +64,7 @@ python train.py \
--class_dim=1000 \
--image_shape=3,224,224 \
--model_save_dir=output/ \
--with_mem_opt=False \
--with_inplace=True \
--lr_strategy=piecewise_decay \
--lr=0.1
......@@ -78,6 +79,7 @@ python train.py \
* **class_dim**: the class number of the classification task. Default: 1000.
* **image_shape**: input size of the network. Default: "3,224,224".
* **model_save_dir**: the directory to save trained models. Default: "output/".
* **with_mem_opt**: whether to use memory optimization or not. Default: False.
* **with_inplace**: whether to use inplace memory optimization or not. Default: True.
* **lr_strategy**: learning rate changing strategy. Default: "piecewise_decay".
* **lr**: initial learning rate. Default: 0.1.
......@@ -140,6 +142,8 @@ python infer.py \
You can enable mixed precision training by passing `--fp16=True`; the training process then uses float16 data while the model parameters (the "master" parameters) are output as float32. You may also need to pass `--scale_loss` to work around the accuracy issues of fp16 training; usually `--scale_loss=8.0` is enough.
Note that mixed precision training currently cannot be used together with memory optimization, so you need to pass `--with_mem_opt=False` to disable memory optimization.
### CE test
Note: the CE-related code is only for internal testing; enable_ce defaults to False.
......
......@@ -46,6 +46,7 @@ def parse_args():
add_arg('class_dim', int, 1000, "Class number.")
add_arg('image_shape', str, "3,224,224", "input image size")
add_arg('model_save_dir', str, "output", "model save directory")
add_arg('with_mem_opt', bool, False, "Whether to use memory optimization or not.")
add_arg('pretrained_model', str, None, "Whether to use pretrained model.")
add_arg('checkpoint', str, None, "Whether to resume checkpoint.")
add_arg('lr', float, 0.1, "set learning rate.")
......
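For context on the `checkpoint` and `pretrained_model` arguments defined just above: resuming an interrupted run could look roughly like the sketch below. This is only an assumed example; the checkpoint path is a hypothetical placeholder, and only flags that appear elsewhere in this diff are used (to start from released weights instead, pass `--pretrained_model` rather than `--checkpoint`).

# Hypothetical resume example; "output/latest" is a placeholder path, not one
# produced by this repository as-is.
python train.py \
--class_dim=1000 \
--image_shape=3,224,224 \
--model_save_dir=output/ \
--with_mem_opt=False \
--lr=0.1 \
--checkpoint=output/latest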
......@@ -7,6 +7,7 @@ python train.py \
--class_dim=1000 \
--image_shape=3,224,224 \
--model_save_dir=output/ \
--with_mem_opt=True \
--lr_strategy=cosine_decay \
--lr=0.1 \
--num_epochs=200 \
......@@ -21,6 +22,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.01 \
......@@ -37,6 +39,7 @@ python train.py \
# --model_save_dir=output/ \
# --lr=0.02 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --l2_decay=1e-4
#SqueezeNet1_1
......@@ -50,6 +53,7 @@ python train.py \
# --model_save_dir=output/ \
# --lr=0.02 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --l2_decay=1e-4
#VGG11:
......@@ -63,6 +67,7 @@ python train.py \
# --model_save_dir=output/ \
# --lr=0.1 \
# --num_epochs=90 \
# --with_mem_opt=True \
# --l2_decay=2e-4
#VGG13:
......@@ -76,6 +81,7 @@ python train.py \
# --lr=0.01 \
# --num_epochs=90 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --l2_decay=3e-4
#VGG16:
......@@ -89,6 +95,7 @@ python train.py \
# --model_save_dir=output/ \
# --lr=0.01 \
# --num_epochs=90 \
# --with_mem_opt=True \
# --l2_decay=3e-4
#VGG19:
......@@ -101,6 +108,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.01 \
# --num_epochs=90 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=3e-4
......@@ -112,6 +120,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.1 \
......@@ -125,6 +134,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_decay \
# --num_epochs=240 \
# --lr=0.1 \
......@@ -140,6 +150,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_decay \
# --num_epochs=240 \
# --lr=0.1 \
......@@ -155,6 +166,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_decay \
# --num_epochs=240 \
# --lr=0.1 \
......@@ -168,6 +180,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_decay \
# --num_epochs=240 \
# --lr=0.1 \
......@@ -181,6 +194,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_decay \
# --num_epochs=240 \
# --lr=0.1 \
......@@ -194,6 +208,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_warmup_decay \
# --num_epochs=240 \
# --lr=0.5 \
......@@ -210,6 +225,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_warmup_decay \
# --num_epochs=240 \
# --lr=0.5 \
......@@ -226,6 +242,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_warmup_decay \
# --num_epochs=240 \
# --lr=0.5 \
......@@ -242,6 +259,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_warmup_decay \
# --num_epochs=240 \
# --lr=0.5 \
......@@ -256,6 +274,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_warmup_decay \
# --num_epochs=240 \
# --lr=0.25 \
......@@ -271,6 +290,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_warmup_decay \
# --num_epochs=240 \
# --lr=0.25 \
......@@ -284,6 +304,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_warmup_decay \
# --lr=0.5 \
# --num_epochs=240 \
......@@ -297,6 +318,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=120 \
......@@ -310,6 +332,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=120 \
......@@ -323,6 +346,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.1 \
......@@ -338,6 +362,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
......@@ -351,6 +376,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=7e-5 \
# --use_mixup=True \
......@@ -365,6 +391,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.1 \
......@@ -380,6 +407,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
# --use_mixup=True \
......@@ -395,6 +423,7 @@ python train.py \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --lr_strategy=piecewise_decay \
# --with_mem_opt=True \
# --lr=0.1 \
# --num_epochs=120 \
# --l2_decay=1e-4
......@@ -409,6 +438,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
# --use_mixup=True \
......@@ -425,6 +455,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
# --use_mixup=True \
......@@ -441,6 +472,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4
......@@ -454,6 +486,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
# --use_mixup=True \
......@@ -470,6 +503,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4
......@@ -483,6 +517,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
# --use_mixup=True \
......@@ -499,6 +534,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4
......@@ -512,6 +548,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=15e-5
......@@ -525,6 +562,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
# --use_mixup=True \
......@@ -541,6 +579,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4
......@@ -554,6 +593,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=18e-5
......@@ -567,6 +607,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4
......@@ -580,6 +621,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4
......@@ -593,6 +635,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4
......@@ -606,6 +649,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4
......@@ -619,6 +663,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4
......@@ -633,6 +678,7 @@ python train.py \
# --model_save_dir=output/ \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --l2_decay=1.2e-4
#SE_ResNeXt101_32x4d:
......@@ -646,6 +692,7 @@ python train.py \
# --model_save_dir=output/ \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --l2_decay=1.5e-5
# SE_154
......@@ -658,6 +705,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
# --use_mixup=True \
......@@ -672,6 +720,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_decay \
# --lr=0.01 \
# --num_epochs=200 \
......@@ -687,6 +736,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.045 \
# --num_epochs=120 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
# --resize_short_size=320
......@@ -701,6 +751,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.045 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
# --use_mixup=True \
......@@ -718,6 +769,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --model_save_dir=output/ \
# --l2_decay=1e-4 \
# --use_mixup=True \
......@@ -735,6 +787,7 @@ python train.py \
# --image_shape=3,224,224 \
# --lr=0.001 \
# --num_epochs=120 \
# --with_mem_opt=False \
# --model_save_dir=output/ \
# --lr_strategy=adam \
# --use_gpu=False
......
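The commands in this script are templates: the first block runs as written, while each `#`-prefixed block is meant to be uncommented for the corresponding model. A hedged usage sketch follows; the script name `run.sh` and the GPU selection via `CUDA_VISIBLE_DEVICES` are assumptions for illustration, not something this diff specifies.

# Assumed workflow: uncomment the block for the model you want, comment out the
# rest, then launch (script name and GPU ids below are placeholders).
export CUDA_VISIBLE_DEVICES=0,1,2,3
bash run.sh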
......@@ -443,6 +443,8 @@ def train(args):
    use_ngraph = os.getenv('FLAGS_use_ngraph')
    if not use_ngraph:
        build_strategy = fluid.BuildStrategy()
        # memopt may affect GC results
        #build_strategy.memory_optimize = args.with_mem_opt
        build_strategy.enable_inplace = args.with_inplace
        #build_strategy.fuse_all_reduce_ops=1
......
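The snippet above is where the `--with_inplace` flag takes effect (`build_strategy.enable_inplace`), and where the `--with_mem_opt` pass is currently left commented out in the non-ngraph branch. As a hedged illustration of running with both optimizations disabled, for example while inspecting memory behaviour, a command sketch could be:

python train.py \
--class_dim=1000 \
--image_shape=3,224,224 \
--model_save_dir=output/ \
--with_mem_opt=False \
--with_inplace=False \
--lr_strategy=piecewise_decay \
--lr=0.1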
......@@ -81,6 +81,7 @@ beta represents the weight of the pronunciation information. This indicates that even if most of the weight is placed on
--sort_type pool \
--pool_size 200000 \
--use_py_reader False \
--use_mem_opt False \
--enable_ce False \
--fetch_steps 1 \
pass_num 100 \
......