Unable to dynamically adjust the learning rate during training
Created by: xuzhm
I tried to adjust the learning rate dynamically during training, but it failed. My experiment was as follows:
lr_layer = fluid.layers.piecewise_decay(boundaries=bd, values=lr)
optimizer = fluid.optimizer.Momentum(
    learning_rate=lr_layer,
    momentum=0.9,
    regularization=fluid.regularizer.L2Decay(1e-4))
# Get the learning-rate variable via find_var(lr_name).get_tensor() and reset it
lr_tensor = fluid.global_scope().find_var(lr_name).get_tensor()
print 'before', np.array(lr_tensor)
cur_lr = 3000
cur_lr_np = np.array([cur_lr])
lr_tensor.set(cur_lr_np, place)
print 'set lr', lr_name, cur_lr_np  # this shows the set succeeded
print 'after', np.array(lr_tensor)
# But what optimizer._global_learning_rate() returns is still the old value
print optimizer._global_learning_rate()
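For context, a likely explanation is that `piecewise_decay` inserts ops that recompute the learning rate from the global step counter on every executor run, so a value written directly into the lr tensor between runs gets overwritten on the next step. (Note also that `print optimizer._global_learning_rate()` likely prints the Variable's description rather than its current tensor value.) Below is a minimal pure-Python sketch of that recompute-each-step behavior; the `Trainer` class and step logic are illustrative stand-ins, not Paddle API:

```python
def piecewise_decay(step, boundaries, values):
    """Return values[i] for the first boundary the step has not yet reached."""
    for i, b in enumerate(boundaries):
        if step < b:
            return values[i]
    return values[-1]


class Trainer:
    """Toy stand-in for the executor: re-derives lr from the counter each step."""

    def __init__(self, boundaries, values):
        self.boundaries = boundaries
        self.values = values
        self.step = 0
        self.lr = values[0]

    def run_step(self):
        # The decay ops recompute lr from the step counter every iteration,
        # ignoring any value written into the lr tensor between runs.
        self.lr = piecewise_decay(self.step, self.boundaries, self.values)
        self.step += 1
        return self.lr


trainer = Trainer(boundaries=[100, 200], values=[0.1, 0.01, 0.001])
trainer.run_step()
trainer.lr = 3000          # manual reset, analogous to lr_tensor.set(...)
print(trainer.lr)          # the set itself "succeeds" (prints 3000)
print(trainer.run_step())  # but lr is recomputed on the next run (prints 0.1)
```

If this model matches fluid's behavior, the manual `lr_tensor.set(...)` is simply clobbered by the decay ops on the next `run`, which would explain the observation above.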