There is a bug when using ParallelExecutor together with memory optimization in some cases.
Created by: qingqing01
In a face detection model there are two complex losses, as follows:
```python
face_loss = fluid.layers.ssd_loss(
    self.face_mbox_loc,
    self.face_mbox_conf,
    self.face_box,
    self.gt_label,
    self.prior_boxes,
    self.box_vars,
    overlap_threshold=0.35,
    neg_overlap=0.35)
head_loss = fluid.layers.ssd_loss(
    self.head_mbox_loc,
    self.head_mbox_conf,
    self.head_box,
    self.gt_label,
    self.prior_boxes,
    self.box_vars,
    overlap_threshold=0.35,
    neg_overlap=0.35)
face_loss = fluid.layers.reduce_sum(face_loss)
head_loss = fluid.layers.reduce_sum(head_loss)
loss = face_loss + head_loss
```
When `fluid.memory_optimize` is used, even after setting `persistable=True` on the variables to be fetched (face_loss, head_loss, and loss), the fetched loss values are still wrong.
```python
optimizer.minimize(loss)
fluid.memory_optimize(fluid.default_main_program())
```
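For intuition: `fluid.memory_optimize` analyzes variable lifetimes and lets a variable whose last use has passed donate its memory to a later variable, while variables marked `persistable` are supposed to be excluded from reuse. The following is a toy model of such a reuse pass, written from scratch as an illustration only (it is NOT Paddle's actual implementation); it shows why the `persistable` flag should protect fetched variables:

```python
# Toy model of a lifetime-based memory-reuse pass (illustration only,
# NOT Paddle's actual memory_optimize). Each op's output takes a free
# buffer if one is available; a variable's buffer goes back into the
# free pool after its last use, unless the variable is persistable
# (e.g. because it will be fetched after the run).

def plan_reuse(ops, persistable):
    """ops: list of (input_names, output_name); returns {var: buffer}."""
    last_use = {}
    for i, (ins, _) in enumerate(ops):
        for v in ins:
            last_use[v] = i
    free, n_buffers, plan = [], 0, {}
    for i, (ins, out) in enumerate(ops):
        if free:
            plan[out] = free.pop()          # recycle a dead var's buffer
        else:
            plan[out] = "buf%d" % n_buffers  # allocate a fresh buffer
            n_buffers += 1
        for v in ins:
            if last_use[v] == i and v not in persistable:
                free.append(plan[v])         # v is dead: donate its buffer
    return plan

# A tiny straight-line "graph" (hypothetical op names):
ops = [((), "conv"),
       (("conv",), "face_loss"),
       (("face_loss",), "summed"),
       (("summed",), "loss")]

# Without the flag, face_loss's buffer is recycled for a later variable:
print(plan_reuse(ops, persistable=set()))
# Marking face_loss persistable keeps its buffer out of the free pool:
print(plan_reuse(ops, persistable={"face_loss"}))
```

If this contract held end to end, fetching a persistable variable would always be safe; the experiments below show it does not hold under ParallelExecutor.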
To verify the problem, I ran the following experiments:
- Replace the inputs of head_loss above with the same inputs as face_loss, so that the two losses should come out exactly identical.
```python
face_loss = fluid.layers.ssd_loss(
    self.face_mbox_loc,
    self.face_mbox_conf,
    self.face_box,
    self.gt_label,
    self.prior_boxes,
    self.box_vars,
    overlap_threshold=0.35,
    neg_overlap=0.35)
face_loss.persistable = True
head_loss = fluid.layers.ssd_loss(
    self.face_mbox_loc,   # same inputs as face_loss
    self.face_mbox_conf,
    self.face_box,
    self.gt_label,
    self.prior_boxes,
    self.box_vars,
    overlap_threshold=0.35,
    neg_overlap=0.35)
# head_loss = fluid.layers.ssd_loss(
#     self.head_mbox_loc,
#     self.head_mbox_conf,
#     self.head_box,
#     self.gt_label,
#     self.prior_boxes,
#     self.box_vars,
#     overlap_threshold=0.35,
#     neg_overlap=0.35)
head_loss.persistable = True
face_loss = fluid.layers.reduce_sum(face_loss)
face_loss.persistable = True
head_loss = fluid.layers.reduce_sum(head_loss)
head_loss.persistable = True
loss = face_loss + head_loss
loss.persistable = True
```
- When `fluid.memory_optimize` is **not** used, the fetched face_loss and head_loss are **identical**:
```
Pass 0, batch 0, face loss 14.2844762802, head loss 14.2844762802, time 1.67309403419
Pass 0, batch 1, face loss 11.3860425949, head loss 11.3860416412, time 3.79619002342
```
- With **ParallelExecutor + `fluid.memory_optimize`**, also setting `persistable=True` on the fetched variables as above, the fetched face_loss and head_loss are **different**:
```
Pass 0, batch 0, face loss 5.40768432617, head loss 10.0967769623, time 1.64687013626
Pass 0, batch 1, face loss 5.22169494629, head loss 8.87370109558, time 3.9501619339
```
- With **Executor + `fluid.memory_optimize`**, also setting `persistable=True` on the fetched variables as above, the fetched face_loss and head_loss are **identical**:
```
Pass 0, batch 0, face loss 14.7560405731, head loss 14.7560405731, time 0.444567918777
Pass 0, batch 1, face loss 11.1143836975, head loss 11.1143836975, time 1.67649412155
Pass 0, batch 2, face loss 9.84147834778, head loss 9.84147834778, time 1.67893195152
```
So: `fluid.memory_optimize` analyzes the graph and reuses the memory of some variables, while ParallelExecutor relies on an SSA graph to schedule ops. The combination ParallelExecutor + `fluid.memory_optimize` has a bug on complex networks like this one, which have many branches.
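The suspected failure mode can be sketched without Paddle: if, after memory reuse, two branches share one buffer, and the SSA scheduler runs the overwriting op before the fetch copies the earlier value out, the fetched loss is clobbered. A minimal numpy illustration (hypothetical, for intuition only; the variable names and numbers merely echo the logs above):

```python
import numpy as np

# Hypothetical illustration of the suspected race (NOT Paddle code).
# After memory reuse, face_loss and a variable from the other branch
# share one buffer. The "fetch" keeps a view into the buffer instead of
# a copy, so a later write from the second branch corrupts it.
shared_buf = np.zeros(1, dtype=np.float32)  # single buffer after reuse

shared_buf[0] = 14.2844762802    # face branch writes face_loss
face_loss_fetch = shared_buf     # fetch takes a view, not a copy

shared_buf[0] = 5.40768432617    # head branch reuses the buffer too early

print(float(face_loss_fetch[0]))  # 5.4076..., not the expected 14.2844...
```

With a single Executor the ops run in program order and the fetch happens before the buffer is recycled, which would explain why only the ParallelExecutor + memory_optimize combination produces wrong losses.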