Output of while_grad_op shouldn't drop the LoD
Created by: pkuyym
For variables in `step_scopes->rbegin()`, `while_grad_op` creates a LoDTensor with the same shape and dtype as `inside_tensor`. However, the LoD is not inherited from `inside_tensor`, so the output of `while_grad_op` is a LoDTensor without LoD. In many situations the LoD is necessary, for example:
```python
x_tensor = fluid.layers.reorder_lod_tensor_by_rank(x, rank_table)
rnn = fluid.layers.DynamicRNN()
with rnn.block():
    # step_input consumes x_tensor sequence by sequence, relying on its LoD.
    x_step = rnn.step_input(x_tensor)
```
The input gradient of `reorder_lod_tensor_by_rank_grad` requires `x` to be a LoDTensor with LoD; however, `while_grad_op` simply drops the LoD.
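To make the problem concrete, here is a minimal, self-contained sketch (not the actual Paddle source; `SimpleLoDTensor` and `MakeOutsideGrad` are hypothetical names) of how the outside gradient is currently assembled: shape and dtype are copied from `inside_tensor`, but the LoD is not.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Stripped-down stand-in for framework::LoDTensor: shape + dtype + LoD offsets.
struct SimpleLoDTensor {
  std::vector<int64_t> dims;             // shape
  int dtype = 0;                         // placeholder for the data type
  std::vector<std::vector<size_t>> lod;  // sequence offsets (level of detail)
};

// Current behaviour (simplified): the outside gradient gets the same shape and
// dtype as inside_tensor and is zero-filled, but its LoD is left empty, so a
// downstream op such as reorder_lod_tensor_by_rank_grad loses the sequence info.
SimpleLoDTensor MakeOutsideGrad(const SimpleLoDTensor& inside_tensor) {
  SimpleLoDTensor pg_tensor;
  pg_tensor.dims = inside_tensor.dims;    // same shape
  pg_tensor.dtype = inside_tensor.dtype;  // same dtype
  // pg_tensor.lod is never set -> the LoD is dropped here.
  return pg_tensor;
}
```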
A possible fix is as follows:

```cpp
// Zero-fill the outside gradient as before.
zero_op->Run(scope, dev_place);
auto* pg_tensor = scope.FindVar(pg_names[param_id])
                      ->GetMutable<framework::LoDTensor>();
// Inherit the LoD from inside_tensor explicitly.
pg_tensor->set_lod(inside_tensor.lod());
```

This inherits the LoD from `inside_tensor` explicitly.
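In terms of the simplified model sketched above (again hypothetical, reusing the `SimpleLoDTensor` type from the earlier sketch rather than the actual Paddle source), the fix corresponds to one extra assignment when the outside gradient is built:

```cpp
// Same simplified model as above, with the proposed fix applied.
SimpleLoDTensor MakeOutsideGradFixed(const SimpleLoDTensor& inside_tensor) {
  SimpleLoDTensor pg_tensor;
  pg_tensor.dims = inside_tensor.dims;    // same shape
  pg_tensor.dtype = inside_tensor.dtype;  // same dtype
  pg_tensor.lod = inside_tensor.lod;      // inherit the LoD explicitly
  return pg_tensor;
}
```

With this change, gradient ops that run after `while_grad_op`, such as `reorder_lod_tensor_by_rank_grad`, receive the sequence information they need.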