[Paper reproduction: StyleGAN] The Op exp_grad doesn't have any grad op
Created by: cgq0816
Environment: PaddlePaddle 2.0b
Code:

```python
def R1Penalty(real_img, f):  # R1 gradient penalty on real images
    reals = real_img
    reals.stop_gradient = False
    real_logit = f(reals)  # shape [2, 1]
    # FP16-style loss scaling: multiply by exp(x * ln2)
    apply_loss_scaling = lambda x: x * paddle.exp(x * paddle.to_tensor([np.float32(np.log(2.0))]))
    undo_loss_scaling = lambda x: x * paddle.exp(-x * paddle.to_tensor([np.float32(np.log(2.0))]))
    real_logit = apply_loss_scaling(paddle.sum(real_logit))  # shape [1]
    real_grads = paddle.grad(real_logit, reals,
                             grad_outputs=paddle.ones(real_logit.shape, dtype='float32'),
                             create_graph=True)[0]
    real_grads = paddle.reshape(real_grads, shape=[reals.shape[0], -1])
    real_grads = undo_loss_scaling(real_grads)
    r1_penalty = paddle.sum(paddle.square(real_grads))
    return r1_penalty
```
Error Message Summary:

```
NotFoundError: The Op exp_grad doesn't have any grad op. If you don't intend calculating higher order derivatives, please set create_graph to False.
  [Hint: double_grad_node should not be null.] (at /paddle/paddle/fluid/imperative/partial_grad_engine.cc:893)
```
How can I solve this?
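For context, the quantity being computed is the R1 penalty: the sum of squared gradients of the discriminator output with respect to the real images. The exp-based apply/undo loss scaling is a mixed-precision trick; if it is removed (e.g. when training in FP32), the graph no longer contains `exp` on the double-grad path. Below is a minimal NumPy sketch of just the penalty math, using a hypothetical toy discriminator whose input gradient is known analytically; all names here are illustrative and not part of the original code:

```python
import numpy as np

def r1_penalty_analytic(reals, a):
    # Toy discriminator D(x) = a * sum(x**2); its gradient w.r.t. x is 2*a*x.
    grads = 2.0 * a * reals                      # analytic dD/dx, stands in for paddle.grad
    grads = grads.reshape(reals.shape[0], -1)    # flatten per sample, as in the reshape above
    return float(np.sum(np.square(grads)))       # R1 = sum of squared gradients

reals = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
penalty = r1_penalty_analytic(reals, a=0.5)
# With a = 0.5 the gradient equals reals itself, so penalty = 1 + 4 + 9 + 16 = 30.0
```

This is only a sanity check of the formula, not a Paddle implementation; in the real code the analytic gradient is replaced by the autograd call with `create_graph=True`.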