paddle 1.6.2 Param and Grad input of AdamOp should have same dimension
Created by: waaaaaaater
I am currently using paddle 1.6.2 with a GAN model, the AdamOptimizer, and dygraph (imperative) mode. The first iteration runs fine, but on the second iteration, during the backward/optimization step of `optim_discriminator.minimize(loss_D, parameter_list=netD.parameters())`, the following error is raised: Error: Param and Grad input of AdamOp should have same dimension [Hint: Expected param_dims == ctx->GetInputDim("Grad"), but received param_dims:64 != ctx->GetInputDim("Grad"):0.] at (/paddle/paddle/fluid/operators/optimizers/adam_op.cc:88)
Is this caused by the two tensors created in my code, `valid = fl.layers.ones(shape=(lr_image.shape[0], *netD.output_shape), dtype='float32')` and `fake = fl.layers.zeros(shape=(lr_image.shape[0], *netD.output_shape), dtype='float32')`, whose stop_gradient defaults to True?
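For context, here is a minimal sketch of the discriminator update step being described. The names `netD`, `optim_discriminator`, `lr_image`, `valid`, `fake`, and the `minimize` call are taken from this report; `netG`, `hr_image`, and the MSE-style adversarial loss are assumptions filled in purely for illustration, not the actual training code.

```python
import paddle.fluid as fl

# Hypothetical dygraph discriminator training step (paddle 1.6.2),
# reconstructing the setup described above for discussion purposes.
def train_discriminator_step(netD, netG, optim_discriminator, lr_image, hr_image):
    # Target tensors; created via fl.layers.ones/zeros, so their
    # stop_gradient defaults to True (the behaviour asked about above).
    valid = fl.layers.ones(
        shape=(lr_image.shape[0], *netD.output_shape), dtype='float32')
    fake = fl.layers.zeros(
        shape=(lr_image.shape[0], *netD.output_shape), dtype='float32')

    # Assumed MSE-style GAN loss on real and generated samples.
    pred_real = netD(hr_image)
    pred_fake = netD(netG(lr_image))
    loss_real = fl.layers.reduce_mean(
        fl.layers.square_error_cost(pred_real, valid))
    loss_fake = fl.layers.reduce_mean(
        fl.layers.square_error_cost(pred_fake, fake))
    loss_D = 0.5 * (loss_real + loss_fake)

    # The step that fails on the second iteration in this report.
    loss_D.backward()
    optim_discriminator.minimize(loss_D, parameter_list=netD.parameters())
    netD.clear_gradients()
    return loss_D
```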