PaddlePaddle / Paddle · Issue #25882 · Closed
Opened Aug 02, 2020 by saxon_zh (Guest)

[Paper Reproduction] 'Inputs and outputs of elementwise_add_grad do not exist' error when implementing a GAN gradient penalty with dygraph

Created by: fengyuentau

Version and environment info:
1) PaddlePaddle version: Paddle 1.8 on AI-Studio
2) CPU: AI-Studio
3) GPU: AI-Studio
4) System environment: AI-Studio

Model info:

  • Model name: GAN homework from the paper-reproduction camp
  • Dataset: MNIST
  • Algorithm: GAN

Steps to reproduce:

  • Implemented G and D and made sure they both run correctly.
  • In the training code, tried to implement the gradient penalty with the fluid.dygraph API (see the formula and the workaround sketch after the error trace below). The code is:
    # Gradient penalty
    alpha = fluid.dygraph.to_variable(
        np.random.rand(real_image.shape[0], 1, 1, 1).astype('float32'))
    x_hat = alpha * real_image + (1 - alpha) * fake_image
    gp_fake = d(x_hat)
    weight = fluid.layers.ones([x_hat.shape[0], 1], dtype='float32')
    dydx = fluid.dygraph.grad(outputs=gp_fake,
                              inputs=x_hat,
                              grad_outputs=weight,
                              retain_graph=True,
                              create_graph=True,
                              only_inputs=True)[0]
    dydx = fluid.layers.reshape(dydx, [-1, dydx.shape[1] * dydx.shape[2] * dydx.shape[3]])
    dydx_l2norm = fluid.layers.sqrt(fluid.layers.reduce_sum(fluid.layers.square(dydx), dim=1) + 1e-16)
    gp = fluid.layers.reduce_mean(fluid.layers.square(dydx_l2norm - 1.0))
    # backward
    gp.backward()
    gp_d_optimizer.minimize(gp)
    d.clear_gradients()
  • The reported error is below:
    ---------------------------------------------------------------------------
    EnforceNotMet                             Traceback (most recent call last)
    <ipython-input-7-8e0ae97c97dd> in <module>
    125         fluid.save_dygraph(fake_d_optimizer.state_dict(), model_path+'d_o_f')
    126 
    --> 127 train(mnist_generator, epoch_num=train_param['epoch'], batch_size=BATCH_SIZE, use_gpu=True) # 10
    <ipython-input-7-8e0ae97c97dd> in train(mnist_generator, epoch_num, batch_size, use_gpu, load_model)
     89                 gp = fluid.layers.reduce_mean(fluid.layers.square(dydx_l2norm - 1.0))
     90                 # backward
    ---> 91                 gp.backward()
     92                 gp_d_optimizer.minimize(gp)
     93                 d.clear_gradients()
    </opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/decorator.py:decorator-gen-195> in backward(self, backward_strategy)
    /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py in __impl__(func, *args, **kwargs)
     23     def __impl__(func, *args, **kwargs):
     24         wrapped_func = decorator_func(func)
    ---> 25         return wrapped_func(*args, **kwargs)
     26 
     27     return __impl__
    /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py in __impl__(*args, **kwargs)
    214         assert in_dygraph_mode(
    215         ), "We Only support %s in imperative mode, please use fluid.dygraph.guard() as context to run it in imperative Mode" % func.__name__
    --> 216         return func(*args, **kwargs)
    217 
    218     return __impl__
    /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/varbase_patch_methods.py in backward(self, backward_strategy)
    114                 backward_strategy.sort_sum_gradient = False
    115 
    --> 116             self._run_backward(backward_strategy, framework._dygraph_tracer())
    117         else:
    118             raise ValueError(
    EnforceNotMet: 
    
    --------------------------------------------
    C++ Call Stacks (More useful to developers):
    --------------------------------------------
    0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
    1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
    2   paddle::imperative::BasicEngine::PrepareDeps()
    3   paddle::imperative::BasicEngine::Execute()
    
    ----------------------
    Error Message Summary:
    ----------------------
    NotFoundError: Inputs and outputs of elementwise_add_grad do not exist. This may be because:
    1. You use some output variables of the previous batch as the inputs of the current batch. Please try to call "stop_gradient = True" or "detach()" for these variables.
    2. You calculate backward twice for the same subgraph without setting retain_graph=True. Please set retain_graph=True in the first backward call.
    
    
      [Hint: Expected ins_.empty() && outs_.empty() != true, but received ins_.empty() && outs_.empty():1 == true:1.] at (/paddle/paddle/fluid/imperative/op_base.h:145)
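
For context, the snippet above computes the standard WGAN-GP gradient penalty, the same term the StarGAN reference code uses; create_graph=True is needed precisely because the penalty itself is differentiated again during gp.backward(). In LaTeX:

    \hat{x} = \alpha \, x_{\mathrm{real}} + (1 - \alpha) \, x_{\mathrm{fake}}, \qquad \alpha \sim U(0, 1)

    \mathrm{gp} = \mathbb{E}_{\hat{x}}\!\left[ \left( \lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1 \right)^2 \right]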

The implementation follows PaddleCV/gan/trainer/StarGAN.py and the PyTorch StarGAN implementation.
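
A minimal sketch of one possible workaround, assuming hint 1 in the error message applies here: fake_image comes out of the generator, whose subgraph has typically already been consumed by the earlier discriminator backward pass, so detaching it before building x_hat should keep gp.backward() from reaching into a released graph. This is an untested guess based on the error hint, not a confirmed fix; d, real_image, fake_image, and gp_d_optimizer are the same objects as in the snippet above.

    import numpy as np
    import paddle.fluid as fluid

    # Cut x_hat off from the generator's graph (hint 1 in the error message).
    fake_image_d = fake_image.detach()

    alpha = fluid.dygraph.to_variable(
        np.random.rand(real_image.shape[0], 1, 1, 1).astype('float32'))
    x_hat = alpha * real_image + (1 - alpha) * fake_image_d
    # All inputs of x_hat are now non-trainable, so gradients w.r.t. x_hat
    # must be re-enabled explicitly before it is fed to the discriminator.
    x_hat.stop_gradient = False

    gp_fake = d(x_hat)
    weight = fluid.layers.ones([x_hat.shape[0], 1], dtype='float32')
    dydx = fluid.dygraph.grad(outputs=gp_fake,
                              inputs=x_hat,
                              grad_outputs=weight,
                              retain_graph=True,
                              create_graph=True,
                              only_inputs=True)[0]
    # Flatten to [batch, -1] and take the per-sample L2 norm of the gradient.
    dydx = fluid.layers.reshape(dydx, [dydx.shape[0], -1])
    dydx_l2norm = fluid.layers.sqrt(
        fluid.layers.reduce_sum(fluid.layers.square(dydx), dim=1) + 1e-16)
    gp = fluid.layers.reduce_mean(fluid.layers.square(dydx_l2norm - 1.0))

    gp.backward()
    gp_d_optimizer.minimize(gp)
    d.clear_gradients()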

Reference: paddlepaddle/Paddle#25882