PaddlePaddle / Paddle
Issue #23206
Opened March 25, 2020 by saxon_zh (Guest)

An error is raised partway through a training run on AIStudio.

Created by: ydchen9

EnforceNotMet                             Traceback (most recent call last)
<ipython-input-…> in <module>()
    563                       save_dirname=save_dirname,
    564                       model_filename=model_filename,
--> 565                       params_filename=params_filename)
    566 # batch_infer(save_dirname)
    567 # infer(

<ipython-input-…> in train(nn_type, save_dirname, model_filename, params_filename)
    222     # optimizer = fluid.optimizer.Adam(learning_rate=fluid.layers.piecewise_decay(boundaries=boundaries, values=values))
    223     optimizer = fluid.optimizer.Adam(learning_rate=0.001)
--> 224     optimizer.minimize(avg_loss)
    225
    226     # place = fluid.CPUPlace()

</opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/decorator.pyc:decorator-gen-142> in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)

/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/wrapped_decorator.pyc in __impl__(func, *args, **kwargs)
     23     def __impl__(func, *args, **kwargs):
     24         wrapped_func = decorator_func(func)
---> 25         return wrapped_func(*args, **kwargs)
     26
     27     return __impl__

/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/dygraph/base.pyc in __impl__(*args, **kwargs)
     85     def __impl__(*args, **kwargs):
     86         with _switch_tracer_mode_guard_(is_train=False):
---> 87             return func(*args, **kwargs)
     88
     89     return __impl__

/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/optimizer.pyc in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)
    592             startup_program=startup_program,
    593             parameter_list=parameter_list,
--> 594             no_grad_set=no_grad_set)
    595
    596         if grad_clip is not None and framework.in_dygraph_mode():

/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/optimizer.pyc in backward(self, loss, startup_program, parameter_list, no_grad_set, callbacks)
    491         with program_guard(program, startup_program):
    492             params_grads = append_backward(loss, parameter_list,
--> 493                                            no_grad_set, callbacks)
    494             # Note: since we can't use all_reduce_op now,
    495             #  dgc_op should be the last op of one grad.

/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/backward.pyc in append_backward(loss, parameter_list, no_grad_set, callbacks)
    569         grad_to_var,
    570         callbacks,
--> 571         input_grad_names_set=input_grad_names_set)
    572
    573     # Because calc_gradient may be called multiple times,

/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/backward.pyc in _append_backward_ops_(block, ops, target_block, no_grad_dict, grad_to_var, callbacks, input_grad_names_set)
    308         # Getting op's corresponding grad_op
    309         grad_op_desc, op_grad_to_var = core.get_grad_op_desc(
--> 310             op.desc, cpt.to_text(no_grad_dict[block.idx]), grad_sub_block_list)
    311
    312         # If input_grad_names_set is not None, extend grad_op_descs only when

EnforceNotMet: grad_op_maker_ should not be null
Operator GradOpMaker has not been registered.
  at [/paddle/paddle/fluid/framework/op_info.h:69]

PaddlePaddle Call Stacks:
0       0x7f47250eeff8p  void paddle::platform::EnforceNotMet::Init<std::string>(std::string, char const*, int) + 360
1       0x7f47250ef347p  paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int) + 87
2       0x7f47250f030cp  paddle::framework::OpInfo::GradOpMaker() const + 108
3       0x7f47250e78eep
4       0x7f4725121936p
5       0x7f47a5da9ea4p  PyEval_EvalFrameEx + 32020
6       0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
7       0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
8       0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
9       0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
10      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
11      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
12      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
13      0x7f47a5d34567p
14      0x7f47a5d0f973p  PyObject_Call + 67
15      0x7f47a5da469ep  PyEval_EvalFrameEx + 9486
16      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
17      0x7f47a5d34567p
18      0x7f47a5d0f973p  PyObject_Call + 67
19      0x7f47a5da469ep  PyEval_EvalFrameEx + 9486
20      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
21      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
22      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
23      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
24      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
25      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
26      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
27      0x7f47a5dab8eap  PyEval_EvalCode + 26
28      0x7f47a5da7cc0p  PyEval_EvalFrameEx + 23344
29      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
30      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
31      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
32      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
33      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
34      0x7f47a5d34567p
35      0x7f47a5d0f973p  PyObject_Call + 67
36      0x7f47a5da469ep  PyEval_EvalFrameEx + 9486
37      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
38      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
39      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
40      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
41      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
42      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
43      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
44      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
45      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
46      0x7f47a5d34567p
47      0x7f47a5d0f973p  PyObject_Call + 67
48      0x7f47a5da469ep  PyEval_EvalFrameEx + 9486
49      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
50      0x7f47a5d34567p
51      0x7f47a5d0f973p  PyObject_Call + 67
52      0x7f47a5da469ep  PyEval_EvalFrameEx + 9486
53      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
54      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
55      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
56      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
57      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
58      0x7f47a5d34567p
59      0x7f47a5d0f973p  PyObject_Call + 67
60      0x7f47a5da469ep  PyEval_EvalFrameEx + 9486
61      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
62      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
63      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
64      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
65      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
66      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
67      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
68      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
69      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
70      0x7f47a5dab8eap  PyEval_EvalCode + 26
71      0x7f47a5da7cc0p  PyEval_EvalFrameEx + 23344
72      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
73      0x7f47a5da8b98p  PyEval_EvalFrameEx + 27144
74      0x7f47a5dab6c9p  PyEval_EvalCodeEx + 2025
75      0x7f47a5d3447ap
76      0x7f47a5d0f973p  PyObject_Call + 67
77      0x7f47a5dd8320p
78      0x7f47a5dd89dep  Py_Main + 1374
79      0x7f47a4fa9830p  __libc_start_main + 240
80      0x562b7899707fp
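For context, the call pattern that triggers the failure, reconstructed from the user frames of the traceback above, is roughly the following. This is a minimal sketch for the Paddle 1.x / fluid static-graph API; only the Adam and minimize lines come from the report, and the network definition is a hypothetical stand-in for the user's model, which is not shown in the issue.

    # Minimal sketch (Paddle 1.x / fluid, static graph). The network below is a
    # hypothetical placeholder; only the last two lines are taken from the traceback.
    import paddle.fluid as fluid

    image = fluid.layers.data(name='image', shape=[784], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    prediction = fluid.layers.fc(input=image, size=10, act='softmax')
    loss = fluid.layers.cross_entropy(input=prediction, label=label)
    avg_loss = fluid.layers.mean(loss)

    # Lines 223-224 of the user's train(): the EnforceNotMet is raised inside
    # minimize(), while append_backward() looks up the GradOpMaker of each
    # forward op in the program.
    optimizer = fluid.optimizer.Adam(learning_rate=0.001)
    optimizer.minimize(avg_loss)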

Reference: paddlepaddle/Paddle#23206