PaddlePaddle / PaddleHub
Issue #183
Opened September 30, 2019 by saxon_zh (Guest)

Error when running the ERNIE binary classification model

Created by: buptzcW

The error is shown below. In addition, `hub.Module(name='ernie')` fails to load the model; after adding the `version` parameter the load error goes away, but the run then fails in the finetune_and_eval stage.
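An `EnforceNotMet` complaining that input `ShapeTensor` cannot be found in op `reshape2` during `append_backward` is usually a symptom of mismatched `paddlepaddle` and `paddlehub` versions (the `reshape2` op gained a `ShapeTensor` input in later framework releases, so an older framework cannot build the grad op that a newer PaddleHub program emits). This is a diagnosis, not something stated in the report. A minimal, stdlib-only sketch of a version guard one might run before finetuning; the minimum versions shown are illustrative assumptions, not official compatibility requirements:

```python
def parse_version(v):
    """Parse a dotted version string like '1.5.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

def check_compat(paddle_version, hub_version,
                 min_paddle=(1, 5, 0), min_hub=(1, 1, 0)):
    """Return a list of human-readable problems; an empty list means OK.

    The minimum versions are illustrative assumptions only.
    """
    problems = []
    if parse_version(paddle_version) < min_paddle:
        problems.append("paddlepaddle %s is older than %s" %
                        (paddle_version, ".".join(map(str, min_paddle))))
    if parse_version(hub_version) < min_hub:
        problems.append("paddlehub %s is older than %s" %
                        (hub_version, ".".join(map(str, min_hub))))
    return problems

# Example: an old framework paired with a newer PaddleHub is flagged.
print(check_compat("1.4.1", "1.1.2"))
```

In this situation, upgrading `paddlepaddle` and `paddlehub` to a matching release pair (or pinning both to versions released together) is the usual fix.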

---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
<ipython-input> in <module>
----> 1 cls_task.finetune_and_eval()

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in finetune_and_eval(self)
--> 506         return self.finetune(do_eval=True)

.../paddlehub/finetune/task.py in finetune(self, do_eval)
--> 511             self.init_if_necessary()

.../paddlehub/finetune/task.py in init_if_necessary(self)
--> 168             if not self.load_checkpoint():

.../paddlehub/finetune/task.py in load_checkpoint(self)
--> 489             main_program=self.main_program)

.../paddlehub/finetune/task.py in main_program(self)
--> 333             self._build_env()

.../paddlehub/finetune/task.py in _build_env(self)
--> 246                     self.loss, self._base_data_reader, self.config)

.../paddlehub/finetune/strategy.py in execute(self, loss, data_reader, config)
--> 134                 main_program, self.weight_decay, self.lr_scheduler)

.../paddlehub/finetune/optimization.py in adam_weight_decay_optimization(loss, warmup_steps, num_train_steps, learning_rate, main_program, weight_decay, scheduler)
--> 79     _, param_grads = optimizer.minimize(loss)

.../paddle/fluid/wrapped_decorator.py in impl(func, args, kwargs)
--> 25         return wrapped_func(args, *kwargs)

.../paddle/fluid/dygraph/base.py in impl(args, **kwargs)
--> 87             return func(args, kwargs)

.../paddle/fluid/optimizer.py in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)
--> 594             no_grad_set=no_grad_set)

.../paddle/fluid/optimizer.py in backward(self, loss, startup_program, parameter_list, no_grad_set, callbacks)
--> 493                                           no_grad_set, callbacks)

.../paddle/fluid/backward.py in append_backward(loss, parameter_list, no_grad_set, callbacks)
--> 571         input_grad_names_set=input_grad_names_set)

.../paddle/fluid/backward.py in append_backward_ops(block, ops, target_block, no_grad_dict, grad_to_var, callbacks, input_grad_names_set)
--> 310             op.desc, cpt.to_text(no_grad_dict[block.idx]), grad_sub_block_list)

EnforceNotMet: Input ShapeTensor cannot be found in Op reshape2 at [/paddle/paddle/fluid/framework/op_desc.cc:306]

PaddlePaddle Call Stacks:
0  paddle::platform::EnforceNotMet::Init<char const*>(char const*, char const*, int)
1  paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2  paddle::framework::OpDesc::Input(std::string const&) const
3  paddle::framework::details::OpInfoFiller<paddle::operators::Reshape2GradMaker, ...>::operator()(...)
4  std::_Function_handler<...Reshape2GradMaker...>::_M_invoke(...)
(remaining frames are CPython interpreter calls: PyCFunction_Call, PyEval_EvalFrameEx, PyObject_Call, PyEval_EvalCodeEx, ...)

Reference: paddlepaddle/PaddleHub#183