PaddlePaddle / Paddle
Issue #20270
Opened October 09, 2019 by saxon_zh (Guest)

Error when calling cls_task.finetune_and_eval() in AI Studio

Created by: ghm

Hardware: GPU: V100 (16 GB VRAM), CPU: 8 cores, RAM: 32 GB
Environment: Python 3.5, PaddlePaddle 1.5.1

Code:

```python
from paddle.fluid.framework import switch_main_program
import paddlehub as hub
import paddle.fluid as fluid

module = hub.Module(name="ernie", version='1.0.2')
dataset = hub.dataset.ChnSentiCorp()

reader = hub.reader.ClassifyReader(
    dataset=dataset,
    vocab_path=module.get_vocab_path(),
    max_seq_len=128)

strategy = hub.AdamWeightDecayStrategy(
    weight_decay=0.01,
    warmup_proportion=0.1,
    learning_rate=5e-5,
    lr_scheduler="linear_decay",
    optimizer_name="adam")

config = hub.RunConfig(
    use_cuda=True,
    num_epoch=1,
    checkpoint_dir="ernie_txt_cls_turtorial_demo",
    batch_size=32,
    log_interval=10,
    eval_interval=50,
    strategy=strategy)

inputs, outputs, program = module.context(
    trainable=True, max_seq_len=128)

# Use "pooled_output" for classification tasks on an entire sentence.
pooled_output = outputs["pooled_output"]

feed_list = [
    inputs["input_ids"].name,
    inputs["position_ids"].name,
    inputs["segment_ids"].name,
    inputs["input_mask"].name,
]

cls_task = hub.TextClassifierTask(
    data_reader=reader,
    feature=pooled_output,
    feed_list=feed_list,
    num_classes=dataset.num_labels,
    config=config)

cls_task.finetune_and_eval()  # this line raises the error
```

Error message:

```
---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 cls_task.finetune_and_eval()

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in finetune_and_eval(self)
--> 506         return self.finetune(do_eval=True)

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in finetune(self, do_eval)
--> 511             self.init_if_necessary()

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in init_if_necessary(self)
--> 168             if not self.load_checkpoint():

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in load_checkpoint(self)
--> 489             main_program=self.main_program)

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in main_program(self)
--> 333             self._build_env()

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in _build_env(self)
--> 246                     self.loss, self._base_data_reader, self.config)

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/strategy.py in execute(self, loss, data_reader, config)
--> 134                 main_program, self.weight_decay, self.lr_scheduler)

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/optimization.py in adam_weight_decay_optimization(loss, warmup_steps, num_train_steps, learning_rate, main_program, weight_decay, scheduler)
---> 79     _, param_grads = optimizer.minimize(loss)

</opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/decorator.py:decorator-gen-144> in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/wrapped_decorator.py in impl(func, *args, **kwargs)
---> 25         return wrapped_func(*args, **kwargs)

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/dygraph/base.py in impl(*args, **kwargs)
---> 87             return func(*args, **kwargs)

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)
--> 594             no_grad_set=no_grad_set)

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py in backward(self, loss, startup_program, parameter_list, no_grad_set, callbacks)
--> 493                                            no_grad_set, callbacks)

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/backward.py in append_backward(loss, parameter_list, no_grad_set, callbacks)
--> 571         input_grad_names_set=input_grad_names_set)

/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/backward.py in append_backward_ops(block, ops, target_block, no_grad_dict, grad_to_var, callbacks, input_grad_names_set)
--> 310             op.desc, cpt.to_text(no_grad_dict[block.idx]), grad_sub_block_list)

EnforceNotMet: Input ShapeTensor cannot be found in Op reshape2 at
[/paddle/paddle/fluid/framework/op_desc.cc:306]

PaddlePaddle Call Stacks:
0  paddle::platform::EnforceNotMet::Init<char const*>(...)
1  paddle::platform::EnforceNotMet::EnforceNotMet(...)
2  paddle::framework::OpDesc::Input(std::string const&) const
3  paddle::framework::details::OpInfoFiller<paddle::operators::Reshape2GradMaker, ...>::operator()(...)
4  std::_Function_handler<... Reshape2GradMaker ...>::_M_invoke(...)
5-99  [Python interpreter frames: PyEval_EvalFrameEx, PyObject_Call,
      PyCFunction_Call, PyEval_EvalCodeEx, _PyGen_Send, ...]
```
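A minimal diagnostic sketch, under an unconfirmed assumption: errors of the form "Input X cannot be found in Op Y" while building the backward pass are often a symptom of a mismatch between the installed PaddlePaddle/PaddleHub versions and the version that serialized the module's program, or of a stale checkpoint directory left by an earlier incompatible run. The helper names below (`report_version`, `clear_stale_checkpoint`) are illustrative, not part of PaddleHub's API:

```python
# Hedged diagnostic sketch: report installed framework versions and
# remove a possibly stale checkpoint dir before retrying finetune.
# Assumption (not a confirmed root cause): a version mismatch or an
# old checkpoint triggers the reshape2/ShapeTensor error.
import importlib
import os
import shutil


def report_version(pkg_name):
    """Return '<pkg>: <version>', or a note when the package is absent."""
    try:
        mod = importlib.import_module(pkg_name)
        return "%s: %s" % (pkg_name, getattr(mod, "__version__", "unknown"))
    except ImportError:
        return "%s: not installed" % pkg_name


def clear_stale_checkpoint(ckpt_dir):
    """Delete a leftover checkpoint directory so the next run starts
    fresh. Returns True if a directory was removed, False otherwise."""
    if os.path.isdir(ckpt_dir):
        shutil.rmtree(ckpt_dir)
        return True
    return False


if __name__ == "__main__":
    print(report_version("paddle"))
    print(report_version("paddlehub"))
    # checkpoint_dir from the script above
    print(clear_stale_checkpoint("ernie_txt_cls_turtorial_demo"))
```

If the reported versions disagree with what the ernie module expects, upgrading/downgrading so they match, or clearing the checkpoint directory and re-running, would be the first things to try.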

Assignee: none
Milestone: none
Reference: paddlepaddle/Paddle#20270