Paddle 1.6: InferShape error when running the BERT model on CPU
Created by: Karbit
Environment and model information: PaddlePaddle 1.6 (installed via pip), Paddle Models release 1.6, https://github.com/PaddlePaddle/models/blob/release/1.6/PaddleNLP/PaddleLARK/BERT/
To reproduce: set use_cuda to false in train.sh, then run ./train.sh -local y.
Error message:
------------------------------------------------
Traceback (most recent call last):
File "./train.py", line 439, in <module>
train(args)
File "./train.py", line 232, in train
bert_config=bert_config)
File "./train.py", line 124, in create_model
use_fp16=args.use_fp16)
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/bert.py", line 81, in __init__
self._build_model(src_ids, position_ids, sentence_ids, input_mask)
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/bert.py", line 139, in _build_m
name='encoder')
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py", line 3
name=name + '_layer_' + str(i))
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py", line 2
name=name + '_multi_head_att')
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py", line 1
dropout_rate)
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py", line 1
out = layers.matmul(weights, v)
File "/home/liujiaqiang/.jumbo/lib/python3.6/site-packages/paddle/fluid/layers/nn.py", line 6974, in matmul
'alpha': float(alpha),
File "/home/liujiaqiang/.jumbo/lib/python3.6/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)
File "/home/liujiaqiang/.jumbo/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2426, in append_op
attrs=kwargs.get("attrs", None))
File "/home/liujiaqiang/.jumbo/lib/python3.6/site-packages/paddle/fluid/framework.py", line 1809, in __init__
self.desc.infer_shape(self.block.desc)
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2 paddle::operators::MatMulOp::InferShape(paddle::framework::InferShapeContext*) const
3 paddle::framework::OpDesc::InferShape(paddle::framework::BlockDesc const&) const
------------------------------------------
Python Call Stacks (More useful to users):
------------------------------------------
File "/home/liujiaqiang/.jumbo/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2426, in append_op
attrs=kwargs.get("attrs", None))
File "/home/liujiaqiang/.jumbo/lib/python3.6/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)
File "/home/liujiaqiang/.jumbo/lib/python3.6/site-packages/paddle/fluid/layers/nn.py", line 6974, in matmul
'alpha': float(alpha),
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py", line 1
out = layers.matmul(weights, v)
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py", line 1
dropout_rate)
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py", line 2
name=name + '_multi_head_att')
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/transformer_encoder.py", line 3
name=name + '_layer_' + str(i))
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/bert.py", line 139, in _build_m
name='encoder')
File "/home/liujiaqiang/workdir/paddle-models/models-release-1.6/PaddleNLP/PaddleLARK/BERT/model/bert.py", line 81, in __init__
self._build_model(src_ids, position_ids, sentence_ids, input_mask)
File "./train.py", line 124, in create_model
use_fp16=args.use_fp16)
File "./train.py", line 232, in train
bert_config=bert_config)
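
For reference, fluid.layers.matmul appends a MatMul op and runs MatMulOp::InferShape while the graph is still being built (inside append_op), so a shape mismatch surfaces as EnforceNotMet before any data is fed or an executor is involved. Below is a minimal sketch with made-up shapes (not the BERT model's actual ones) that hits the same code path; it is only meant to illustrate the failure mode seen in the traceback above.

# Minimal sketch with hypothetical shapes; illustrates that matmul's shape
# check runs at graph-construction time, as in the traceback above.
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[16, 32], dtype='float32')  # becomes [-1, 16, 32]
y = fluid.layers.data(name='y', shape=[64, 16], dtype='float32')  # becomes [-1, 64, 16]
# The inner dimensions (32 vs. 64) do not match, so MatMulOp::InferShape
# raises EnforceNotMet on this line, before any feed data exists.
out = fluid.layers.matmul(x, y)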