How can PaddleHub BERT's sequence_output (a Variable / LoD-Tensor) be converted to lod_level=1 so it can feed a dynamic_lstm?
Created by: wxl1351641822
I considered fluid.layers.lod_append, but it does not change my seq_output. I then tried lod_reset: it works on an ordinary Variable, but it does not work on sequence_output. set_lod does not help either.
```python
import paddle.fluid as fluid

seq_feature = fluid.layers.lod_reset(seq_output, target_lod=[1] * 128)
print(seq_feature)
```

Output:

```
name: "@HUB_bert_chinese_L-12_H-768_A-12@@HUB_bert_chinese_L-12_H-768_A-12@layer_norm_24.tmp_2"
type {
  type: LOD_TENSOR
  lod_tensor {
    tensor {
      data_type: FP32
      dims: -1
      dims: 128
      dims: 768
    }
    lod_level: 0
  }
}
persistable: false
```
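For reference, lod_reset can also take its target LoD from a second Variable y instead of the compile-time target_lod attribute, so the offsets can be supplied at run time. A minimal sketch, assuming a hypothetical `y_ref` placeholder that is not in the original code:

```python
import paddle.fluid as fluid

# hypothetical: a lod_level=1 placeholder whose only job is to carry
# the LoD that seq_output should adopt; its data values are ignored
y_ref = fluid.layers.data(name='y_ref', shape=[1],
                          dtype='float32', lod_level=1)
# the run-time output adopts y_ref's first-level LoD
seq_feature = fluid.layers.lod_reset(x=seq_output, y=y_ref)
```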
persistable: false
By contrast, the same call works on a plain data Variable:

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x1', shape=[128, 768], lod_level=0)
print(x)
out = fluid.layers.lod_reset(x, target_lod=[128])
print(out)
```

Output for `x` (lod_level 0), then `out` (lod_level 1):

```
name: "x1"
type {
  type: LOD_TENSOR
  lod_tensor {
    tensor {
      data_type: FP32
      dims: -1
      dims: 128
      dims: 768
    }
    lod_level: 0
  }
}
persistable: false

name: "lod_reset_1.tmp_0"
type {
  type: LOD_TENSOR
  lod_tensor {
    tensor {
      data_type: FP32
      dims: -1
      dims: 128
      dims: 768
    }
    lod_level: 1
  }
}
persistable: false
```
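Note that the lod_level: 1 printed above is only the compile-time annotation; the target LoD itself is validated when the operator actually runs. A minimal run-time check, a sketch assuming a random feed of shape (1, 128, 768):

```python
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

feed_x = np.random.rand(1, 128, 768).astype('float32')
# target_lod=[128] has a single entry; the error hint below suggests the
# lod_reset op treats target_lod as an offset-style LoD and requires at
# least two entries (e.g. [0, 128]), so this run should reproduce the
# "Size of target LoD" error reported at the end of this issue
res, = exe.run(fluid.default_main_program(),
               feed={'x1': feed_x},
               fetch_list=[out],
               return_numpy=False)
print(res.lod())
```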
Here is the code I am trying to implement:

```python
import paddlehub as hub
import paddle.fluid as fluid

max_seq_len = 128
# dataset = MyDataset()
module = hub.Module(name="bert_chinese_L-12_H-768_A-12")
inputs, outputs, program = module.context(
    trainable=True, max_seq_len=max_seq_len)
print(outputs)

# Use "pooled_output" for classification tasks on an entire sentence.
pooled_output = outputs["pooled_output"]
seq_output = outputs['sequence_output']
print(seq_output)

hidden_units = [128]
cls_feats = fluid.layers.dropout(
    x=seq_output,
    dropout_prob=0,
    dropout_implementation="upscale_in_train")
print(cls_feats)

if hidden_units is not None:
    for n_hidden in hidden_units:
        # cls_feats = fluid.layers.fc(
        #     input=cls_feats, size=n_hidden, act=None)
        # print(cls_feats)
        cls_feats = fluid.layers.fc(input=cls_feats, size=n_hidden,
                                    act=None, bias_attr=None)
        print(cls_feats)
        lstm_h, c = fluid.layers.dynamic_lstm(
            input=cls_feats, size=n_hidden, is_reverse=False)
        print(lstm_h)
        # max pooling
        lstm_max = fluid.layers.sequence_pool(input=lstm_h, pool_type='max')
        print(lstm_max)
        # activation
        cls_feats = fluid.layers.tanh(lstm_max)
        print(cls_feats)

logits = fluid.layers.fc(
    input=cls_feats,
    size=3,
    param_attr=fluid.ParamAttr(
        name="cls_out_w",
        initializer=fluid.initializer.TruncatedNormal(scale=0.02)),
    bias_attr=fluid.ParamAttr(
        name="cls_out_b", initializer=fluid.initializer.Constant(0.)),
    act="softmax")
ret_infers = fluid.layers.reshape(
    x=fluid.layers.argmax(logits, axis=1), shape=[-1, 1])
```
This is the error I get:

```
----------------------
Error Message Summary:
----------------------
Error: Size of target LoD should be greater than 1.
  [Hint: Expected level0.size() > 1UL, but received level0.size():1 <= 1UL:1.] at (/paddle/paddle/fluid/operators/lod_reset_op.h:60)
  [operator < lod_reset > error]
```
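The hint pins down the problem: the op expects target_lod to be an offset-style LoD, i.e. an ascending list that starts at 0 and has at least two entries, with the last value equal to the number of rows in the (2-D) input. A minimal sketch of what a legal conversion might look like, assuming every example keeps the full max_seq_len=128 tokens; `batch_size` is a hypothetical stand-in here, since an attribute LoD has to be fixed at build time:

```python
import paddle.fluid as fluid

batch_size = 4  # hypothetical: must be known, because target_lod is static

# dynamic_lstm consumes a 2-D LoDTensor, so flatten the padded
# [batch, 128, 768] sequence_output to [batch * 128, 768] first
flat = fluid.layers.reshape(seq_output, shape=[-1, 768])

# offsets [0, 128, 256, ...] mark one length-128 sequence per example
target_lod = [i * 128 for i in range(batch_size + 1)]
seq_feature = fluid.layers.lod_reset(flat, target_lod=target_lod)
```

For a variable batch size, the y-Variable form sketched earlier avoids baking the offsets into the graph.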