PaddlePaddle / Paddle · Issue #23447

Closed
Opened April 03, 2020 by saxon_zh (Guest)

EnforceNotMet [Hint: Expected dim_a.width_ == dim_b.height_, but received dim_a.width_:2304 != dim_b.height_:9216.] at (/paddle/paddle/fluid/operators/math/blas_impl.h:747)

Created by: smartcai

1) PaddlePaddle version: paddlepaddle 1.7.0 + paddlehub 1.5.0
2) OS: Linux jupyter-262348-365692 4.4.0-166-generic #195-Ubuntu SMP Tue Oct 1 09:35:25 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
3) CPU: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
4) GPU: NA
5) Python environment: Python 3.7.4

Code:

# Define the network structure
class MyDNN(fluid.dygraph.Layer):
    def __init__(self, name_scope, num_classes=10):
        super(MyDNN, self).__init__(name_scope)
        name_scope = self.full_name()
        self.conv1 = Conv2D(num_channels=3, num_filters=96, filter_size=11, stride=4, padding=5, act='relu')
        self.pool1 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.conv2 = Conv2D(num_channels=96, num_filters=256, filter_size=5, stride=1, padding=2, act='relu')
        self.pool2 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.conv3 = Conv2D(num_channels=256, num_filters=384, filter_size=3, stride=1, padding=1, act='relu')
        self.conv4 = Conv2D(num_channels=384, num_filters=384, filter_size=3, stride=1, padding=1, act='relu')
        self.conv5 = Conv2D(num_channels=384, num_filters=256, filter_size=3, stride=1, padding=1, act='relu')
        self.pool5 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.fc1 = Linear(input_dim=9216, output_dim=4096, act='relu')
        self.drop_ratio1 = 0.5
        self.fc2 = Linear(input_dim=4096, output_dim=4096, act='relu')
        self.drop_ratio2 = 0.5
        self.fc3 = Linear(input_dim=4096, output_dim=num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.pool2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.pool5(x)
        x = fluid.layers.reshape(x, [x.shape[0], -1])
        x = self.fc1(x)
        x = fluid.layers.dropout(x, self.drop_ratio1)
        x = self.fc2(x)
        x = fluid.layers.dropout(x, self.drop_ratio2)
        x = self.fc3(x)
        return x
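
For context on the numbers in the error below: with the 3x100x100 inputs used in the training code, this conv/pool stack shrinks each image to a 256x3x3 feature map, so the reshape hands fc1 a width of 256*3*3 = 2304 rather than 9216. A minimal shape-probe sketch, assuming MyDNN and its imports are defined as above (the zero-filled dummy batch is illustrative, not part of the original report):

    import numpy as np
    import paddle.fluid as fluid

    # Illustrative shape probe: push one dummy batch through the
    # convolution/pooling layers only and report the width fc1 would receive.
    with fluid.dygraph.guard():
        net = MyDNN('Alexnet11')
        dummy = fluid.dygraph.to_variable(np.zeros((1, 3, 100, 100), dtype='float32'))
        x = net.pool1(net.conv1(dummy))                     # [1, 96, 12, 12]
        x = net.pool2(net.conv2(x))                         # [1, 256, 6, 6]
        x = net.pool5(net.conv5(net.conv4(net.conv3(x))))   # [1, 256, 3, 3]
        print(x.shape, 256 * 3 * 3)                         # 2304 flattened features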

# Train with dynamic graph (dygraph) mode
with fluid.dygraph.guard():
    model = MyDNN('Alexnet11')  # instantiate the model
    model.train()               # training mode
    opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9,
                                   parameter_list=model.parameters())

    epochs_num = 100  # number of epochs

    for pass_num in range(epochs_num):

        for batch_id, data in enumerate(train_reader()):

            images = np.array([x[0].reshape(3, 100, 100) for x in data], np.float32)

            labels = np.array([x[1] for x in data]).astype('int64')
            labels = labels[:, np.newaxis]
            # print(images.shape)
            image = fluid.dygraph.to_variable(images)
            label = fluid.dygraph.to_variable(labels)
            predict = model(image)  # prediction
            # print(predict)
            loss = fluid.layers.cross_entropy(predict, label)
            avg_loss = fluid.layers.mean(loss)  # get the loss value

            acc = fluid.layers.accuracy(predict, label)  # compute accuracy

            if batch_id != 0 and batch_id % 50 == 0:
                print("train_pass:{},batch_id:{},train_loss:{},train_acc:{}".format(pass_num, batch_id, avg_loss.numpy(), acc.numpy()))

            avg_loss.backward()
            opt.minimize(avg_loss)
            model.clear_gradients()

    fluid.save_dygraph(model.state_dict(), 'MyDNN')  # save the model
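
Since train_reader is not included in the report, here is a hedged, self-contained repro sketch (random data, hypothetical batch size of 8) that hits the same EnforceNotMet without the dataset or the full training loop:

    import numpy as np
    import paddle.fluid as fluid

    # Hypothetical minimal repro: a single random batch shaped like the
    # training data (N, 3, 100, 100); the forward pass fails at fc1.
    with fluid.dygraph.guard():
        model = MyDNN('Alexnet11')
        images = np.random.rand(8, 3, 100, 100).astype('float32')
        predict = model(fluid.dygraph.to_variable(images))
        # Raises EnforceNotMet: the reshape before fc1 yields width 2304,
        # but fc1 was declared with input_dim=9216.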

Error:

---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
 in
     18 image=fluid.dygraph.to_variable(images)
     19 label=fluid.dygraph.to_variable(labels)
---> 20 predict=model(image)#预测
     21 # print(predict)
     22 loss=fluid.layers.cross_entropy(predict,label)

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py in __call__(self, *inputs, **kwargs)
    302             self._built = True
    303 
--> 304         outputs = self.forward(*inputs, **kwargs)
    305         return outputs
    306 

 in forward(self, x)
     29     x = self.pool5(x)
     30     x = fluid.layers.reshape(x, [x.shape[0], -1])
---> 31     x = self.fc1(x)
     32     x= fluid.layers.dropout(x, self.drop_ratio1)
     33     x = self.fc2(x)

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py in __call__(self, *inputs, **kwargs)
    302             self._built = True
    303 
--> 304         outputs = self.forward(*inputs, **kwargs)
    305         return outputs
    306 

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/nn.py in forward(self, input)
    930 
    931         if in_dygraph_mode():
--> 932             outs = core.ops.matmul(inputs, attrs)
    933             pre_bias = outs['Out'][0]
    934 

EnforceNotMet:


C++ Call Stacks (More useful to developers):

0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int) 1 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int) 2 void paddle::operators::math::Blaspaddle::platform::CPUDeviceContext::MatMul(paddle::framework::Tensor const&, paddle::operators::math::MatDescriptor const&, paddle::framework::Tensor const&, paddle::operators::math::MatDescriptor const&, float, paddle::framework::Tensor*, float) const 3 paddle::operators::MatMulKernel<paddle::platform::CPUDeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const 4 std::Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::MatMulKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::MatMulKernel<paddle::platform::CPUDeviceContext, double> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1 (closed)}>::M_invoke(std::Any_data const&, paddle::framework::ExecutionContext const&) 5 paddle::imperative::PreparedOp::Run(std::map<std::string, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > >, std::lessstd::string, std::allocator<std::pair<std::string const, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > > > > > const*, std::map<std::string, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > >, std::lessstd::string, std::allocator<std::pair<std::string const, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > > > > > const*, std::unordered_map<std::string, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator >, std::vector<float, std::allocator >, std::vector<std::string, std::allocatorstd::string >, bool, std::vector<bool, std::allocator >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocatorpaddle::framework::BlockDesc* >, std::vector<long, std::allocator >, boost::detail::variant::void, boost::detail::variant::void, boost::detail::variant::void, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::hashstd::string, std::equal_tostd::string, std::allocator<std::pair<std::string const, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator >, std::vector<float, std::allocator >, std::vector<std::string, std::allocatorstd::string >, bool, std::vector<bool, std::allocator >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocatorpaddle::framework::BlockDesc* >, std::vector<long, std::allocator >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> > > > const*) 6 paddle::imperative::OpBase::Run(std::map<std::string, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > >, std::lessstd::string, std::allocator<std::pair<std::string const, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > > > > > const&, 
std::map<std::string, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > >, std::lessstd::string, std::allocator<std::pair<std::string const, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > > > > > const&) 7 paddle::imperative::Tracer::TraceOp(std::string const&, std::map<std::string, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > >, std::lessstd::string, std::allocator<std::pair<std::string const, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > > > > > const&, std::map<std::string, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > >, std::lessstd::string, std::allocator<std::pair<std::string const, std::vector<std::shared_ptrpaddle::imperative::VarBase, std::allocator<std::shared_ptrpaddle::imperative::VarBase > > > > > const&, std::unordered_map<std::string, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator >, std::vector<float, std::allocator >, std::vector<std::string, std::allocatorstd::string >, bool, std::vector<bool, std::allocator >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocatorpaddle::framework::BlockDesc* >, std::vector<long, std::allocator >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::hashstd::string, std::equal_tostd::string, std::allocator<std::pair<std::string const, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator >, std::vector<float, std::allocator >, std::vector<std::string, std::allocatorstd::string >, bool, std::vector<bool, std::allocator >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocatorpaddle::framework::BlockDesc* >, std::vector<long, std::allocator >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> > > >)


Error Message Summary:

Error: An error occurred here. There is no accurate error hint for this error yet. We are continuously in the process of increasing hint for this kind of error check. It would be helpful if you could inform us of how this conversion went by opening a github issue. And we will resolve it with high priority.

  • New issue link: https://github.com/PaddlePaddle/Paddle/issues/new
  • Recommended issue content: all error stack information [Hint: Expected dim_a.width_ == dim_b.height_, but received dim_a.width_:2304 != dim_b.height_:9216.] at (/paddle/paddle/fluid/operators/math/blas_impl.h:747)
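
The numbers in the hint match the model as defined: for 3x100x100 inputs the conv/pool stack ends in a 256x3x3 feature map, a flattened width of 2304, while fc1 expects 9216 (256x6x6, the figure this same stack would produce for inputs around 200x200). A likely fix, assuming the 100x100 input size is intended, is to size fc1 to the actual flattened width:

    # Assuming 3x100x100 inputs: 256 channels * 3 * 3 spatial positions = 2304
    self.fc1 = Linear(input_dim=2304, output_dim=4096, act='relu')

Alternatively, keep input_dim=9216 and enlarge the input images so the pool5 output really flattens to 9216; either way, Linear's input_dim must equal the product of the pool5 output dimensions.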