PaddlePaddle / Paddle · Issue #19492
Closed
Opened August 28, 2019 by saxon_zh (Guest)

Error when training the official DeepASR model on CPU

Created by: gofreelee

  • Version and environment info:
    1) PaddlePaddle version: paddle 1.5.1
    2) CPU: 4× Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
    4) System environment: Ubuntu 16.04.6 LTS, Python 2.7.16

  • Training info:
    1) Single machine

  • Reproduction info: I used the official DeepASR model. In train.py I changed the default device from "GPU" to "CPU" (around line 77) and changed the default of pass_num from 100 to 1 (around line 66). Then I ran `cd examples/aishell` and `sh train.sh`.
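Since switching to CPU makes training compete for host RAM instead of GPU memory, it can help to check available memory before launching. A minimal diagnostic sketch (my own addition, not part of the DeepASR scripts; the `/proc/meminfo` format is Linux-specific):

```python
# Diagnostic sketch: parse the MemAvailable field from /proc/meminfo-style
# text to see how much host RAM is left for CPU training.
def parse_mem_available_kib(meminfo_text):
    """Return the MemAvailable value in KiB, or None if the field is absent."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])
    return None

# On Linux: parse_mem_available_kib(open("/proc/meminfo").read())
```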

  • Problem description:

```
share_vars_from is set, scope is ignored.
I0828 16:23:43.375494 24403 parallel_executor.cc:329] The number of CPUPlace, which is used in ParallelExecutor, is 1. And the Program will be copied 1 copies
I0828 16:23:43.376402 24403 build_strategy.cc:340] SeqOnlyAllReduceOps:0, num_trainers:1
Process SyncManager-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 267, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python2.7/multiprocessing/managers.py", line 558, in _run_server
    server.serve_forever()
  File "/usr/lib/python2.7/multiprocessing/managers.py", line 184, in serve_forever
    t.start()
  File "/usr/lib/python2.7/threading.py", line 736, in start
    _start_new_thread(self.__bootstrap, ())
error: can't start new thread
Traceback (most recent call last):
  File "../../train.py", line 372, in <module>
    train(args)
  File "../../train.py", line 304, in train
    return_numpy=False)
  File "/home/handsomelee/.local/lib/python2.7/site-packages/paddle/fluid/parallel_executor.py", line 280, in run
    return_numpy=return_numpy)
  File "/home/handsomelee/.local/lib/python2.7/site-packages/paddle/fluid/executor.py", line 666, in run
    return_numpy=return_numpy)
  File "/home/handsomelee/.local/lib/python2.7/site-packages/paddle/fluid/executor.py", line 528, in _run_parallel
    exe.run(fetch_var_names, fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet: Invoke operator relu_grad error.
Python Callstacks:
  File "/home/handsomelee/.local/lib/python2.7/site-packages/paddle/fluid/framework.py", line 1771, in append_op
    attrs=kwargs.get("attrs", None))
  File "/home/handsomelee/.local/lib/python2.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
    return self.main_program.current_block().append_op(*args, **kwargs)
  File "/home/handsomelee/.local/lib/python2.7/site-packages/paddle/fluid/layer_helper.py", line 159, in append_activation
    attrs=act)
  File "/home/handsomelee/.local/lib/python2.7/site-packages/paddle/fluid/layers/nn.py", line 2186, in conv2d
    return helper.append_activation(pre_act)
  File "/home/handsomelee/code/python/DeepASR/model_utils/model.py", line 38, in stacked_lstmp_model
    act="relu")
  File "../../train.py", line 168, in train
    class_num=args.class_num)
  File "../../train.py", line 372, in <module>
    train(args)
C++ Callstacks:
Enforce failed. Expected posix_memalign(&p, alignment, size) == 0, but received posix_memalign(&p, alignment, size):12 != 0:0.
Alloc 811347968 error!
at [/paddle/paddle/fluid/memory/detail/system_allocator.cc:57]
PaddlePaddle Call Stacks:
0   0x7f76ef242bf8p void paddle::platform::EnforceNotMet::Init<std::string>(std::string, char const*, int) + 360
1   0x7f76f060e417p paddle::memory::detail::AlignedMalloc(unsigned long) + 295
2   0x7f76f060e6d7p paddle::memory::detail::CPUAllocator::Alloc(unsigned long, unsigned long) + 39
3   0x7f76f060b1c9p paddle::memory::detail::BuddyAllocator::SystemAlloc(unsigned long) + 57
4   0x7f76f060c17cp paddle::memory::detail::BuddyAllocator::Alloc(unsigned long) + 332
5   0x7f76f05ecfe5p void paddle::memory::legacy::Alloc<paddle::platform::CPUPlace>(paddle::platform::CPUPlace const&, unsigned long) + 181
6   0x7f76f05edd15p paddle::memory::allocation::LegacyAllocator::AllocateImpl(unsigned long) + 405
7   0x7f76f05e9613p paddle::memory::allocation::AllocatorFacade::Alloc(boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&, unsigned long) + 227
8   0x7f76f05e980ep paddle::memory::allocation::AllocatorFacade::AllocShared(boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&, unsigned long) + 30
9   0x7f76f05cd74cp paddle::memory::AllocShared(boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&, unsigned long) + 44
10  0x7f76f05c6634p paddle::framework::Tensor::mutable_data(boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, paddle::framework::proto::VarType_Type, unsigned long) + 148
11  0x7f76efaa9cd1p
12  0x7f76efaf2795p paddle::operators::ActivationGradKernel<paddle::platform::CPUDeviceContext, paddle::operators::ReluGradFunctor >::Compute(paddle::framework::ExecutionContext const&) const + 133
13  0x7f76efaf28d3p std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::ActivationGradKernel<paddle::platform::CPUDeviceContext, paddle::operators::ReluGradFunctor >, paddle::operators::ActivationGradKernel<paddle::platform::CPUDeviceContext, paddle::operators::ReluGradFunctor > >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&) + 35
14  0x7f76f058b627p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&, paddle::framework::RuntimeContext*) const + 375
15  0x7f76f058bd91p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) const + 529
16  0x7f76f0589c3bp paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) + 267
17  0x7f76f037fba3p
18  0x7f76f037f83cp paddle::framework::details::ComputationOpHandle::RunImpl() + 124
19  0x7f76f03731fcp paddle::framework::details::OpHandleBase::Run(bool) + 28
20  0x7f76f0357756p paddle::framework::details::FastThreadedSSAGraphExecutor::RunOpSync(paddle::framework::details::OpHandleBase*) + 310
21  0x7f76f03564bfp paddle::framework::details::FastThreadedSSAGraphExecutor::RunOp(paddle::framework::details::OpHandleBase*, std::shared_ptr<paddle::framework::BlockingQueue > const&, unsigned long*) + 47
22  0x7f76f035687fp
23  0x7f76ef455183p std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>, void> >::_M_invoke(std::_Any_data const&) + 35
24  0x7f76ef304217p std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&) + 39
25  0x7f771af3ba99p
26  0x7f76f0352122p
27  0x7f76ef3057d4p ThreadPool::ThreadPool(unsigned long)::{lambda()#1}::operator()() const + 404
28  0x7f770210dc80p
29  0x7f771af346bap
30  0x7f771ac6a41dp clone + 109

Segmentation fault (core dumped)
```
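The decisive lines in the C++ stack are the posix_memalign call returning 12 together with "Alloc 811347968 error!": errno 12 is ENOMEM, so the CPU allocator could not obtain roughly 774 MiB of additional host memory, and the earlier "can't start new thread" failure points at the same resource exhaustion. Reducing the batch size or freeing host memory is the usual way out; the exact flag name in train.py is not shown in this report, so that part is an assumption. A quick check of the numbers:

```python
import errno

# posix_memalign reported 12; this is ENOMEM ("Cannot allocate memory").
assert errno.ENOMEM == 12

# Size of the failed request from the log: 811347968 bytes, roughly 774 MiB.
failed_bytes = 811347968
print("failed allocation: %.1f MiB" % (failed_bytes / 2.0**20))
```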

Reference: paddlepaddle/Paddle#19492