Setting stop_gradient=True on the fc layer causes a training error
Created by: HJYgotoPLAY
Paddle version: 1.7.1, Python version: 2.7. After setting stop_gradient=True on the fc layer, training fails with the following error:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2 paddle::operators::AdamOp::InferShape(paddle::framework::InferShapeContext*) const
3 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
4 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
5 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
6 paddle::framework::details::ComputationOpHandle::RunImpl()
7 paddle::framework::details::FastThreadedSSAGraphExecutor::RunOpSync(paddle::framework::details::OpHandleBase*)
8 paddle::framework::details::FastThreadedSSAGraphExecutor::RunOp(paddle::framework::details::OpHandleBase*, std::shared_ptr<paddle::framework::BlockingQueue<unsigned long> > const&, unsigned long*)
9 std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<void>, std::__future_base::_Result_base::_Deleter>, void> >::_M_invoke(std::_Any_data const&)
10 std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&)
11 ThreadPool::ThreadPool(unsigned long)::{lambda()#1}::operator()() const
------------------------------------------
Python Call Stacks (More useful to users):
------------------------------------------
File "/home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2525, in append_op
attrs=kwargs.get("attrs", None))
File "/home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/optimizer.py", line 1867, in _append_optimize_op
stop_gradient=True)
File "/home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/optimizer.py", line 529, in _create_optimization_pass
param_and_grad)
File "/home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/optimizer.py", line 681, in apply_gradients
optimize_ops = self._create_optimization_pass(params_grads)
File "/home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/optimizer.py", line 711, in apply_optimize
optimize_ops = self.apply_gradients(params_grads)
File "/home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/optimizer.py", line 800, in minimize
loss, startup_program=startup_program, params_grads=params_grads)
File "/home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/dygraph/base.py", line 100, in __impl__
return func(*args, **kwargs)
File "/home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "</home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/decorator.pyc:decorator-gen-49>", line 2, in minimize
File "/home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/paddlepalm/optimizer/adam.py", line 49, in _build
_, param_grads = optimizer.minimize(self._loss)
File "/home/work/anaconda2/envs/paddle_gpu/lib/python2.7/site-packages/paddlepalm/trainer.py", line 309, in build_backward
param_grads = optimizer._build()
File "run_one.py", line 85, in <module>
trainer.build_backward(optimizer=adam, weight_decay=weight_decay)
----------------------
Error Message Summary:
----------------------
InvalidArgumentError: Param and Grad input of AdamOp should have same dimension. But received Param dims: [2], Grad dims: [0].
[Hint: Expected param_dims == ctx->GetInputDim("Grad"), but received param_dims:2 != ctx->GetInputDim("Grad"):0.] at (/paddle/paddle/fluid/operators/optimizers/adam_op.cc:106)
[operator < adam > error]
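
For reference, below is a minimal sketch (Python 2.7, Paddle 1.7.1 fluid API) of the setup described above. It is not the reporter's actual paddlepalm code; the network, variable names, and shapes are illustrative assumptions. Depending on how the surrounding program is wired, the failure can surface while appending the backward pass or, as in the trace above, at run time when the adam op receives an empty gradient.

```python
# -*- coding: utf-8 -*-
# Minimal sketch, not the original paddlepalm program: one fc layer whose
# gradient flow is cut with stop_gradient, followed by Adam.minimize().
import numpy as np
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')

fc_out = fluid.layers.fc(input=x, size=1)
# Cut the backward graph at the fc output: no gradient is computed for the
# fc weights, but they are still registered as trainable parameters.
fc_out.stop_gradient = True

loss = fluid.layers.mean(
    fluid.layers.square_error_cost(input=fc_out, label=y))

# minimize() still tries to create an adam op for every trainable parameter,
# so the fc weights can end up paired with an empty gradient (dims [0]),
# which trips the "Param and Grad input of AdamOp should have same
# dimension" check quoted above.
fluid.optimizer.Adam(learning_rate=1e-3).minimize(loss)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
exe.run(fluid.default_main_program(),
        feed={'x': np.random.rand(4, 13).astype('float32'),
              'y': np.random.rand(4, 1).astype('float32')},
        fetch_list=[loss.name])
```

If the intent is to freeze the fc layer, a common alternative in fluid is to mark its parameters non-trainable instead of using stop_gradient, e.g. `param_attr=fluid.ParamAttr(trainable=False)` (optionally with `learning_rate=0.0`); the optimizer then skips those parameters entirely and never builds an adam op for them.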