[Baidu Star Competition] Bug during fine-grained classification fine-tuning
Created by: zky001
Training issue:

[2020-08-10 11:33:10] trainbatch 8020, lr 0.000050, loss 0.178964, time 0.08 sec
[2020-08-10 11:34:00] trainbatch 8030, lr 0.000050, loss 0.168911, time 0.10 sec
[2020-08-10 11:34:49] trainbatch 8040, lr 0.000050, loss 0.192038, time 0.10 sec
[2020-08-10 11:35:41] trainbatch 8050, lr 0.000050, loss 0.196493, time 0.08 sec
[2020-08-10 11:36:30] trainbatch 8060, lr 0.000050, loss 0.191304, time 0.08 sec
[2020-08-10 11:37:18] trainbatch 8070, lr 0.000050, loss 0.193833, time 0.08 sec
[2020-08-10 11:38:10] trainbatch 8080, lr 0.000050, loss 0.189738, time 0.08 sec
[2020-08-10 11:39:00] trainbatch 8090, lr 0.000050, loss 0.204269, time 0.08 sec
[2020-08-10 11:39:52] trainbatch 8100, lr 0.000050, loss 0.195286, time 0.09 sec
2020-08-10 11:40:16,391-WARNING: Your reader has raised an exception!
Traceback (most recent call last):
  File "train_pair.py", line 238, in <module>
    main()
  File "train_pair.py", line 234, in main
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1156, in thread_main
    six.reraise(*sys.exc_info())
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/six.py", line 703, in reraise
    raise value
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1136, in thread_main
    for tensors in self._tensor_reader():
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 203, in __call__
    yield self._done()
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 191, in _done
    return [c.done() for c in self.converters]
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 191, in <listcomp>
    return [c.done() for c in self.converters]
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 156, in done
    arr = np.array(self.data, dtype=self.dtype)
ValueError: could not broadcast input array from shape (3,64,64) into shape (3)
    train_async(args)
  File "train_pair.py", line 187, in train_async
    for train_batch in train_loader():
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1102, in __next__
    return self._reader.read_next()
paddle.fluid.core_avx.EnforceNotMet:
C++ Call Stacks (More useful to developers):
0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2   paddle::operators::reader::BlockingQueue<std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> > >::Receive(std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> >)
3   paddle::operators::reader::PyReader::ReadNext(std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> >)
4   std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>, unsigned long> >::_M_invoke(std::_Any_data const&)
5   std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&)
6   ThreadPool::ThreadPool(unsigned long)::{lambda()#1}::operator()() const
Error Message Summary:
Error: Blocking queue is killed because the data reader raises an exception [Hint: Expected killed_ != true, but received killed_:1 == true:1.] at (/paddle/paddle/fluid/operators/reader/blocking_queue.h:141)
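
The ValueError is raised inside data_feeder.py when Paddle calls np.array() on one mini-batch worth of samples: that call only succeeds if every image in the batch has the same shape, so "could not broadcast input array from shape (3,64,64) into shape (3)" means at least one sample's image slot is not a (3, 64, 64) array (for example a failed decode, an image that was never resized, or a label yielded in the image position). Below is a minimal sketch of a defensive wrapper around the sample reader; the wrapper name, the (3, 64, 64) target shape, and the skip-on-mismatch policy are assumptions for illustration, not the actual code in train_pair.py.

import numpy as np

EXPECTED_IMG_SHAPE = (3, 64, 64)  # assumed CHW shape expected by the fine-tuning net

def checked_reader(base_reader):
    """Wrap a paddle-style sample reader and drop malformed samples.

    Paddle's data feeder builds each mini-batch with np.array(list_of_samples),
    which raises the broadcast error above as soon as one image's shape differs
    from the others, killing the blocking queue.
    """
    def reader():
        for idx, sample in enumerate(base_reader()):
            img, label = sample  # assumes the reader yields (image, label) pairs
            img = np.asarray(img, dtype='float32')
            if img.shape != EXPECTED_IMG_SHAPE:
                # Log and skip instead of letting the feeder thread crash.
                print('skipping sample %d with bad image shape %s' % (idx, img.shape))
                continue
            yield img, label
    return reader

If the mismatch turns out to be systematic (e.g. some source images are single-channel or were never resized), it is better to fix it at decode time, for instance by resizing to 64x64 and transposing HWC to CHW, rather than silently dropping samples.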