yolov3 darknet53 voc: error when running prediction
Created by: Stillremains
I'm running this on AI Studio. The first two steps both finished without problems, but prediction errors out:
```
!python -u tools/infer.py -c configs/yolov3_darknet_voc.yml \
    -o weights=output/yolov3_darknet_voc/best_model \
    --infer_img=dataset/voc/JPEGImages/red_1129.jpg \
    --output_dir=infer_output
```
```
W0424 20:42:33.665696  7488 device_context.cc:237] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 9.0
W0424 20:42:33.669750  7488 device_context.cc:245] device: 0, cuDNN Version: 7.3.
2020-04-24 20:42:35,290-INFO: Loading parameters from output/yolov3_darknet_voc/best_model...
2020-04-24 20:42:37,998-INFO: Load categories from dataset/voc/label_list.txt
2020-04-24 20:42:38,000-WARNING: Your reader has raised an exception!
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 805, in thread_main
    six.reraise(*sys.exc_info())
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/six.py", line 693, in reraise
    raise value
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 785, in thread_main
    for tensors in self._tensor_reader():
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 853, in tensor_reader_impl
    for slots in paddle_reader():
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 488, in reader_creator
    for item in reader():
  File "/home/aistudio/Robot/to/clone/PaddleDetection/ppdet/data/reader.py", line 415, in _reader
    for _batch in reader:
  File "/home/aistudio/Robot/to/clone/PaddleDetection/ppdet/data/reader.py", line 301, in __next__
    return self.next()
  File "/home/aistudio/Robot/to/clone/PaddleDetection/ppdet/data/reader.py", line 308, in next
    batch = self._load_batch()
  File "/home/aistudio/Robot/to/clone/PaddleDetection/ppdet/data/reader.py", line 332, in _load_batch
    if _has_empty(sample['gt_bbox']):
KeyError: 'gt_bbox'
```
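For context, the last frame shows `_load_batch` indexing each sample dict with `'gt_bbox'` before assembling a batch. A minimal sketch of that check, reconstructed from the stack frames above (the actual PaddleDetection source may differ in detail), makes the failure mode visible:

```python
import numpy as np

def _has_empty(item):
    # Treat a zero-length array (an image with no boxes) as empty.
    return isinstance(item, np.ndarray) and item.size == 0

def _load_batch(samples):
    """Sketch of the batch assembly implied by reader.py:332.

    Each sample is a dict of field name -> numpy array. The lookup
    sample['gt_bbox'] raises KeyError when the sample was produced by an
    inference pipeline that never attaches ground-truth boxes.
    """
    batch = []
    for sample in samples:
        if _has_empty(sample['gt_bbox']):  # <- the line that raises here
            continue  # drop samples that have no ground-truth boxes
        batch.append(sample)
    return batch

# An inference sample carries only image fields, so the check blows up:
infer_sample = {'image': np.zeros((3, 608, 608), dtype=np.float32),
                'im_size': np.array([608, 608], dtype=np.int32)}
# _load_batch([infer_sample])  # -> KeyError: 'gt_bbox'
```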
```
Traceback (most recent call last):
  File "tools/infer.py", line 272, in <module>
    main()
  File "tools/infer.py", line 182, in main
    for iter_id, data in enumerate(loader()):
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 757, in __next__
    return self._reader.read_next()
paddle.fluid.core_avx.EnforceNotMet:

C++ Call Stacks (More useful to developers):
0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2   paddle::operators::reader::BlockingQueue<std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> > >::Receive(std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> >*)
3   paddle::operators::reader::PyReader::ReadNext(std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> >*)
4   std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>, unsigned long> >::_M_invoke(std::_Any_data const&)
5   std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&)
6   ThreadPool::ThreadPool(unsigned long)::{lambda()#1}::operator()() const

Error Message Summary:
Error: Blocking queue is killed because the data reader raises an exception
  [Hint: Expected killed_ != true, but received killed_:1 == true:1.] at (/paddle/paddle/fluid/operators/reader/blocking_queue.h:141)
```
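As for the C++ summary: the reader runs in a producer thread that feeds a blocking queue, so when the Python reader raises, the thread kills the queue and the main thread's `read_next()` surfaces this EnforceNotMet instead of the original KeyError. A rough Python analogue of that handoff (a toy sketch, not Paddle's actual implementation):

```python
import queue
import threading

class BlockingQueue:
    """Toy stand-in for Paddle's reader-side BlockingQueue (sketch only)."""
    def __init__(self, capacity=2):
        self._q = queue.Queue(maxsize=capacity)
        self.killed = False

    def send(self, item):
        self._q.put(item)

    def kill(self):
        # Called by the producer thread when the reader raises.
        self.killed = True
        self._q.put(None)  # wake a blocked consumer

    def receive(self):
        item = self._q.get()
        if self.killed:
            # Mirrors the hint: Expected killed_ != true, but received killed_:1
            raise RuntimeError("Blocking queue is killed because the data "
                               "reader raised an exception")
        return item

def thread_main(q, reader):
    try:
        for batch in reader():
            q.send(batch)
    except Exception:
        q.kill()  # the KeyError from the reader thread ends up here

def broken_reader():
    sample = {'image': 'pixels'}  # an inference sample with no annotations
    yield sample['gt_bbox']       # KeyError: 'gt_bbox'

q = BlockingQueue()
threading.Thread(target=thread_main, args=(q, broken_reader)).start()
# q.receive()  # raises RuntimeError, analogous to the EnforceNotMet above
```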