I want to add data augmentation (AutoAugment v1) to yolov3_darknet_voc, but training then raises an error. How should I adjust the config? The modified config and the full error output are below.
Created by: yinggo
```yaml
architecture: YOLOv3
use_gpu: true
max_iters: 70000
log_smooth_window: 20
save_dir: output
snapshot_iter: 2000
metric: VOC
map_type: 11point
pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/DarkNet53_pretrained.tar
weights: output/yolov3_darknet_voc/model_final
num_classes: 20
use_fine_grained_loss: false

YOLOv3:
  backbone: DarkNet
  yolo_head: YOLOv3Head

DarkNet:
  norm_type: sync_bn
  norm_decay: 0.
  depth: 53

YOLOv3Head:
  anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
  anchors: [[10, 13], [16, 30], [33, 23],
            [30, 61], [62, 45], [59, 119],
            [116, 90], [156, 198], [373, 326]]
  norm_decay: 0.
  yolo_loss: YOLOv3Loss
  nms:
    background_label: -1
    keep_top_k: 100
    nms_threshold: 0.45
    nms_top_k: 1000
    normalized: false
    score_threshold: 0.01

YOLOv3Loss:
  # batch_size here is only used for fine grained loss, not used
  # for training batch_size setting, training batch_size setting
  # is in configs/yolov3_reader.yml TrainReader.batch_size, batch
  # size here should be set as same value as TrainReader.batch_size
  batch_size: 8
  ignore_thresh: 0.7
  label_smooth: false

LearningRate:
  base_lr: 0.001
  schedulers:
  - !PiecewiseDecay
    gamma: 0.1
    milestones:
    - 55000
    - 62000
  - !LinearWarmup
    start_factor: 0.
    steps: 1000

OptimizerBuilder:
  optimizer:
    momentum: 0.9
    type: Momentum
  regularizer:
    factor: 0.0005
    type: L2

_READER_: 'yolov3_reader.yml'
TrainReader:
  inputs_def:
    fields: ['image', 'gt_bbox', 'gt_class', 'gt_score']
    num_max_boxes: 50
  dataset:
    !VOCDataSet
      dataset_dir: dataset/voc
      anno_path: trainval.txt
      use_default_label: true
      with_background: false
  sample_transforms:  # <--- added
    - !DecodeImage
      to_rgb: true
    - !RandomFlipImage
      prob: 0.5
    - !AutoAugmentImage
      autoaug_type: v1
    - !NormalizeImage
      is_channel_first: false
      is_scale: true
      mean: [0.485, 0.456, 0.406]
      std: [0.229, 0.224, 0.225]
  batch_size: 2
  use_process: true

EvalReader:
  inputs_def:
    fields: ['image', 'im_size', 'im_id', 'gt_bbox', 'gt_class', 'is_difficult']
    num_max_boxes: 50
  dataset:
    !VOCDataSet
      dataset_dir: dataset/voc
      anno_path: test.txt
      use_default_label: true
      with_background: false

TestReader:
  dataset:
    !ImageFolder
      use_default_label: true
      with_background: false
```
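For reference, the stock TrainReader in `configs/yolov3_reader.yml` (which this override replaces) keeps `NormalizeBox`, `PadBox` and `BboxXYXY2XYWH` at the sample level, plus `RandomShape`, `NormalizeImage` and `Permute` at the batch level, so every sample reaches the feeder as a fixed-shape CHW image with `gt_bbox`/`gt_class`/`gt_score` padded to `num_max_boxes`. Below is a sketch of how `AutoAugmentImage` might be slotted into that default pipeline; the transform names and values mirror the stock reader config as I remember it, and the exact position of `AutoAugmentImage` is an assumption, not a verified recipe.

```yaml
# Sketch only: mirrors the default TrainReader pipeline, with AutoAugmentImage
# inserted near the front of the sample-level transforms (placement assumed).
# The key point is keeping NormalizeBox/PadBox/BboxXYXY2XYWH and the batch-level
# RandomShape/NormalizeImage/Permute steps, so every feed field has a uniform shape.
TrainReader:
  inputs_def:
    fields: ['image', 'gt_bbox', 'gt_class', 'gt_score']
    num_max_boxes: 50
  dataset:
    !VOCDataSet
      dataset_dir: dataset/voc
      anno_path: trainval.txt
      use_default_label: true
      with_background: false
  sample_transforms:
    - !DecodeImage
      to_rgb: true
    - !AutoAugmentImage
      autoaug_type: v1
    - !RandomFlipImage
      is_normalized: false
      prob: 0.5
    - !NormalizeBox {}
    - !PadBox
      num_max_boxes: 50
    - !BboxXYXY2XYWH {}
  batch_transforms:
    - !RandomShape
      sizes: [320, 352, 384, 416, 448, 480, 512, 544, 576, 608]
      random_inter: true
    - !NormalizeImage
      mean: [0.485, 0.456, 0.406]
      std: [0.229, 0.224, 0.225]
      is_scale: true
      is_channel_first: false
    - !Permute
      to_bgr: false
      channel_first: true
  batch_size: 2
  use_process: true
```

With a pipeline like this, the per-field arrays all share a common shape, which is what the feeder's `numpy.array(...)` call requires.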
```
2020-04-22 09:13:09,162-WARNING: Your reader has raised an exception!
Exception in thread Thread-11:
Traceback (most recent call last):
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/reader.py", line 805, in thread_main
    six.reraise(*sys.exc_info())
  File "/home/ds1/.local/lib/python3.7/site-packages/six.py", line 693, in reraise
    raise value
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/reader.py", line 785, in thread_main
    for tensors in self._tensor_reader():
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/reader.py", line 853, in tensor_reader_impl
    for slots in paddle_reader():
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 489, in reader_creator
    yield self.feed(item)
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 330, in feed
    ret_dict[each_name] = each_converter.done()
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 139, in done
    arr = numpy.array(self.data, dtype=self.dtype)
ValueError: setting an array element with a sequence.

I0422 09:13:09.174177 15043 parallel_executor.cc:440] The Program will be executed on CUDA using ParallelExecutor, 1 cards are used, so 1 programs are executed in parallel.
I0422 09:13:09.553748 15043 build_strategy.cc:354] set enable_sequential_execution:1
I0422 09:13:09.553748 15044 build_strategy.cc:354] set enable_sequential_execution:1
W0422 09:13:09.817404 15044 fuse_all_reduce_op_pass.cc:74] Find all_reduce operators: 249. To make the speed faster, some all_reduce ops are fused during training, after fusion, the number of all_reduce ops is 183.
I0422 09:13:09.834228 15044 build_strategy.cc:365] SeqOnlyAllReduceOps:0, num_trainers:2
W0422 09:13:09.838608 15043 fuse_all_reduce_op_pass.cc:74] Find all_reduce operators: 249. To make the speed faster, some all_reduce ops are fused during training, after fusion, the number of all_reduce ops is 183.
I0422 09:13:09.856608 15043 build_strategy.cc:365] SeqOnlyAllReduceOps:0, num_trainers:2
I0422 09:13:10.102484 15043 parallel_executor.cc:307] Inplace strategy is enabled, when build_strategy.enable_inplace = True
I0422 09:13:10.102772 15044 parallel_executor.cc:307] Inplace strategy is enabled, when build_strategy.enable_inplace = True
I0422 09:13:10.131544 15043 parallel_executor.cc:375] Garbage collection strategy is enabled, when FLAGS_eager_delete_tensor_gb = 0
I0422 09:13:10.131887 15044 parallel_executor.cc:375] Garbage collection strategy is enabled, when FLAGS_eager_delete_tensor_gb = 0
2020-04-22 09:13:10,167-WARNING: Your reader has raised an exception!
/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/executor.py:782: UserWarning: The following exception is not an EOF exception.
  "The following exception is not an EOF exception.")
Traceback (most recent call last):
  File "tools/train.py", line 364, in <module>
    main()
  File "tools/train.py", line 243, in main
    outs = exe.run(compiled_train_prog, fetch_list=train_values)
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/executor.py", line 783, in run
    six.reraise(*sys.exc_info())
  File "/home/ds1/.local/lib/python3.7/site-packages/six.py", line 693, in reraise
    raise value
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/executor.py", line 778, in run
    use_program_cache=use_program_cache)
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/executor.py", line 843, in _run_impl
    return_numpy=return_numpy)
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/executor.py", line 677, in _run_parallel
    tensors = exe.run(fetch_var_names)._move_to_list()
paddle.fluid.core_avx.EnforceNotMet:

C++ Call Stacks (More useful to developers):
0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2   paddle::operators::reader::BlockingQueue<std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> > >::Receive(std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> >)
3   paddle::operators::reader::PyReader::ReadNext(std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> >)
4   std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>, unsigned long> >::_M_invoke(std::_Any_data const&)
5   std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&)
6   ThreadPool::ThreadPool(unsigned long)::{lambda()#1}::operator()() const

Python Call Stacks (More useful to users):
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2525, in append_op
    attrs=kwargs.get("attrs", None))
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/reader.py", line 733, in _init_non_iterable
    outputs={'Out': self._feed_list})
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/reader.py", line 646, in __init__
    self._init_non_iterable()
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/reader.py", line 280, in from_generator
    iterable, return_list)
  File "/home/ds1/anaconda3/envs/paddle/PaddleDetection/ppdet/modeling/architectures/yolov3.py", line 152, in build_inputs
    iterable=iterable) if use_dataloader else None
  File "tools/train.py", line 123, in main
    feed_vars, train_loader = model.build_inputs(**inputs_def)
  File "tools/train.py", line 364, in <module>
    main()

Error Message Summary:
Error: Blocking queue is killed because the data reader raises an exception
  [Hint: Expected killed_ != true, but received killed_:1 == true:1.] at (/paddle/paddle/fluid/operators/reader/blocking_queue.h:141)
  [operator < read > error]

Exception in thread Thread-11:
Traceback (most recent call last):
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/reader.py", line 805, in thread_main
    six.reraise(*sys.exc_info())
  File "/home/ds1/.local/lib/python3.7/site-packages/six.py", line 693, in reraise
    raise value
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/reader.py", line 785, in thread_main
    for tensors in self._tensor_reader():
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/reader.py", line 853, in tensor_reader_impl
    for slots in paddle_reader():
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 489, in reader_creator
    yield self.feed(item)
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 330, in feed
    ret_dict[each_name] = each_converter.done()
  File "/home/ds1/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 139, in done
    arr = numpy.array(self.data, dtype=self.dtype)
ValueError: setting an array element with a sequence.
```
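Both reader tracebacks end in the same `ValueError` from `data_feeder.py`: numpy raises it when the per-sample entries for one feed field do not share a common shape, for example `gt_bbox` arrays that were never padded to `num_max_boxes`, or images that were never resized to a single input size. A minimal standalone illustration of that failure mode (the shapes below are made up):

```python
import numpy as np

# Two per-sample arrays with mismatched shapes, standing in for images that
# were never resized to a common input size (or gt_bbox lists that were never
# padded to num_max_boxes). The shapes are illustrative only.
batch = [
    np.zeros((3, 416, 416), dtype='float32'),
    np.zeros((3, 375, 500), dtype='float32'),
]

try:
    # Mirrors data_feeder.py line 139: arr = numpy.array(self.data, dtype=self.dtype)
    np.array(batch, dtype='float32')
except ValueError as err:
    print(err)  # e.g. "setting an array element with a sequence."
```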