Can BMN's input shape be something other than the default?
Created by: liu824
当我使用我的特征进行训练时,出现了下面的错误,我猜测是由于我的输入(433,1024)的.npy文件,与例子中的(100,400)不相同: [INFO: train.py: 249]: Namespace(batch_size=None, config='models-1.6/PaddleCV/PaddleVideo/configs/bmn.yaml', epoch=None, fix_random_seed=False, learning_rate=None, log_interval=10, model_name='BMN', no_memory_optimize=False, pretrain=None, resume=None, save_dir='./data/checkpoints', use_gpu=True, valid_interval=1) [INFO: config_utils.py: 70]: ---------------- Train Arguments ---------------- [INFO: config_utils.py: 72]: MODEL: [INFO: config_utils.py: 74]: name:BMN [INFO: config_utils.py: 74]: tscale:100 [INFO: config_utils.py: 74]: dscale:100 [INFO: config_utils.py: 74]: feat_dim:400 [INFO: config_utils.py: 74]: prop_boundary_ratio:0.5 [INFO: config_utils.py: 74]: num_sample:32 [INFO: config_utils.py: 74]: num_sample_perbin:3 [INFO: config_utils.py: 74]: anno_file:train.json [INFO: config_utils.py: 74]: feat_path:work/train/videofeature [INFO: config_utils.py: 72]: TRAIN: [INFO: config_utils.py: 74]: subset:train [INFO: config_utils.py: 74]: epoch:9 [INFO: config_utils.py: 74]: batch_size:16 [INFO: config_utils.py: 74]: num_threads:8 [INFO: config_utils.py: 74]: use_gpu:True [INFO: config_utils.py: 74]: num_gpus:4 [INFO: config_utils.py: 74]: learning_rate:0.001 [INFO: config_utils.py: 74]: learning_rate_decay:0.1 [INFO: config_utils.py: 74]: lr_decay_iter:4200 [INFO: config_utils.py: 74]: l2_weight_decay:0.0001 [INFO: config_utils.py: 72]: VALID: [INFO: config_utils.py: 74]: subset:validation [INFO: config_utils.py: 74]: batch_size:16 [INFO: config_utils.py: 74]: num_threads:8 [INFO: config_utils.py: 74]: use_gpu:True [INFO: config_utils.py: 74]: num_gpus:4 [INFO: config_utils.py: 72]: TEST: [INFO: config_utils.py: 74]: subset:validation [INFO: config_utils.py: 74]: batch_size:1 [INFO: config_utils.py: 74]: num_threads:1 [INFO: config_utils.py: 74]: snms_alpha:0.001 [INFO: config_utils.py: 74]: snms_t1:0.5 [INFO: config_utils.py: 74]: snms_t2:0.9 [INFO: config_utils.py: 74]: 
output_path:data/output/EVAL/BMN_results [INFO: config_utils.py: 74]: result_path:data/evaluate_results [INFO: config_utils.py: 72]: INFER: [INFO: config_utils.py: 74]: subset:test [INFO: config_utils.py: 74]: batch_size:1 [INFO: config_utils.py: 74]: num_threads:1 [INFO: config_utils.py: 74]: snms_alpha:0.4 [INFO: config_utils.py: 74]: snms_t1:0.5 [INFO: config_utils.py: 74]: snms_t2:0.9 [INFO: config_utils.py: 74]: filelist:data/dataset/bmn/infer.list [INFO: config_utils.py: 74]: output_path:data/output/INFER/BMN_results [INFO: config_utils.py: 74]: result_path:data/predict_results [INFO: config_utils.py: 75]: ------------------------------------------------- W0629 21:27:22.091398 18762 device_context.cc:252] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 9.0 W0629 21:27:22.095041 18762 device_context.cc:260] device: 0, cuDNN Version: 7.3. train subset video numbers: 200 validation subset video numbers: 0 [INFO: bmn_proposal_metrics.py: 62]: Resetting train metrics... [INFO: bmn_proposal_metrics.py: 62]: Resetting valid metrics... [INFO: train_utils.py: 45]: ------- learning rate [0.], learning rate counter [-1] ----- [WARNING: reader.py: 1155]: Your reader has raised an exception! 
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1156, in thread_main
    six.reraise(*sys.exc_info())
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/six.py", line 703, in reraise
    raise value
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1136, in thread_main
    for tensors in self._tensor_reader():
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1206, in tensor_reader_impl
    for slots in paddle_reader():
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 506, in reader_creator
    yield self.feed(item)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 347, in feed
    ret_dict[each_name] = each_converter.done()
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 156, in done
    arr = np.array(self.data, dtype=self.dtype)
ValueError: could not broadcast input array from shape (1024,379) into shape (1024)
Traceback (most recent call last):
  File "models-1.6/PaddleCV/PaddleVideo/train.py", line 254, in <module>
    train(args)
  File "models-1.6/PaddleCV/PaddleVideo/train.py", line 241, in train
    test_metrics=valid_metrics)
  File "/home/aistudio/models-1.6/PaddleCV/PaddleVideo/utils/train_utils.py", line 90, in train_with_dataloader
    for data in train_dataloader():
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1102, in __next__
    return self._reader.read_next()
paddle.fluid.core_avx.EnforceNotMet:
C++ Call Stacks (More useful to developers):
0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2   paddle::operators::reader::BlockingQueue<std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> > >::Receive(std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> >)
3   paddle::operators::reader::PyReader::ReadNext(std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> >)
4   std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result, std::__future_base::_Result_base::_Deleter>, unsigned long> >::_M_invoke(std::_Any_data const&)
5   std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&)
6   ThreadPool::ThreadPool(unsigned long)::{lambda()#1}::operator()() const
Error Message Summary:
Error: Blocking queue is killed because the data reader raises an exception [Hint: Expected killed_ != true, but received killed_:1 == true:1.] at (/paddle/paddle/fluid/operators/reader/blocking_queue.h:141)
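The shapes in the ValueError point at the likely mismatch: bmn.yaml declares `tscale: 100` and `feat_dim: 400`, so the data feeder expects every video's feature matrix to have a fixed size, while the custom features are 1024-dimensional with a varying number of time steps (433, 379, ...). A common workaround (an assumption, not an official PaddleVideo recipe) is to set `feat_dim: 1024` in the config and resample each feature file to exactly `tscale` rows along the temporal axis before training. The `resize_feature` helper below is a hypothetical name; a minimal sketch assuming features are stored time-major as (T, C):

```python
import numpy as np

def resize_feature(feat: np.ndarray, tscale: int = 100) -> np.ndarray:
    """Linearly interpolate a (T, C) feature matrix to (tscale, C)."""
    t, c = feat.shape
    # Map both the source and target time steps onto [0, 1] and
    # interpolate each channel independently along the temporal axis.
    src = np.linspace(0.0, 1.0, num=t)
    dst = np.linspace(0.0, 1.0, num=tscale)
    out = np.empty((tscale, c), dtype=feat.dtype)
    for ch in range(c):
        out[:, ch] = np.interp(dst, src, feat[:, ch])
    return out

# Example: a (433, 1024) feature like the one in the question
# becomes (100, 1024), matching tscale=100 with feat_dim=1024.
feat = np.random.rand(433, 1024).astype("float32")
resized = resize_feature(feat, tscale=100)
print(resized.shape)  # (100, 1024)
```

If the features are actually stored channel-major as (C, T), as the `(1024, 379)` in the traceback hints, transpose before and after resampling; either way, every saved .npy must end up with the same fixed shape the config declares.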