PaddleSlim pruning error: IndexError on a dimension index in pruner.py, unresolved
Created by: universea
```
data_dir work/coco/train2017
use_multiprocess False
2019-08-01 23:19:57,543-INFO: checkpoints: []
2019-08-01 23:19:57,543-INFO: checkpoints: []
2019-08-01 23:19:57,544-INFO: _get_best_ratios
2019-08-01 23:19:57,544-INFO: _get_best_ratios
Traceback (most recent call last):
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/contrib/slim/core/compressor.py", line 534, in run
    strategy.on_epoch_begin(context)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/contrib/slim/prune/prune_strategy.py", line 640, in on_epoch_begin
    params, ratios = self._get_best_ratios(context)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/contrib/slim/prune/prune_strategy.py", line 618, in _get_best_ratios
    param_shape_backup=param_shape_backup)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/contrib/slim/prune/prune_strategy.py", line 477, in _prune_parameters
    param_shape_backup=param_shape_backup)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/contrib/slim/prune/prune_strategy.py", line 322, in _forward_pruning_ralated_params
    param_shape_backup=param_shape_backup)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/contrib/slim/prune/prune_strategy.py", line 159, in _prune_parameter_by_idx
    np.array(param_t), pruned_idx, pruned_axis, lazy=lazy)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/contrib/slim/prune/pruner.py", line 95, in prune_tensor
    mask[pruned_idx] = True
IndexError: index 257 is out of bounds for axis 0 with size 256
2019-08-01 23:19:58,209-ERROR: None
2019-08-01 23:19:58,209-ERROR: None
2019-08-01 23:19:58,228-WARNING: You can try our memory optimize feature to save your memory usage:
         # create a build_strategy variable to set memory optimize option
         build_strategy = compiler.BuildStrategy()
         build_strategy.enable_inplace = True
         build_strategy.memory_optimize = True

         # pass the build_strategy to with_data_parallel API
         compiled_prog = compiler.CompiledProgram(main).with_data_parallel(
             loss_name=loss.name, build_strategy=build_strategy)

         !!! Memory optimize is our experimental feature !!!
         some variables may be removed/reused internal to save memory usage,
         in order to fetch the right value of the fetch_list, please set the
         persistable property to true for each variable in fetch_list
         # Sample
         conv1 = fluid.layers.conv2d(data, 4, 5, 1, act=None)
         # if you need to fetch conv1, then:
         conv1.persistable = True

I0801 23:20:01.227057  1035 parallel_executor.cc:329] The number of CUDAPlace, which is used in ParallelExecutor, is 1. And the Program will be copied 1 copies
I0801 23:20:01.267546  1035 build_strategy.cc:340] SeqOnlyAllReduceOps:0, num_trainers:1
```
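For reference, the failing step in `pruner.py` (`mask[pruned_idx] = True`) is a plain numpy boolean-mask assignment, so the error reproduces outside Paddle whenever one of the pruned channel indices is not smaller than the size of the pruned axis. A minimal sketch of that failure mode follows; the parameter shape and index list are made up, only the masking step mirrors what the traceback shows:

```python
import numpy as np

# Hypothetical conv weight with 256 output channels (axis 0 is the pruned axis).
param = np.random.rand(256, 64, 3, 3)
pruned_axis = 0
pruned_idx = [12, 200, 257]  # 257 is out of range for an axis of size 256

# Same pattern as pruner.py line 95: build a boolean mask over the pruned axis
# and mark the channels to remove.
mask = np.zeros(param.shape[pruned_axis], dtype=bool)
mask[pruned_idx] = True  # raises: IndexError: index 257 is out of bounds for axis 0 with size 256
```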
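The traceback passes through `_forward_pruning_ralated_params`, so one plausible source of the error is that indices selected for one layer are propagated to a related parameter whose axis is smaller. A small hypothetical check like the one below can help locate which parameter a given index list would overflow; the helper name, parameter names, and shapes are all assumptions for illustration, not PaddleSlim API:

```python
import numpy as np

def find_out_of_range_indices(param_shapes, pruned_idx, pruned_axis=0):
    """Report parameters whose pruned axis is too small for the given indices,
    i.e. the condition that triggers the IndexError above."""
    problems = {}
    for name, shape in param_shapes.items():
        bad = [i for i in pruned_idx if i >= shape[pruned_axis]]
        if bad:
            problems[name] = (shape, bad)
    return problems

# Example: two related conv weights with different output-channel counts,
# checked against the same index list -- index 257 only fits the larger one.
shapes = {"conv_a.w_0": (512, 64, 3, 3), "conv_b.w_0": (256, 64, 3, 3)}
print(find_out_of_range_indices(shapes, [12, 200, 257]))
# {'conv_b.w_0': ((256, 64, 3, 3), [257])}
```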