Cannot save the inference model in multi-GPU mode
Created by: kuke
The training run fails with the following error:
----------- Configuration Arguments -----------
batch_size: 32
checkpoints: ./checkpoints
device: GPU
hidden_dim: 1024
infer_models: ./infer_models
init_model_path: None
learning_rate: 0.002
mean_var: data/global_mean_var_search26kHr
minimum_batch_size: 1
parallel: True
pass_num: 100
print_per_batches: 100
proj_dim: 512
stacked_num: 5
train_feature_lst: data/feature.lst
train_label_lst: data/label.lst
val_feature_lst: data/val_feature.lst
val_label_lst: data/val_label.lst
------------------------------------------------
..................................................................
Traceback (most recent call last):
File "train.py", line 265, in <module>
train(args)
File "train.py", line 252, in train
[prediction], exe)
File "/usr/local/lib/python2.7/dist-packages/paddle/v2/fluid/io.py", line 342, in save_inference_model
prepend_feed_ops(inference_program, feeded_var_names)
File "/usr/local/lib/python2.7/dist-packages/paddle/v2/fluid/io.py", line 272, in prepend_feed_ops
out = global_block.var(name)
File "/usr/local/lib/python2.7/dist-packages/paddle/v2/fluid/framework.py", line 708, in var
raise ValueError("var %s not in this block" % name)
ValueError: var feature not in this block
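The failure seems to come from a scoping mismatch: save_inference_model (via prepend_feed_ops) looks up the feed target "feature" only in the program's global block, but with parallel=True the input variables are created inside the parallel sub-block, so the lookup raises ValueError. The sketch below is a toy reproduction of that lookup logic, not Paddle's actual code; the Block class and method names here are illustrative stand-ins for fluid.framework.Block.var().

```python
# Toy model (an assumption, not Paddle source) of block-scoped variable
# lookup: a var created in a sub-block is invisible to the global block,
# which is what save_inference_model searches.

class Block(object):
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.vars = {}

    def create_var(self, var_name):
        # Register a variable in *this* block only.
        self.vars[var_name] = object()

    def var(self, var_name):
        # Mirrors the error site in framework.py: only the current
        # block is searched, parent blocks are not consulted.
        if var_name not in self.vars:
            raise ValueError("var %s not in this block" % var_name)
        return self.vars[var_name]

global_block = Block("global")
sub_block = Block("parallel_sub_block", parent=global_block)
sub_block.create_var("feature")  # input lives in the sub-block in parallel mode

try:
    global_block.var("feature")  # what prepend_feed_ops effectively does
except ValueError as e:
    print(e)  # var feature not in this block
```

A common way around this class of error is to save from a separate, non-parallel inference program whose global block actually contains the feed variables, rather than from the parallel training program.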