There is a bug in `save_inference_model` and `prune` when the input program is initialized by `load_inference_model`
Created by: Xreki
In some cases, users may need to use `fluid.io.load_inference_model` to get an `inference_program`, apply some transformation to it, and then use `fluid.io.save_inference_model` to save the transformed program.
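A minimal sketch of this workflow (the model paths and the transformation step are placeholders):

```python
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())

# Load a previously saved inference model ("./model_in" is a placeholder).
[inference_program, feed_names, fetch_targets] = fluid.io.load_inference_model(
    dirname="./model_in", executor=exe)

# ... apply some transformation to inference_program here ...

# Saving the transformed program is where the failure occurs.
fluid.io.save_inference_model(
    dirname="./model_out",
    feeded_var_names=feed_names,
    target_vars=fetch_targets,
    executor=exe,
    main_program=inference_program)
```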
However, `fluid.io.save_inference_model` fails at the following code:
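A rough sketch of the failing logic, assuming it is the target-conversion loop inside `Program.prune` (the names `targets`, `targets_idx`, `Operator`, and `Variable` follow fluid's framework code; this is a reconstruction based on the failure described below, not a verbatim quote):

```python
# Reconstructed sketch of the target-handling loop in Program.prune.
# Each target Variable is replaced by the operator that generates it,
# then its (block index, op index) pair is recorded.
for t in targets:
    if not isinstance(t, Operator):
        if isinstance(t, Variable):
            # For a program loaded by load_inference_model, t.op is
            # None, so t becomes None here ...
            t = t.op
        else:
            raise ValueError(
                "All targets of prune() can only be Variable or Operator.")
    # ... and t.block.idx then raises an AttributeError on None.
    targets_idx.append([t.block.idx, t.idx])
```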
The reason is that `t` is a `Variable` and `t.op` is `None`, so the use of `t.block.idx` fails. We should handle the case when `t.op` is `None`.
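One possible way to handle this (a sketch, not necessarily the actual fix, assuming fluid's `Block.ops` and `Operator.output_arg_names`): when `t.op` is `None`, fall back to searching the variable's block for the operator that outputs it.

```python
if t.op is not None:
    t = t.op
else:
    # Fallback: locate the operator in the variable's block whose
    # outputs include this variable.
    for op in t.block.ops:
        if t.name in op.output_arg_names:
            t = op
            break
    else:
        raise ValueError(
            "The target variable %s has no generating operator." % t.name)
```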
In addition, because the input program is an inference program, it already contains operators marked as targets, as well as `feed_op`s and `fetch_op`s. `fluid.io.save_inference_model` then inserts `feed_op`s and `fetch_op`s into the program again, which leaves redundant `feed_op`s and `fetch_op`s in the resulting program. So we need to remove the original `feed_op`s and `fetch_op`s from the input program first.
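A sketch of removing the pre-existing `feed_op`s and `fetch_op`s before saving (the helper name `remove_feed_fetch_ops` is hypothetical, and it assumes fluid's `Block._remove_op`):

```python
def remove_feed_fetch_ops(program):
    # Drop every feed/fetch operator from the global block so that
    # save_inference_model can insert fresh ones without duplication.
    global_block = program.global_block()
    # Delete from back to front so the remaining indices stay valid.
    for idx in reversed(range(len(global_block.ops))):
        if global_block.ops[idx].type in ("feed", "fetch"):
            global_block._remove_op(idx)
```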