Unverified commit 18411ec3, authored by whs, committed by GitHub

Fix the code in quantization doc to save inference model without accuracy op (#412)

Parent 47778f89
@@ -126,14 +126,14 @@ test(val_quant_program)
```python
float_prog, int8_prog = slim.quant.convert(val_quant_program, exe.place, save_int8=True)
-target_vars = [float_prog.global_block().var(name) for name in outputs]
+target_vars = [float_prog.global_block().var(outputs[-1])]
fluid.io.save_inference_model(dirname='./inference_model/float',
-                              feeded_var_names=[var.name for var in inputs],
+                              feeded_var_names=[inputs[0].name],
target_vars=target_vars,
executor=exe,
main_program=float_prog)
fluid.io.save_inference_model(dirname='./inference_model/int8',
-                              feeded_var_names=[var.name for var in inputs],
+                              feeded_var_names=[inputs[0].name],
target_vars=target_vars,
executor=exe,
main_program=int8_prog)
@@ -29,4 +29,4 @@ def image_classification(model, image_shape, class_num, use_gpu=False):
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
return exe, train_program, val_program, (image, label), (
-        acc_top1.name, acc_top5.name, avg_cost.name)
+        acc_top1.name, acc_top5.name, avg_cost.name, out.name)
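The intent of the two hunks can be sketched without PaddlePaddle itself: `image_classification` now appends the prediction output's name as the last element of its fetch tuple, and the doc picks `outputs[-1]` so that the saved inference program contains only the prediction, not the accuracy ops (which would require the label as an extra feed). The names below are hypothetical stand-ins for the real `Variable.name` strings:

```python
# Hypothetical fetch-name tuple mirroring what image_classification
# returns after this commit: accuracy/loss names first, prediction last.
outputs = ("acc_top1", "acc_top5", "avg_cost", "out")

# Accuracy ops depend on the label input, so an inference program must
# not fetch them. The fix keeps only the final prediction output:
target_names = [outputs[-1]]

print(target_names)  # ['out']
```

The same reasoning explains the feed-side change: inference only needs the image input, so `feeded_var_names` shrinks from all inputs to `[inputs[0].name]`.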