Convert a quantized and well-trained ``program`` to the final quantized ``program``, which can be used to save an ``inference model``.
Args:
    program(fluid.Program): The quantized and well-trained ``test program``.
    place(fluid.CPUPlace or fluid.CUDAPlace): The device on which the executor runs.
    config(dict, optional): Configs for the conversion. If set to None, the default
        config is used. It must be the same config that was used in 'quant_aware'.
        Default: ``None``.
    scope(fluid.Scope, optional): The scope that records the mapping between variable
        names and variables, similar to brackets in programming languages. Usually
        users can use `fluid.global_scope() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/executor_cn/global_scope_cn.html>`_.
        When set to ``None``, `fluid.global_scope() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/executor_cn/global_scope_cn.html>`_ is used. Default: ``None``.
    save_int8(bool, optional): Whether to also return a ``program`` whose model
        parameters' dtype is ``int8``. That program can only be used to check
        model size. Default: ``False``.
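Examples:
    A minimal usage sketch (not part of the original docstring); it assumes the
    PaddleSlim static-graph API where ``quant_aware`` and ``convert`` are imported
    from ``paddleslim.quant``, and it uses a toy ``fc`` network purely for
    illustration.

    .. code-block:: python

        import paddle.fluid as fluid
        from paddleslim.quant import quant_aware, convert

        # Build a tiny static-graph model (illustrative only).
        main_prog = fluid.Program()
        startup_prog = fluid.Program()
        with fluid.program_guard(main_prog, startup_prog):
            image = fluid.data(name='image', shape=[None, 1, 28, 28], dtype='float32')
            out = fluid.layers.fc(input=image, size=10, act='softmax')

        val_program = main_prog.clone(for_test=True)

        place = fluid.CPUPlace()  # or fluid.CUDAPlace(0)
        exe = fluid.Executor(place)
        exe.run(startup_prog)

        # The same config dict must be passed to both quant_aware and convert.
        quant_config = {'weight_quantize_type': 'abs_max',
                        'activation_quantize_type': 'moving_average_abs_max'}

        # Insert fake-quant/dequant ops into the test program.
        val_quant_program = quant_aware(val_program, place, quant_config, for_test=True)

        # ... quantization-aware training of the train program goes here ...

        # Convert the quantized, well-trained test program to the final program.
        # With save_int8=True an extra int8-weight program is returned; it is
        # only useful for checking model size.
        float_program, int8_program = convert(val_quant_program, place,
                                              config=quant_config, save_int8=True)

        # The converted float program can be saved as an inference model.
        fluid.io.save_inference_model(dirname='./quant_inference_model',
                                      feeded_var_names=[image.name],
                                      target_vars=[out],
                                      executor=exe,
                                      main_program=float_program)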