Convert a quantized and well-trained ``program`` to the final quantized ``program`` that can be used to save an ``inference model``.

Args:
    program(fluid.Program): The quantized and well-trained ``test program``.
    place(fluid.CPUPlace or fluid.CUDAPlace): The device on which the executor runs.
    config(dict, optional): Configs for the conversion. If set to ``None``, the default config is used. It must be the same config that was used in 'quant_aware'. Default: ``None``.
    scope(fluid.Scope, optional): Scope records the mapping between variable names and variables, similar to brackets in programming languages. Usually users can use `fluid.global_scope <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/executor_cn/global_scope_cn.html>`_. When set to ``None``, `fluid.global_scope() <https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api_cn/executor_cn/global_scope_cn.html>`_ is used. Default: ``None``.
    save_int8(bool, optional): Whether to also return a ``program`` whose model parameters are of dtype ``int8``. That program can only be used to get the model size. Default: ``False``.
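
A minimal usage sketch is shown below. It assumes that ``val_program`` is a test program that was previously processed by ``quant_aware(..., for_test=True)`` and trained, that ``config`` is the same dict passed to ``quant_aware``, and that ``image`` and ``out`` are the model's input and output variables defined elsewhere; these names are placeholders for illustration, not part of this API.

.. code-block:: python

    import paddle.fluid as fluid
    from paddleslim.quant import convert

    place = fluid.CPUPlace()
    exe = fluid.Executor(place)

    # Convert the quantized, trained test program into the final
    # quantized program. ``config`` must match the one used in quant_aware.
    inference_program = convert(val_program, place, config=config)

    # The converted program can then be saved as an inference model.
    # ``image`` and ``out`` are assumed input/output variables of the network.
    fluid.io.save_inference_model(
        dirname='./quantized_model',
        feeded_var_names=[image.name],
        target_vars=[out],
        executor=exe,
        main_program=inference_program)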