Commit e904a37c authored by slf12

update quant_post api doc

Parent cb99e549
...
@@ -214,15 +214,14 @@ def quant_post(executor,
             fluid.io.save_inference_model.
         sample_generator(Python Generator): The sample generator provides
             calibrate data for DataLoader, and it only returns a sample every time.
-        model_filename(str, optional): The name of model file to load the inference
-            program. If parameters are saved in separate files,
-            set it as 'None'. Default is 'None'.
-        params_filename(str, optional): The name of params file to load all parameters.
+        model_filename(str, optional): The name of model file. If parameters
+            are saved in separate files, set it as 'None'. Default is 'None'.
+        params_filename(str, optional): The name of params file.
             When all parameters are saved in a single file, set it
             as filename. If parameters are saved in separate files,
             set it as 'None'. Default is 'None'.
         batch_size(int, optional): The batch size of DataLoader, default is 16.
-        batch_nums(int, optional): If set batch_nums, the number of calibrate
+        batch_nums(int, optional): If batch_nums is not None, the number of calibrate
             data is 'batch_size*batch_nums'. If batch_nums is None, use all data
             generated by sample_generator as calibrate data.
         scope(fluid.Scope, optional): The scope to run program, use it to load
...
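The hunk above only edits the parameter descriptions. For context, here is a minimal sketch of how a quant_post call using these parameters might look. The model_dir and quantize_model_path arguments, the import path, and the load_calibration_images helper are assumptions not shown in this hunk; treat this as an illustration rather than the library's documented example.

import paddle.fluid as fluid
from paddleslim.quant import quant_post  # assumed entry point for this PaddleSlim version

place = fluid.CPUPlace()
exe = fluid.Executor(place)

def sample_generator():
    # Yields exactly one calibration sample per iteration, as the docstring requires.
    for image in load_calibration_images():  # hypothetical data-loading helper
        yield image

quant_post(
    executor=exe,
    model_dir='./inference_model',        # assumed: directory saved by fluid.io.save_inference_model
    quantize_model_path='./quant_model',  # assumed: output directory for the quantized model
    sample_generator=sample_generator,
    model_filename=None,                  # parameters stored in separate files, per the updated doc
    params_filename=None,
    batch_size=16,
    batch_nums=10)                        # calibrate on batch_size * batch_nums = 160 samples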