Change the type of the "use_quantizer" attribute
Created by: wozna
Implementation of bfloat16 inference on oneDNN has started, and the bfloat16 data type is already under review: https://github.com/PaddlePaddle/Paddle/pull/25402

We have a question connected to the design of the bfloat16 API. The "use_quantizer" attribute is used in the INT8 implementation. Currently it is a boolean that indicates whether the operator should be quantized, i.e. run with the INT8 kernel.
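For context, a rough sketch of how such a flag is typically declared in an operator's OpMaker, assuming Paddle's `OpProtoAndCheckerMaker` interface (`AddAttr` / `SetDefault`); the exact comment string differs per operator:

```cpp
// Current state: a single boolean attribute that only distinguishes
// "quantize and use the INT8 kernel" from "run the default FP32 kernel".
AddAttr<bool>("use_quantizer",
              "(bool, default false) Whether to run this operator with the "
              "oneDNN INT8 kernel.")
    .SetDefault(false);
```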
First solution
Merge the INT8 attribute with bfloat16. This solution changes the data type of the "use_quantizer" attribute to a string or an int, which indicates whether the operator should run in bfloat16, in INT8, or in neither mode. It also leaves room to add other types in the future. Another question is whether we can change the name of this attribute to something more general and adequate, e.g. "use_optimizer", "onednn_type", "optimizer_type", ...
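A minimal sketch of what the string-typed attribute could look like, again assuming Paddle's `AddAttr` / `SetDefault` / `InEnum` checker interface; the name "onednn_type" is only one of the candidates listed above and is used here as a placeholder:

```cpp
// Hypothetical replacement for the boolean "use_quantizer" flag:
// one string attribute that selects the data type of the oneDNN kernel.
AddAttr<std::string>("onednn_type",  // placeholder name, see candidates above
                     "(string, default \"float32\") Data type used by the "
                     "oneDNN kernel: \"float32\", \"int8\" or \"bfloat16\".")
    .SetDefault("float32")
    .InEnum({"float32", "int8", "bfloat16"});
```

Kernel selection would then branch on the attribute value instead of a boolean, and adding a new type only extends the enumerated list.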
Second solution
We can keep the old "use_quantizer" attribute and add a new "use_bfloat16" attribute. This solution is less general, but it can be added right now without any refactoring.
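For comparison, a sketch of this option under the same assumptions: "use_quantizer" stays declared exactly as today (see the first sketch), and a second, independent flag is added next to it:

```cpp
// New, independent flag for bfloat16. The two booleans are mutually
// exclusive only by convention, which is the main drawback of this option.
AddAttr<bool>("use_bfloat16",
              "(bool, default false) Whether to run this operator with the "
              "oneDNN bfloat16 kernel.")
    .SetDefault(false);
```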