- 19 January 2021, 1 commit
  Committed by cc
- 15 September 2020, 1 commit
  Committed by cc
  * Remove the cache in post_training_quantization, test=develop
- 19 August 2020, 1 commit
  Committed by cc
  * Conv2d_transpose and mul support channel-wise quantization, test=develop
  * Skip collecting the output threshold for output tensors whose type is not fp32 or fp64, test=develop
  * Fix error in test_user_defined_quantization, test=develop
  * Add depthwise_conv_bn_fuse, test=develop
  * Add conv_transpose_bn_fuse_pass for post_training_quant, test=develop
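The BN-fuse passes named in this entry rest on the standard batch-norm folding identity: a convolution followed by BatchNorm is equivalent to a single convolution with rescaled weights and a shifted bias, which leaves only one op to quantize. A minimal numpy sketch of that identity (illustrative names, not the pass's actual code):

```python
import numpy as np

def fold_bn_into_conv(conv_w, conv_b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding conv's weight and bias.

    conv_w: weight of shape [out_c, in_c, kh, kw] (regular conv layout)
    conv_b: bias of shape [out_c]; pass zeros if the conv has no bias
    gamma, beta, mean, var: per-output-channel BatchNorm parameters
    """
    scale = gamma / np.sqrt(var + eps)              # per-channel multiplier
    folded_w = conv_w * scale.reshape(-1, 1, 1, 1)  # rescale each output-channel filter
    folded_b = (conv_b - mean) * scale + beta       # absorb the shift into the bias
    return folded_w, folded_b
```

For conv2d_transpose the output-channel axis of the weight is typically axis 1 rather than axis 0, so the reshape above would change accordingly.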
- 07 July 2020, 2 commits
- 06 July 2020, 1 commit
  Committed by cc
  * Save output threshold by argname_index, test=develop
- 02 June 2020, 1 commit
  Committed by cc
  * post_training_quantization supports optimizing the model by fusing, test=develop
- 17 April 2020, 1 commit
  Committed by cc
  * Weight quantization supports the channel_wise_abs_max method to achieve higher accuracy
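channel_wise_abs_max computes one abs-max scale per output channel of a weight tensor instead of a single scale for the whole tensor, which usually preserves more accuracy for layers whose channels have very different value ranges. A small sketch of the idea, assuming a conv-style weight layout (not the framework's implementation):

```python
import numpy as np

def channel_wise_abs_max_quant(weight, bits=8):
    """Quantize a weight of shape [out_c, ...] with one symmetric scale per output channel."""
    qmax = 2 ** (bits - 1) - 1                           # 127 for 8 bits
    flat = np.abs(weight).reshape(weight.shape[0], -1)
    scales = flat.max(axis=1)                            # per-channel abs max
    scales = np.where(scales > 0, scales, 1.0)           # guard against all-zero channels
    shape = (-1,) + (1,) * (weight.ndim - 1)             # broadcast scales over the channel axis
    quantized = np.round(weight / scales.reshape(shape) * qmax)
    return quantized.astype(np.int32), scales
```

Dequantization recovers weight ≈ quantized × scale / qmax per channel; supporting mul and conv2d_transpose, as in the entry above, presumably comes down to picking the right channel axis for those ops' weight layouts.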
- 07 April 2020, 1 commit
  Committed by cc
  * Collect output scales for quantized ops and fused ops
  * post_training_quantization sets batch_generator to support LoD tensors
- 24 March 2020, 1 commit
  Committed by cc
  * post_training_quantization supports the min_max method
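In contrast to KL-divergence calibration, a min_max method derives each activation's scale directly from the minimum and maximum values observed over the calibration batches. A rough sketch of such a range tracker (hypothetical class, not the actual implementation):

```python
import numpy as np

class MinMaxObserver:
    """Track the value range of one tensor across calibration batches."""

    def __init__(self):
        self.min_val = float('inf')
        self.max_val = float('-inf')

    def update(self, tensor):
        # Widen the recorded range with this batch's extremes.
        self.min_val = min(self.min_val, float(np.min(tensor)))
        self.max_val = max(self.max_val, float(np.max(tensor)))

    def scale(self, bits=8):
        # Symmetric scale that covers the largest observed magnitude.
        return max(abs(self.min_val), abs(self.max_val)) / (2 ** (bits - 1) - 1)
```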
- 10 February 2020, 1 commit
  Committed by cc
  * post_training_quantization supports setting the number of quantization bits, test=develop
  * Update, test=develop
- 07 February 2020, 1 commit
  Committed by cc
  * Support weight quantization in post_training_quantization, test=develop
  * Add a test for weight quantization, test=develop
- 15 January 2020, 1 commit
  Committed by juncaipeng
  * Add mul and matmul quantization, test=develop
  * Add a test for matmul, test=develop
- 19 December 2019, 1 commit
  Committed by juncaipeng
  * Fix a memory-constraint bug in post-training quantization and allow the inputs to differ, test=develop
- 25 November 2019, 1 commit
  Committed by juncaipeng
- 20 November 2019, 1 commit
  Committed by juncaipeng
  * Support setting model_filename and params_filename in post_training_quantization, test=develop
- 16 November 2019, 1 commit
  Committed by juncaipeng
  * Support more ops in post-training quantization, and save the output scale in quantized ops
  * Update docs for post-training quantization and QAT
- 05 November 2019, 1 commit
  Committed by juncaipeng
  * Add post-training quantization, test=develop
  * Specify the quantizable op types, test=develop
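Read bottom-up, these commits build out a post-training quantization workflow: load a float model, run calibration batches through it, compute scales for the quantizable op types, and save an int8 model. The snippet below is an assumption-laden outline of how the pieces named in the messages (model_filename, params_filename, batch_generator, quantizable_op_type, the min_max algorithm) might fit together; the exact import path and argument names vary across Paddle releases, so treat this as a sketch rather than a verified example.

```python
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.contrib.slim.quantization import PostTrainingQuantization

def calib_batch_generator():
    # Yield batches of inputs matching the model's feed list (random data here,
    # purely for illustration; real calibration needs representative samples).
    for _ in range(10):
        yield [np.random.random((1, 3, 224, 224)).astype('float32')]

exe = fluid.Executor(fluid.CPUPlace())
ptq = PostTrainingQuantization(
    executor=exe,
    model_dir='./fp32_model',          # hypothetical path to the saved float model
    model_filename='model',            # used when the model is saved as a single file
    params_filename='params',          # likewise for the combined parameter file
    batch_generator=calib_batch_generator,
    batch_nums=10,
    algo='min_max',                    # or 'KL' / 'abs_max', depending on the release
    quantizable_op_type=['conv2d', 'depthwise_conv2d', 'mul', 'matmul'],
)
ptq.quantize()
ptq.save_quantized_model('./int8_model')
```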