- 16 Jun 2022, 1 commit
  Committed by Guanghua Yu
  * Add progress bar and speed up Quantization Pass
  * fix typo
- 09 Jun 2022, 1 commit
  Committed by Guanghua Yu
  * support fuse conv and bn in QAT (#42255)
  * support skip_op_list in PostTrainingQuantization (#42378)
  * fix unittest
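
A minimal sketch of how the skip option named above might be wired into the post-training quantizer. The `skip_op_list` keyword is quoted from the commit message and may differ from the keyword in a given release; the model path and calibration reader are placeholders.

```python
# Sketch only: skip_op_list is quoted from the commit message and may differ
# from the released keyword; model path and calibration reader are placeholders.
import numpy as np
import paddle
from paddle.fluid.contrib.slim.quantization import PostTrainingQuantization

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

def sample_generator():
    # Dummy calibration samples shaped like the model input (assumed NCHW).
    for _ in range(64):
        yield [np.random.rand(3, 224, 224).astype("float32")]

ptq = PostTrainingQuantization(
    executor=exe,
    model_dir="./mobilenet_fp32",        # hypothetical FP32 inference model
    sample_generator=sample_generator,
    batch_size=16,
    batch_nums=4,
    algo="KL",
    skip_op_list=["softmax"],            # leave these ops un-quantized (per commit)
)
ptq.quantize()
ptq.save_quantized_model("./mobilenet_int8")
```
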
- 05 Apr 2022, 1 commit
  Committed by Guanghua Yu
- 28 Mar 2022, 1 commit
  Committed by Guanghua Yu
  * add adaround post-quant method
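
AdaRound learns, per weight, whether to round up or down so that the layer output error is minimized, instead of always rounding to nearest. A sketch of selecting it during post-training quantization, assuming it is exposed through a `round_type` argument; the model path and reader are placeholders.

```python
# Sketch: selecting the AdaRound rounding strategy in post-training quantization.
# The round_type keyword is an assumption based on this commit; paths and the
# calibration reader are placeholders.
import numpy as np
import paddle
from paddle.fluid.contrib.slim.quantization import PostTrainingQuantization

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

def sample_generator():
    # Dummy calibration samples; replace with real preprocessed inputs.
    for _ in range(32):
        yield [np.random.rand(3, 224, 224).astype("float32")]

ptq = PostTrainingQuantization(
    executor=exe,
    model_dir="./resnet_fp32",      # hypothetical FP32 inference model
    sample_generator=sample_generator,
    batch_size=8,
    batch_nums=4,
    algo="abs_max",
    round_type="adaround",          # learn per-weight rounding (AdaRound)
)
ptq.quantize()
ptq.save_quantized_model("./resnet_int8_adaround")
```
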
- 15 Mar 2022, 1 commit
  Committed by Guanghua Yu
  * add some ops for full_quantization
- 11 Mar 2022, 1 commit
  Committed by Guanghua Yu
- 05 Jan 2022, 1 commit
  Committed by Jiaqi Liu
  * make post training quant API support dataloader
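
A sketch of feeding calibration data through a `paddle.io.DataLoader` instead of a sample generator. The `data_loader` keyword reflects this commit's description and is an assumption here; the dataset, model path, and the exact batch format the quantizer expects from the loader are placeholders.

```python
# Sketch: using a DataLoader as the calibration source. The data_loader keyword
# is an assumption based on this commit; the dataset and model path are dummies,
# and the loader must yield batches matching the model's feed.
import numpy as np
import paddle
from paddle.fluid.contrib.slim.quantization import PostTrainingQuantization

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

class RandomCalibDataset(paddle.io.Dataset):
    # Dummy calibration dataset; replace with real preprocessed samples.
    def __len__(self):
        return 64
    def __getitem__(self, idx):
        return np.random.rand(3, 224, 224).astype("float32")

loader = paddle.io.DataLoader(RandomCalibDataset(), batch_size=8)

ptq = PostTrainingQuantization(
    executor=exe,
    model_dir="./model_fp32",   # hypothetical inference model directory
    data_loader=loader,         # calibration batches come from the DataLoader
    batch_nums=8,
    algo="hist",
)
ptq.quantize()
ptq.save_quantized_model("./model_int8")
```
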
- 13 Dec 2021, 1 commit
  Committed by xiongkun
  * fix single card 8 unittests in new executor
  * fix
  * fix
- 10 Dec 2021, 1 commit
  Committed by Guanghua Yu
  * Support sub-graph quant-post
- 10 Sep 2021, 1 commit
  Committed by whs
- 10 Aug 2021, 1 commit
  Committed by XGZhang
- 05 Jul 2021, 1 commit
  Committed by cc
  * refine ptq according to comments
  * reuse the module to calculate kl threshold
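
The KL threshold mentioned here refers to the standard KL-divergence calibration used in post-training quantization: candidate clipping thresholds are scanned over the activation histogram, and the one that minimizes the KL divergence between the original and the quantized distribution is kept. A generic numpy illustration of the idea (simplified, not Paddle's internal routine):

```python
# Generic numpy illustration of KL-divergence threshold search for activation
# calibration (simplified; not Paddle's internal routine).
import numpy as np

def _kl(p, q):
    p = p / p.sum()
    q = q / max(q.sum(), 1e-12)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], 1e-12))))

def kl_threshold(activations, num_bins=2048, num_quant_bins=128):
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    hist = hist.astype(np.float64)
    best_t, best_kl = edges[-1], np.inf
    for i in range(num_quant_bins, num_bins + 1):
        # Reference distribution: clip all mass above edge i into the last kept bin.
        ref = hist[:i].copy()
        ref[-1] += hist[i:].sum()
        # Candidate: collapse the kept bins to num_quant_bins levels, then spread
        # each level's mass back over its nonzero source bins.
        cand = np.zeros(i)
        for c in np.array_split(np.arange(i), num_quant_bins):
            nz = c[hist[c] > 0]
            if len(nz):
                cand[nz] = hist[c].sum() / len(nz)
        kl = _kl(ref, cand)
        if kl < best_kl:
            best_kl, best_t = kl, edges[i]
    return best_t

acts = np.random.laplace(scale=0.5, size=100_000).astype(np.float32)
print("KL calibration threshold:", kl_threshold(acts))
```
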
- 14 Apr 2021, 1 commit
  Committed by XGZhang
- 08 Apr 2021, 1 commit
  Committed by cc
  * Support converting the model from fp32 to fp16
- 25 Jan 2021, 1 commit
  Committed by yingshengBD
  * post quantize: support inserting fake_quantize_dequantize nodes before the OPs used in VIS's faceid models (#30659) test=develop
- 18 Jan 2021, 1 commit
  Committed by cc
  * Collect weight threshold of lstm, test=develop
- 15 Sep 2020, 1 commit
  Committed by cc
  * Remove the cache in post_training_quantization, test=develop
- 19 Aug 2020, 1 commit
  Committed by cc
  * Conv2d_transpose and mul support channelwise quantization, test=develop
  * Skip collecting out threshold for output tensors whose type is not fp32 or fp64, test=develop
  * Fix error in test_user_defined_quantization, test=develop
  * Add depthwise_conv_bn_fuse, test=develop
  * Add conv_transpose_bn_fuse_pass for post_training_quant, test=develop
- 07 Jul 2020, 2 commits
- 06 Jul 2020, 1 commit
  Committed by cc
  * Save output threshold by argname_index, test=develop
- 02 Jun 2020, 1 commit
  Committed by cc
  * Post_training_quantization supports optimizing the model by fusing, test=develop
- 17 Apr 2020, 1 commit
  Committed by cc
  * Weight quantization supports the channel_wise_abs_max method to achieve higher accuracy
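
Channel-wise abs-max quantization computes one scale per output channel of a weight tensor instead of a single scale for the whole tensor, which helps when channels have very different value ranges. A minimal numpy illustration of the idea (not Paddle's internal code), assuming the output-channel axis is 0 as in conv2d weights:

```python
# Minimal numpy illustration of channel_wise_abs_max weight quantization
# (not Paddle's internal code). Assumes the output-channel axis is 0, as in
# conv2d weights of shape [out_ch, in_ch, kh, kw].
import numpy as np

def channel_wise_abs_max_quant(weight, bits=8):
    qmax = 2 ** (bits - 1) - 1                      # 127 for int8
    flat = weight.reshape(weight.shape[0], -1)
    scales = np.abs(flat).max(axis=1)               # one scale per output channel
    scales = np.where(scales == 0, 1e-8, scales)    # avoid division by zero
    q = np.clip(np.round(flat / scales[:, None] * qmax), -qmax, qmax).astype(np.int8)
    return q.reshape(weight.shape), scales

def channel_wise_dequant(q, scales, bits=8):
    qmax = 2 ** (bits - 1) - 1
    flat = q.reshape(q.shape[0], -1).astype(np.float32)
    return (flat * scales[:, None] / qmax).reshape(q.shape)

w = np.random.randn(16, 8, 3, 3).astype(np.float32)
q, s = channel_wise_abs_max_quant(w)
w_hat = channel_wise_dequant(q, s)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```
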
- 07 Apr 2020, 1 commit
  Committed by cc
  * Collect output scale for quantized op and fused op
  * Post_training_quantization sets batch_generator to support LoD tensor
- 24 Mar 2020, 1 commit
  Committed by cc
  * Post_training_quantization supports min_max method
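
The min_max method simply tracks the running minimum and maximum of each tensor over the calibration batches and derives a scale from the larger absolute bound, in contrast to histogram-based methods such as KL. A small numpy sketch of that idea (illustrative only, not Paddle's implementation):

```python
# Illustrative min_max calibration: track running min/max over calibration
# batches and derive a symmetric int8 scale (not Paddle's implementation).
import numpy as np

class MinMaxObserver:
    def __init__(self):
        self.min_val = np.inf
        self.max_val = -np.inf

    def update(self, batch):
        self.min_val = min(self.min_val, float(batch.min()))
        self.max_val = max(self.max_val, float(batch.max()))

    def scale(self, bits=8):
        # Symmetric quantization: the threshold is the larger absolute bound.
        threshold = max(abs(self.min_val), abs(self.max_val))
        return threshold / (2 ** (bits - 1) - 1)

obs = MinMaxObserver()
for _ in range(10):                      # pretend these are calibration batches
    obs.update(np.random.randn(8, 64).astype(np.float32))
print("int8 scale:", obs.scale())
```
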
- 10 Feb 2020, 1 commit
  Committed by cc
  * post_training_quantization supports setting the bit width, test=develop
  * up, test=develop
- 07 Feb 2020, 1 commit
  Committed by cc
  * support weight quantization in post_training_quantization, test=develop
  * add test for weight quantization, test=develop
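
Weight-only (dynamic offline) quantization converts just the weights of a saved model and needs no calibration data. A sketch of driving the WeightQuantization helper from the same contrib quantization module; the paths and the exact argument set shown are illustrative assumptions.

```python
# Sketch of weight-only post-training quantization. Paths and the exact
# argument set are illustrative assumptions, not a definitive signature.
import paddle
from paddle.fluid.contrib.slim.quantization import WeightQuantization

paddle.enable_static()

wq = WeightQuantization(model_dir="./model_fp32")   # hypothetical model path
wq.quantize_weight_to_int8(
    save_model_dir="./model_weight_int8",
    weight_bits=8,
    quantizable_op_type=["conv2d", "depthwise_conv2d", "mul"],
)
```
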
- 15 Jan 2020, 1 commit
  Committed by juncaipeng
  * add mul and matmul quantization, test=develop
  * add test for matmul, test=develop
- 19 Dec 2019, 1 commit
  Committed by juncaipeng
  * fix memory-constraint bug in post training quantization, support the inputs being different, test=develop
- 25 Nov 2019, 1 commit
  Committed by juncaipeng
- 20 Nov 2019, 1 commit
  Committed by juncaipeng
  * support setting model_filename and params_filename in post_training_quantization, test=develop
- 16 Nov 2019, 1 commit
  Committed by juncaipeng
  * Support more ops in post training quantization, and save the output scale in quantized op.
  * Update docs in post training quantization and qat
- 05 Nov 2019, 1 commit
  Committed by juncaipeng
  * add post training quantization, test=develop
  * specify the quantizable op type, test=develop
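
For context, a minimal end-to-end sketch of how the post-training quantization API introduced here is typically driven, including the quantizable op type selection this commit mentions. The model path and calibration reader are placeholders, and the constructor arguments shown have evolved across the commits listed above.

```python
# Minimal end-to-end sketch of the post-training quantization flow. Model path
# and calibration reader are placeholders; argument names have evolved across
# the commits listed above.
import numpy as np
import paddle
from paddle.fluid.contrib.slim.quantization import PostTrainingQuantization

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

def sample_generator():
    # Dummy calibration samples shaped like the model input (assumed NCHW).
    for _ in range(64):
        yield [np.random.rand(3, 224, 224).astype("float32")]

ptq = PostTrainingQuantization(
    executor=exe,
    model_dir="./inference_model_fp32",          # hypothetical FP32 model
    sample_generator=sample_generator,
    batch_size=16,
    batch_nums=4,
    algo="KL",                                   # KL-divergence calibration
    quantizable_op_type=["conv2d", "depthwise_conv2d", "mul"],
)
ptq.quantize()
ptq.save_quantized_model("./inference_model_int8")
```
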