- 23 Apr 2020, 1 commit
  Committed by Wojciech Uss
  * QAT: support range-based quantization and scales from attribute
  * Added support for channel-wise quantization

- 17 Apr 2020, 1 commit
  Committed by cc
  * Weight quantization supports the channel_wise_abs_max method for higher accuracy

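The channel_wise_abs_max method mentioned above computes one scale per output channel instead of a single per-tensor scale, which usually preserves accuracy better for convolution and fully-connected weights. Below is a minimal, hedged sketch of the idea in plain NumPy; it is not PaddlePaddle's implementation, and the function name and shapes are illustrative only.

```python
import numpy as np

def channel_wise_abs_max_quant(weight):
    """Symmetric INT8 quantization of a [out_channels, in_features] weight,
    with one abs-max scale per output channel (illustrative sketch only)."""
    qmax = 127                                        # symmetric INT8 range
    scales = np.abs(weight).max(axis=1)               # abs-max per output channel
    scales = np.where(scales == 0.0, 1.0, scales)     # guard against all-zero channels
    q = np.clip(np.round(weight / scales[:, None] * qmax), -qmax, qmax)
    return q.astype(np.int8), scales

w = np.random.randn(4, 16).astype(np.float32)
q, s = channel_wise_abs_max_quant(w)
w_restored = q.astype(np.float32) * s[:, None] / 127  # per-channel dequantization
```
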
- 11 Apr 2020, 1 commit
  Committed by Wojciech Uss
  * Updated the QAT INT8 MKL-DNN document: added information on VNNI on Windows, added and refreshed benchmark results

- 10 Apr 2020, 1 commit
  Committed by Wojciech Uss

- 07 Apr 2020, 2 commits
- 03 Apr 2020, 3 commits
- 02 Apr 2020, 1 commit
  Committed by Wojciech Uss

- 28 Mar 2020, 1 commit
  Committed by lidanqing

- 24 Mar 2020, 1 commit
  Committed by cc
  * Post-training quantization supports the min_max method

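With a min_max calibration strategy, the calibration pass tracks the running minimum and maximum of each tensor over the sample batches and derives a symmetric scale from the larger magnitude. The snippet below is a hedged, conceptual sketch of that bookkeeping in NumPy, not the actual post_training_quantization code; the name `MinMaxObserver` is invented for illustration.

```python
import numpy as np

class MinMaxObserver:
    """Tracks running min/max over calibration batches (illustrative only)."""
    def __init__(self):
        self.t_min = float("inf")
        self.t_max = float("-inf")

    def update(self, tensor):
        self.t_min = min(self.t_min, float(tensor.min()))
        self.t_max = max(self.t_max, float(tensor.max()))

    def scale(self, qmax=127):
        # Symmetric scale from the larger of |min| and |max|.
        return max(abs(self.t_min), abs(self.t_max)) / qmax

obs = MinMaxObserver()
for _ in range(8):                                   # pretend calibration batches
    obs.update(np.random.randn(32, 128).astype(np.float32))
print("activation scale:", obs.scale())
```
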
- 11 Mar 2020, 1 commit
  Committed by lidanqing

- 05 Mar 2020, 1 commit
  Committed by cc
  * Added an option to use an external FP32 model in the QAT comparison test

- 04 Mar 2020, 1 commit
  Committed by Sylwester Fraczek

- 20 Feb 2020, 1 commit
  Committed by Wojciech Uss

- 19 Feb 2020, 1 commit
  Committed by bingyanghuang

- 18 Feb 2020, 1 commit
  Committed by Wojciech Uss
  * Doc update with Ernie QAT INT8 benchmarking
  * Fixes after review
  * Removed the Ernie part
  * Fixed the model name for QAT v2
  * Added Ernie data
  * Updated the ERNIE benchmark with Baidu QA results
  Co-authored-by: bingyanghuang <33643817+bingyanghuang@users.noreply.github.com>
  Co-authored-by: Michał Gallus <sand3r@interia.eu>

- 12 Feb 2020, 1 commit
  Committed by Wojciech Uss
  * Added a test for the Ernie QAT INT8 accuracy check
  * Removed the NLP comparison test to split the PRs
  * Fixed typos and tabs, deleted commented lines
  * Re-combined the two PRs
  Co-authored-by: Michał Gallus <sand3r@interia.eu>
  Co-authored-by: bingyanghuang <33643817+bingyanghuang@users.noreply.github.com>

- 10 Feb 2020, 1 commit
  Committed by cc
  * post_training_quantization supports setting the quantization bit width

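Making the bit width configurable changes only the size of the symmetric integer range used for rounding and clipping. The helper below is a hedged sketch of that relationship, not the library code; `symmetric_quantize` is a made-up name.

```python
import numpy as np

def symmetric_quantize(tensor, bits=8):
    """Quantize to a signed integer range determined by `bits` (sketch only)."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8 bits, 32767 for 16 bits
    scale = float(np.abs(tensor).max()) / qmax
    scale = scale if scale > 0 else 1.0   # guard against an all-zero tensor
    q = np.clip(np.round(tensor / scale), -qmax, qmax)
    return q, scale

x = np.random.randn(64).astype(np.float32)
for bits in (4, 8, 16):
    q, scale = symmetric_quantize(x, bits)
    err = np.abs(q * scale - x).mean()    # coarser grids give larger error
    print(bits, round(float(err), 6))
```
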
- 07 Feb 2020, 1 commit
  Committed by cc
  * Support weight quantization in post_training_quantization
  * Added a test for weight quantization

- 25 Jan 2020, 1 commit
  Committed by joanna.wozna.intel

- 16 Jan 2020, 1 commit
  Committed by juncaipeng

- 15 Jan 2020, 1 commit
  Committed by juncaipeng
  * Added mul and matmul quantization
  * Added a test for matmul

- 24 Dec 2019, 1 commit
  Committed by lidanqing

- 19 Dec 2019, 1 commit
  Committed by juncaipeng
  * Fixed a memory-constraint bug in post-training quantization and allowed the inputs to differ

- 12 Dec 2019, 1 commit
  Committed by juncaipeng

- 11 Dec 2019, 1 commit
  Committed by juncaipeng
  * Fixed a CI bug caused by deleting data files

- 09 Dec 2019, 1 commit
  Committed by lidanqing
  * Updated accuracy and performance benchmarks for INT8v2, QAT1, and QAT2
  * Changes according to reviews
  * Improved some descriptions and models
  * Updated the model benchmark data
  * Updated INT8v2 and QAT2 performance numbers

- 28 Nov 2019, 2 commits
  Committed by lidanqing
  * Added a UT comparing FP32 and QAT INT8
  * Added a Python script that saves the QAT-transformed model
  * Added a missing file
  * Added "with_label"
  * Ran the performance benchmark as a unit test
  * Renamed unclear items
  * Changed CMakeList.txt for model downloading and the UT
  * Renamed functions and parameters for more readable code
  * Changed PADDLE_ENFORCE messages
  * Fixed indentation problems

  Committed by Liufang Sang

- 26 Nov 2019, 1 commit
  Committed by itminner

- 25 Nov 2019, 2 commits
  Committed by juncaipeng

  Committed by Zeng Jinle
  * Added a global value getter/setter
  * Fixed error messages

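A global value getter/setter typically exposes process-wide settings (for example, global flags) through a small registry so other modules can read and update them by name. The sketch below is only a hedged illustration of that pattern in Python; it is not the PaddlePaddle implementation, and the registry and function names are invented.

```python
# Hypothetical registry illustrating a global value getter/setter pattern.
_GLOBAL_VALUES = {}

def set_global_value(name, value):
    """Register or update a named global value."""
    _GLOBAL_VALUES[name] = value

def get_global_value(name, default=None):
    """Look up a named global value, falling back to a default."""
    return _GLOBAL_VALUES.get(name, default)

set_global_value("use_mkldnn", True)
print(get_global_value("use_mkldnn"))        # True
print(get_global_value("missing", False))    # False
```
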
- 20 Nov 2019, 2 commits
  Committed by juncaipeng
  * Support setting model_filename and params_filename in post_training_quantization (see the usage sketch below)

  Committed by Liufang Sang

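Setting model_filename and params_filename lets post-training quantization load a model saved as a single model file plus a combined parameters file. The sketch below shows roughly how such a calibration run might be wired up; it is an assumption-laden illustration, since the exact import path, constructor arguments, and method names can differ between Paddle versions, and the directory and file names are hypothetical.

```python
import numpy as np
import paddle.fluid as fluid
# The import path and argument names below are assumptions based on the
# Paddle 1.x API of this period; check your installed version.
from paddle.fluid.contrib.slim.quantization import PostTrainingQuantization

def sample_generator():
    # Yields calibration samples; replace with a reader over real data.
    for _ in range(32):
        yield [np.random.random((3, 224, 224)).astype("float32")]

exe = fluid.Executor(fluid.CPUPlace())
ptq = PostTrainingQuantization(
    executor=exe,
    model_dir="fp32_model",       # hypothetical directory with the saved model
    model_filename="model",       # single model file (the feature added here)
    params_filename="params",     # combined parameters file
    sample_generator=sample_generator,
    batch_size=8,
    batch_nums=4,
    algo="KL",                    # 'min_max' is also supported (24 Mar 2020 entry)
    quantizable_op_type=["conv2d", "mul"],
)
ptq.quantize()
ptq.save_quantized_model("int8_model")
```
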
- 16 Nov 2019, 1 commit
  Committed by juncaipeng
  * Support more ops in post-training quantization, and save the output scale in the quantized op
  * Updated the docs for post-training quantization and QAT

- 08 Nov 2019, 1 commit
  Committed by juncaipeng

- 06 Nov 2019, 1 commit
  Committed by juncaipeng
  * Fixed a bug in add_quant_dequant_pass that inserted add_quant_dequant_op for the same variable repeatedly

- 05 Nov 2019, 1 commit
  Committed by juncaipeng
  * Added post-training quantization
  * Allow specifying the quantizable op types