- 07 July 2020, 2 commits
- 06 July 2020, 2 commits

Committed by Wojciech Uss

Committed by cc
* Save output threshold by argname_index, test=develop

- 30 June 2020, 1 commit

Committed by Wojciech Uss
test=develop

- 29 June 2020, 1 commit

Committed by Wojciech Uss
test=develop

- 17 June 2020, 1 commit

Committed by cc

- 04 June 2020, 1 commit

Committed by Liufang Sang
* add user defined func test=develop
* update test=develop
* update test=develop
* fix name conflicts test=develop
* add unittest test=develop
* change 2018 to 2020 test=develop
* add comment test=develop
* add comment for function test=develop
* fix details test=develop
* fix details test=develop

- 02 June 2020, 2 commits

Committed by cc
* Post_training_quantization supports optimize model by fusing, test=develop

Committed by Wojciech Uss
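
A minimal sketch of how the model-fusing option from the 02 June cc commit above might be driven through Paddle's post-training quantization entry point. The `optimize_model` flag, the paths, and the calibration reader below are illustrative assumptions, not details taken from the commit:

```python
# Hedged sketch (Paddle 1.x fluid API assumed): post-training quantization with
# graph optimization (e.g. conv + batch_norm fusing) applied before calibration.
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.contrib.slim.quantization import PostTrainingQuantization

def calib_reader():
    # Placeholder calibration data; a real run would feed samples from the dataset.
    for _ in range(10):
        yield [np.random.random((1, 3, 224, 224)).astype('float32')]

exe = fluid.Executor(fluid.CPUPlace())
ptq = PostTrainingQuantization(
    executor=exe,
    model_dir='fp32_inference_model',   # hypothetical path to a saved FP32 model
    sample_generator=calib_reader,
    batch_nums=10,
    algo='KL',
    optimize_model=True)                # assumed flag enabling the fusing pass
ptq.quantize()
ptq.save_quantized_model('int8_inference_model')
```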

- 27 May 2020, 1 commit

Committed by cc

- 26 May 2020, 2 commits
- 13 May 2020, 1 commit

Committed by cc
[Fix bug] Init scale node in OutScaleForTrainingPass and enable test_quantization_scale_pass UT (#24393)
* Init scale node in OutScaleForTrainingPass, test=develop
* Enable test_quantization_scale, test=develop

- 08 May 2020, 1 commit

Committed by Wojciech Uss
* Enabled quantize all and skip missing in QAT

- 28 April 2020, 1 commit

Committed by Sylwester Fraczek

- 24 April 2020, 1 commit

Committed by arlesniak

- 23 April 2020, 1 commit

Committed by Wojciech Uss
* QAT: support range-based quantization and scales from attribute
* added support for channelwise

- 17 April 2020, 1 commit

Committed by cc
* Weight quantization supports the channel_wise_abs_max method to achieve higher accuracy
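
A hedged sketch of how weight-only quantization with the channel-wise abs-max method mentioned above might be invoked. The `WeightQuantization` class and argument names follow the Paddle 1.x slim API as I understand it; the paths and op list are illustrative assumptions:

```python
# Hedged sketch (Paddle 1.x slim API assumed): weight-only quantization using
# per-channel abs-max scales instead of a single per-tensor scale.
from paddle.fluid.contrib.slim.quantization import WeightQuantization

wq = WeightQuantization(model_dir='fp32_inference_model')    # hypothetical FP32 model path
wq.quantize_weight_to_int(
    save_model_dir='int8_weight_only_model',
    weight_bits=8,
    quantizable_op_type=['conv2d', 'depthwise_conv2d', 'mul'],
    weight_quantize_type='channel_wise_abs_max')              # assumed per-channel option
```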

- 13 April 2020, 1 commit

Committed by joanna.wozna.intel

- 10 April 2020, 1 commit

Committed by Wojciech Uss

- 07 April 2020, 1 commit

Committed by cc
* Collect output scale for quantized op and fused op
* Post_training_quantization sets batch_generator to support lod tensor
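
For the batch_generator change noted in the commit above, a sketch of how variable-length (LoD) inputs might be fed during calibration; the model path, shapes, and LoD layout are assumptions for illustration:

```python
# Hedged sketch (Paddle 1.x fluid API assumed): use batch_generator so each yielded
# batch can be a LoDTensor, which the per-sample sample_generator cannot express.
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.contrib.slim.quantization import PostTrainingQuantization

place = fluid.CPUPlace()

def batch_generator():
    for _ in range(10):
        ids = np.random.randint(0, 100, (7, 1)).astype('int64')
        # One batch holding two sequences of lengths 3 and 4.
        yield [fluid.create_lod_tensor(ids, [[3, 4]], place)]

exe = fluid.Executor(place)
ptq = PostTrainingQuantization(
    executor=exe,
    model_dir='nlp_fp32_model',        # hypothetical path
    batch_generator=batch_generator,   # assumed alternative to sample_generator
    batch_nums=10,
    algo='KL')
ptq.quantize()
ptq.save_quantized_model('nlp_int8_model')
```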

- 02 April 2020, 1 commit

Committed by Wojciech Uss

- 29 March 2020, 1 commit

Committed by Wojciech Uss

- 28 March 2020, 1 commit

Committed by Wojciech Uss

- 26 March 2020, 1 commit

Committed by cc

- 24 March 2020, 1 commit

Committed by cc
* Post_training_quantization supports min_max method

- 23 February 2020, 1 commit

Committed by tianshuo78520a

- 12 February 2020, 1 commit

Committed by Wojciech Uss
* a test for Ernie QAT INT8 accuracy check test=develop
* Remove NLP comparison test to split PRs test=develop
* Fix typo and tabs, delete commented lines test=develop
* re-combine the 2 PRs, test=develop
Co-authored-by: Michał Gallus <sand3r@interia.eu>
Co-authored-by: bingyanghuang <33643817+bingyanghuang@users.noreply.github.com>

- 10 February 2020, 1 commit

Committed by cc
* post_training_quantization support set bits, test=develop
* up, test=develop

- 07 February 2020, 1 commit

Committed by cc
* support weight quantization in post_training_quantization, test=develop
* add test for weight quantization, test=develop

- 15 January 2020, 1 commit

Committed by juncaipeng
* add mul and matmul quantization, test=develop
* add test for matmul, test=develop

- 19 December 2019, 1 commit

Committed by juncaipeng
* fix post training quantization bug of memory constrained, support the input be different, test=develop

- 02 December 2019, 1 commit

Committed by juncaipeng

- 28 November 2019, 1 commit

Committed by lidanqing
* add ut for comparing FP32 and QAT INT8
* add save qat transformed model python script test=develop
* updated
* added missing file
* add "with_label" test=develop
* performance benchmark as unit test test=develop
* change names of unnecessary thing
* Change CMakeList.txt for model downloading and UT test=develop
* change names of functions and params for more readable code test=develop
* Change PADDLE_ENFORCE messages test=develop
* fix indent problems test=develop
* indent problems test=develop

- 26 November 2019, 1 commit

Committed by itminner

- 25 November 2019, 1 commit

Committed by juncaipeng

- 20 November 2019, 1 commit

Committed by juncaipeng
* support set model_filename and params_filename in post_training_quantization, test=develop
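
A brief sketch of the model_filename / params_filename options mentioned in the commit above, for models whose program and parameters were saved under custom file names; all names and paths are illustrative assumptions:

```python
# Hedged sketch (Paddle 1.x API assumed): point post-training quantization at a model
# saved with a custom program file and a single combined parameters file.
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.contrib.slim.quantization import PostTrainingQuantization

def calib_reader():
    # Placeholder calibration data, as in the earlier sketches.
    for _ in range(10):
        yield [np.random.random((1, 3, 224, 224)).astype('float32')]

exe = fluid.Executor(fluid.CPUPlace())
ptq = PostTrainingQuantization(
    executor=exe,
    model_dir='fp32_inference_model',
    model_filename='model',             # assumed: program saved as a single file
    params_filename='params',           # assumed: all parameters in one combined file
    sample_generator=calib_reader,
    algo='KL')
ptq.quantize()
ptq.save_quantized_model('int8_inference_model')
```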

- 16 November 2019, 1 commit

Committed by juncaipeng
* Support more ops in post training quantization, and save the output scale in quantized op.
* Update docs in post training quantization and qat

- 14 November 2019, 1 commit

Committed by joanna.wozna.intel
test=develop