- 02 Jun 2020, 1 commit
  Committed by Wojciech Uss
- 27 May 2020, 1 commit
  Committed by cc
- 26 May 2020, 2 commits
- 13 May 2020, 1 commit
  Committed by cc
  [Fix bug] Init scale node in OutScaleForTrainingPass and enable test_quantization_scale_pass UT (#24393)
  * Init scale node in OutScaleForTrainingPass, test=develop
  * Enable test_quantization_scale, test=develop
- 08 May 2020, 1 commit
  Committed by Wojciech Uss
  * Enabled quantize all and skip missing in QAT
- 28 Apr 2020, 1 commit
  Committed by Sylwester Fraczek
- 24 Apr 2020, 1 commit
  Committed by arlesniak
- 23 Apr 2020, 1 commit
  Committed by Wojciech Uss
  * QAT: support range-based quantization and scales from attribute
  * added support for channelwise
- 17 Apr 2020, 1 commit
  Committed by cc
  * Weight quantization supports the channel_wise_abs_max method to achieve higher accuracy
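The channel_wise_abs_max method mentioned in the commit above computes one scale per output channel instead of a single scale for the whole weight tensor, which loses less precision. A minimal NumPy sketch of the idea (illustrative only, not the Paddle pass; function and variable names here are made up):

```python
import numpy as np

def channel_wise_abs_max_quantize(weight, bits=8):
    """Quantize a conv weight (OIHW layout) per output channel.

    Each output channel gets its own scale max(|w|) over that channel,
    instead of one tensor-wide abs_max scale.
    """
    qmax = (1 << (bits - 1)) - 1                      # 127 for 8 bits
    flat = np.abs(weight).reshape(weight.shape[0], -1)
    scales = np.maximum(flat.max(axis=1), 1e-8)       # avoid divide-by-zero
    q = np.round(weight / scales[:, None, None, None] * qmax)
    return q.astype(np.int8), scales

w = np.random.randn(4, 3, 3, 3).astype(np.float32)   # toy conv weight
qw, scales = channel_wise_abs_max_quantize(w)
# dequantize to check the round-trip error per channel
deq = qw.astype(np.float32) * scales[:, None, None, None] / 127
```

Per element the reconstruction error is bounded by half a quantization step of that channel's scale, which is why per-channel scales help when channel magnitudes differ a lot.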
- 13 Apr 2020, 1 commit
  Committed by joanna.wozna.intel
- 10 Apr 2020, 1 commit
  Committed by Wojciech Uss
- 07 Apr 2020, 1 commit
  Committed by cc
  * Collect output scale for quantized op and fused op
  * Post_training_quantization sets batch_generator to support LoD tensor
- 02 Apr 2020, 1 commit
  Committed by Wojciech Uss
- 29 Mar 2020, 1 commit
  Committed by Wojciech Uss
- 28 Mar 2020, 1 commit
  Committed by Wojciech Uss
- 26 Mar 2020, 1 commit
  Committed by cc
- 24 Mar 2020, 1 commit
  Committed by cc
  * Post_training_quantization supports the min_max method
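The min_max method referenced above derives an activation scale from the extreme values observed while running calibration data through the model. A rough NumPy sketch of that idea (names are illustrative, not the Paddle API):

```python
import numpy as np

def min_max_activation_scale(calibration_batches):
    """Track the global min and max of an activation tensor across all
    calibration batches; the scale is the larger absolute extreme."""
    lo, hi = float("inf"), float("-inf")
    for batch in calibration_batches:
        lo = min(lo, float(batch.min()))
        hi = max(hi, float(batch.max()))
    return max(abs(lo), abs(hi))

# toy calibration set: 10 batches of a hypothetical activation
batches = [np.random.uniform(-2.0, 3.0, size=(8, 16)) for _ in range(10)]
scale = min_max_activation_scale(batches)
```

Compared with histogram-based methods, min_max is cheap but sensitive to outliers, since a single extreme value sets the whole range.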
- 23 Feb 2020, 1 commit
  Committed by tianshuo78520a
- 12 Feb 2020, 1 commit
  Committed by Wojciech Uss
  * a test for Ernie QAT INT8 accuracy check test=develop
  * Remove NLP comparison test to split PRs test=develop
  * Fix typo and tabs, delete commented lines test=develop
  * re-combine the 2 PRs, test=develop
  Co-authored-by: Michał Gallus <sand3r@interia.eu>
  Co-authored-by: bingyanghuang <33643817+bingyanghuang@users.noreply.github.com>
- 10 Feb 2020, 1 commit
  Committed by cc
  * post_training_quantization supports setting bits, test=develop
  * up, test=develop
- 07 Feb 2020, 1 commit
  Committed by cc
  * support weight quantization in post_training_quantization, test=develop
  * add test for weight quantization, test=develop
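The two entries above, weight quantization in post-training quantization and a configurable bit width, can be sketched with a tensor-wide abs_max scheme. This is a hand-written approximation of the concept, not the code in those commits:

```python
import numpy as np

def abs_max_weight_quantize(weight, bits=8):
    """Quantize weights to `bits`-wide signed integers using a single
    abs_max scale; return the integers plus the scale for dequantizing."""
    qmax = (1 << (bits - 1)) - 1            # 127 for 8 bits, 7 for 4 bits
    scale = max(float(np.abs(weight).max()), 1e-8)
    q = np.round(weight / scale * qmax).astype(np.int32)
    return q, scale

w = np.random.randn(64, 128).astype(np.float32)
q8, s8 = abs_max_weight_quantize(w, bits=8)   # 255 usable levels
q4, s4 = abs_max_weight_quantize(w, bits=4)   # 15 levels, much coarser
```

Lowering the bit width shrinks the model further at the cost of a larger worst-case rounding error (half a step, i.e. 0.5 * scale / qmax).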
- 15 Jan 2020, 1 commit
  Committed by juncaipeng
  * add mul and matmul quantization, test=develop
  * add test for matmul, test=develop
- 19 Dec 2019, 1 commit
  Committed by juncaipeng
  * fix post training quantization bug under memory constraints, support the inputs being different, test=develop
- 02 Dec 2019, 1 commit
  Committed by juncaipeng
- 28 Nov 2019, 1 commit
  Committed by lidanqing
  * add ut for comparing FP32 and QAT INT8
  * add save qat transformed model python script test=develop
  * updated
  * added missing file
  * add "with_label" test=develop
  * performance benchmark as unit test test=develop
  * change names of unnecessary thing
  * Change CMakeList.txt for model downloading and UT test=develop
  * change names of functions and params for more readable code test=develop
  * Change PADDLE_ENFORCE messages test=develop
  * fix indent problems test=develop
  * indent problems test=develop
- 26 Nov 2019, 1 commit
  Committed by itminner
- 25 Nov 2019, 1 commit
  Committed by juncaipeng
- 20 Nov 2019, 1 commit
  Committed by juncaipeng
  * support setting model_filename and params_filename in post_training_quantization, test=develop
- 16 Nov 2019, 1 commit
  Committed by juncaipeng
  * Support more ops in post training quantization, and save the output scale in quantized op.
  * Update docs in post training quantization and qat
- 14 Nov 2019, 1 commit
  Committed by joanna.wozna.intel
  test=develop
- 06 Nov 2019, 1 commit
  Committed by juncaipeng
  * Fix bug of repeatedly inserting add_quant_dequant_op for the same variable in add_quant_dequant_pass, test=develop
- 05 Nov 2019, 1 commit
  Committed by juncaipeng
  * add post training quantization, test=develop
  * specify the quantizable op type, test=develop
- 16 Oct 2019, 1 commit
  Committed by juncaipeng
  * move pool2d to add_quant_dequant_pass, test=develop
- 11 Oct 2019, 1 commit
  Committed by Liufang Sang
  * fix fuse_reduce_op quantization bug test=develop
  * close fuse_all_reduce_ops in PaddleSlim, test=develop
- 10 Oct 2019, 1 commit
  Committed by Michał Gallus
  * Leave fake quantization around mul
  * Replace Fake with Real Quantized Mul
  * Gather all scales from fake_quantize_ops
  * Enable uint8 in conv_relu tensors
  * Disable int8 mul and restore fake mul
  * Fix bug for running QAT on VGG16 and 19
- 28 Sep 2019, 1 commit
  Committed by bingyanghuang
  * Follow Wangzhen's comment in PR 18970, test=develop
  * Review comments, test=develop
  * Leave fake quantization around mul test=develop
  * Replace Fake with Real Quantized Mul test=develop
  * Fix bug in quantize placement pass: nodes in the graph are now checked by type instead of node name when they are to be marked for quantization test=develop
- 25 Sep 2019, 1 commit
  Committed by Wojciech Uss
  * Add support for new QAT models test=develop
  Co-authored-by: Michał Gallus <michal.gallus@intel.com>
  Co-authored-by: Wojciech Uss <wojciech.uss@intel.com>
  * fixed fps results test=develop
  * fix top5 accuracy drop problem
  * updated for new QAT models
  * skip quantizing average pooling - dirty but working
  * add missing pass
  * added missing conv+brelu fuse pass
  * removed a call to non-existent pass test=develop
  * renamed pass test=develop
  * Adjust finding pooling scale to newest QAT models
  * Remove unnecessary code from quantization_mkldnn_pass
  * Copy Pooling input scale to output scale in QAT
  * Refactor & remove unused code in QAT
  * Incorporate fp32 FC into QAT test=develop
  * Enable graph drawing with debug flag test=develop
  * Add tests for QATv2
  * Fix paths for QATv2 models test=develop
  * Add option to save transformed int8 qat model test=develop
  * Remove redundant lines from qat mkldnn pass test=develop
  * Delegate disablement of avg pooling to qat test=develop
  * fix CI bug, test=develop
  * Follow Wangzhen's Review, test=develop
  * Update API.spec test=develop
  * Name False in (is_unsigned, TensorScale) tuple test=develop
- 24 Sep 2019, 1 commit
  Committed by whs
  1. Support customized eval function instead of eval program.
  2. Fix loading checkpoint in quantization strategy.
  3. Support saving eval model when saving a checkpoint.
  4. Fix decoder of loading context in PaddleSlim.
  5. Fix restoring from the checkpoint of uniform prune strategy.
  6. Support saving eval model and infer model during training.
  7. Add unit test for saving eval model, saving infer model and uniform pruning restoring from the checkpoint.
  8. Fix pruning of depthwise_conv_grad op by updating the groups.
- 23 Sep 2019, 1 commit
  Committed by juncaipeng
  * add fake_quant_dequant_op for average pool2d
  * add test
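A fake quant-dequant op like the one added above simulates quantization inside the training graph: it rounds a tensor to integer levels and immediately converts back to float, so downstream ops see the quantization error while all arithmetic stays in floating point. A minimal sketch of that behavior (illustrative only, not the actual Paddle op):

```python
import numpy as np

def fake_quant_dequant(x, bits=8):
    """Quantize with an abs_max scale, then dequantize right away.
    The output stays float but takes at most 2**bits - 1 distinct levels."""
    qmax = (1 << (bits - 1)) - 1
    scale = max(float(np.abs(x).max()), 1e-8)
    q = np.round(x / scale * qmax)     # integer-valued float tensor
    return q * scale / qmax            # back to the original range

x = np.random.randn(2, 4, 6, 6).astype(np.float32)
y = fake_quant_dequant(x)
```

Because the output has the same shape, dtype, and approximate values as the input, the op can be dropped into an existing graph (here, after an average pool2d) without changing any surrounding code.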