  1. 11 December 2019 (1 commit)
  2. 09 December 2019 (1 commit)
    • QAT Int8 document (#21360) · fbf9eca0
      Committed by lidanqing
      * update benchmark for int8v2, QAT1, QAT2 accuracy and performance
      test=document_fix
      
      * change according to reviews
      test=develop test=document_fix
      
      * improve some descriptions and some models
      test=develop test=document_fix
      
      * update models benchmark data
      test=develop test=document_fix
      
      * update int8v2 and qat2 performance
      test=develop test=document_fix
  3. 02 December 2019 (1 commit)
  4. 28 November 2019 (2 commits)
    • Fp32 vs int8 qat C++ performance (#21244) · c0aa1367
      Committed by lidanqing
      * add ut for comparing FP32 and QAT INT8
      
      * add save qat transformed model python script
      test=develop
      
      * updated
      
      * added missing file
      
      * add "with_label"
      test=develop
      
      * performance benchmark as unit test
      test=develop
      
      * change names of unnecessary things
      
      * Change CMakeLists.txt for model downloading and UT
      test=develop
      
      * change names of functions and params for more readable code
      test=develop
      
      * Change PADDLE_ENFORCE messages
      test=develop
      
      * fix indent problems
      test=develop
      
      * indent problems
      test=develop
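
      The commit above benchmarks the same model once with FP32 kernels and once after
      the QAT INT8 transformation. As a framework-agnostic illustration of that kind of
      latency comparison (the run_fp32_inference / run_int8_inference callables, the
      warm-up count and the iteration count are assumptions for this sketch, not the
      actual test code):

          import time

          def latency_ms(run_once, warmup=10, iters=100):
              """Average wall-clock latency of run_once() in milliseconds."""
              for _ in range(warmup):   # warm-up runs: caches, MKL-DNN primitive creation, etc.
                  run_once()
              start = time.perf_counter()
              for _ in range(iters):
                  run_once()
              return (time.perf_counter() - start) * 1000.0 / iters

          # Hypothetical callables, each doing one forward pass of the same model:
          # fp32_ms = latency_ms(run_fp32_inference)
          # int8_ms = latency_ms(run_int8_inference)
          # print("INT8 speedup: %.2fx" % (fp32_ms / int8_ms))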
    • 1840c165
  5. 26 November 2019 (1 commit)
  6. 25 November 2019 (2 commits)
  7. 20 November 2019 (2 commits)
  8. 16 November 2019 (1 commit)
  9. 14 November 2019 (1 commit)
  10. 12 November 2019 (1 commit)
  11. 08 November 2019 (1 commit)
  12. 06 November 2019 (1 commit)
  13. 05 November 2019 (1 commit)
  14. 28 October 2019 (1 commit)
  15. 25 October 2019 (1 commit)
  16. 20 October 2019 (1 commit)
  17. 16 October 2019 (2 commits)
  18. 15 October 2019 (1 commit)
  19. 11 October 2019 (2 commits)
  20. 10 October 2019 (1 commit)
    • [Bug-fix][1.6] Improve QAT accuracy (#20174) · 540935a8
      Committed by Michał Gallus
      * Leave fake quantization around mul
      
      * Replace Fake with Real Quantized Mul
      
      * Gather all scales from fake_quantize_ops
      
      * Enable uint8 in conv_relu tensors
      
      * Disable int8 mul and restore fake mul
      
      * Fix bug for running QAT on VGG16 and 19
  21. 30 September 2019 (1 commit)
  22. 28 September 2019 (2 commits)
  23. 26 September 2019 (1 commit)
  24. 25 September 2019 (1 commit)
    • Add support for new QAT models (#18970) · 4286a627
      Committed by Wojciech Uss
      * Add support for new QAT models
      
      test=develop
      Co-Authored-By: Michał Gallus <michal.gallus@intel.com>
      Co-Authored-By: Wojciech Uss <wojciech.uss@intel.com>
      
      * fixed fps results
      
      test=develop
      
      * fix top5 accuracy drop problem
      
      * updated for new QAT models
      
      * skip quantizing average pooling - dirty but working
      
      * add missing pass
      
      * added missing conv+brelu fuse pass
      
      * removed a call to non-existent pass
      
      test=develop
      
      * renamed pass
      
      test=develop
      
      * Adjust finding pooling scale to newest QAT models
      
      * Remove unnecessary code from quantization_mkldnn_pass
      
      * Copy Pooling input scale to output scale in QAT
      
      * Refactor & remove unused code in QAT
      
      * Incorporate fp32 FC into QAT
      
      test=develop
      
      * Enable graph drawing with debug flag
      
      test=develop
      
      * Add tests for QATv2
      
      * Fix paths for QATv2 models
      
      test=develop
      
      * Add option to save transformed int8 qat model
      
      test=develop
      
      * Remove redundant lines from qat mkldnn pass
      
      test=develop
      
      * Delegate disablement of avg pooling to qat
      
      test=develop
      
      * fix CI bug, test=develop
      
      * Follow Wangzhen's Review, test=develop
      
      * Update API.spec
      
      test=develop
      
      * Name False in (is_unsigned, TensorScale) tuple
      
      test=develop
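
      Two bullets above (gathering scales from fake_quantize ops, and copying a pooling
      op's input scale to its output) come down to propagating per-tensor quantization
      scales through ops that do not change the value range. A toy sketch of that idea,
      using plain dictionaries instead of Paddle's graph classes (the op names and the
      scale value are illustrative assumptions):

          # Toy ops: type, input tensor names, output tensor names.
          ops = [
              {"type": "fake_quantize_moving_average_abs_max",
               "in": ["conv_out"], "out": ["conv_out_q"], "scale": 0.043},
              {"type": "pool2d", "in": ["conv_out_q"], "out": ["pool_out"]},
          ]

          # 1) Gather the scales published by fake_quantize ops.
          scales = {}
          for op in ops:
              if op["type"].startswith("fake_quantize"):
                  scales[op["out"][0]] = op["scale"]

          # 2) Pooling preserves the value range, so its output reuses the input scale.
          for op in ops:
              if op["type"] == "pool2d" and op["in"][0] in scales:
                  scales[op["out"][0]] = scales[op["in"][0]]

          print(scales)  # {'conv_out_q': 0.043, 'pool_out': 0.043}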
  25. 24 September 2019 (2 commits)
    • add optimizer:dpsgd,test=develop (#19915) · 766bd529
      Committed by jhjiangcs
    • [PaddleSlim] Enhance compressor API in PaddleSlim (#19894) · bdb3e376
      Committed by whs
      
      1. Support a customized eval function instead of an eval program.
      2. Fix loading checkpoints in the quantization strategy.
      3. Support saving the eval model when saving a checkpoint.
      4. Fix the decoder for loading context in PaddleSlim.
      5. Fix restoring from the checkpoint of the uniform prune strategy.
      6. Support saving the eval model and infer model during training.
      7. Add unit tests for saving the eval model, saving the infer model, and restoring uniform pruning from a checkpoint.
      8. Fix pruning of the depthwise_conv_grad op by updating the groups.
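
      Item 1 above replaces a fixed eval program with a user-supplied eval function.
      The rough shape of such a callback is sketched below; the signature, the dummy
      data and the metric are assumptions for illustration, not PaddleSlim's actual
      API:

          def eval_func(predict_batch, validation_set):
              """User-supplied eval callback: given a per-batch predict function for
              the current (pruned/quantized) model, return one scalar score the
              compressor can compare across checkpoints -- here, top-1 accuracy."""
              correct = total = 0
              for inputs, labels in validation_set:
                  preds = predict_batch(inputs)
                  correct += sum(int(p == y) for p, y in zip(preds, labels))
                  total += len(labels)
              return correct / float(total)

          # Tiny self-contained check with a dummy "model" that always predicts class 0:
          data = [([[0.1], [0.2]], [0, 1])]
          print(eval_func(lambda xs: [0 for _ in xs], data))  # 0.5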
  26. 23 September 2019 (1 commit)
  27. 18 September 2019 (1 commit)
  28. 11 September 2019 (1 commit)
  29. 06 September 2019 (1 commit)
  30. 03 September 2019 (1 commit)
  31. 29 August 2019 (1 commit)
  32. 26 August 2019 (2 commits)