1. 02 Jun, 2020 1 commit
  2. 27 May, 2020 1 commit
  3. 26 May, 2020 2 commits
  4. 13 May, 2020 1 commit
  5. 08 May, 2020 1 commit
  6. 28 Apr, 2020 1 commit
  7. 24 Apr, 2020 1 commit
  8. 23 Apr, 2020 1 commit
  9. 17 Apr, 2020 1 commit
  10. 13 Apr, 2020 1 commit
  11. 10 Apr, 2020 1 commit
  12. 07 Apr, 2020 1 commit
  13. 02 Apr, 2020 1 commit
  14. 29 Mar, 2020 1 commit
  15. 28 Mar, 2020 1 commit
  16. 26 Mar, 2020 1 commit
  17. 24 Mar, 2020 1 commit
  18. 23 Feb, 2020 1 commit
  19. 12 Feb, 2020 1 commit
  20. 10 Feb, 2020 1 commit
  21. 07 Feb, 2020 1 commit
  22. 15 Jan, 2020 1 commit
  23. 19 Dec, 2019 1 commit
  24. 02 Dec, 2019 1 commit
  25. 28 Nov, 2019 1 commit
    • FP32 vs INT8 QAT C++ performance (#21244) · c0aa1367
      lidanqing committed
      * add ut for comparing FP32 and QAT INT8
      
      * add a Python script to save the QAT-transformed model
      test=develop
      
      * updated
      
      * added missing file
      
      * add "with_label"
      test=develop
      
      * performance benchmark as unit test
      test=develop
      
      * change names of unnecessary things
      
      * Change CMakeLists.txt for model downloading and UT
      test=develop
      
      * change names of functions and params for more readable code
      test=develop
      
      * Change PADDLE_ENFORCE messages
      test=develop
      
      * fix indent problems
      test=develop
      
      * fix remaining indent problems
      test=develop
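The entry above adds a unit test that benchmarks the FP32 model against its QAT INT8 counterpart. Below is a minimal sketch of such a latency comparison; `measure_latency`, `run_fp32_model`, and `run_int8_model` are hypothetical names, not the helpers used in the actual test.

```python
import time

def measure_latency(predict_fn, data, warmup=10, repeat=100):
    """Average per-batch latency of predict_fn (hypothetical helper, not Paddle's test code)."""
    for _ in range(warmup):            # warm up caches before timing
        predict_fn(data)
    start = time.perf_counter()
    for _ in range(repeat):
        predict_fn(data)
    return (time.perf_counter() - start) / repeat

# Hypothetical usage: run_fp32_model / run_int8_model stand in for the two predictors.
# fp32_ms = measure_latency(run_fp32_model, batch) * 1000
# int8_ms = measure_latency(run_int8_model, batch) * 1000
# The benchmark UT would then assert that int8_ms stays below fp32_ms.
```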
  26. 26 Nov, 2019 1 commit
  27. 25 Nov, 2019 1 commit
  28. 20 Nov, 2019 1 commit
  29. 16 Nov, 2019 1 commit
  30. 14 Nov, 2019 1 commit
  31. 06 Nov, 2019 1 commit
  32. 05 Nov, 2019 1 commit
  33. 16 Oct, 2019 1 commit
  34. 11 Oct, 2019 1 commit
  35. 10 Oct, 2019 1 commit
    • [Bug-fix][1.6] Improve QAT accuracy (#20174) · 540935a8
      Michał Gallus committed
      * Leave fake quantization around mul
      
      * Replace Fake with Real Quantized Mul
      
      * Gather all scales from fake_quantize_ops
      
      * Enable uint8 in conv_relu tensors
      
      * Disable int8 mul and restore fake mul
      
      * Fix bug for running QAT on VGG16 and 19
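One step in the entry above gathers quantization scales from the fake_quantize ops that QAT training leaves in the graph. A hedged sketch of that collection step, using plain dicts instead of Paddle's IR graph objects:

```python
def gather_fake_quantize_scales(ops):
    """Collect per-tensor scales from fake_quantize* ops into a name -> scale map.

    `ops` is an illustrative list of dicts; the real pass walks Paddle's IR graph.
    """
    scales = {}
    for op in ops:
        if op["type"].startswith("fake_quantize"):
            # the fake op carries the scale calibrated during QAT training
            scales[op["output"]] = op["scale"]
    return scales
```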
  36. 28 Sep, 2019 1 commit
    • Follow comments on merged QAT PR 18970 (#19979) · 9de67725
      bingyanghuang committed
      * Follow Wangzhen's comment in PR 18970, test=develop
      
      * Review comments, test=develop
      
      * Leave fake quantization around mul
      
      test=develop
      
      * Replace Fake with Real Quantized Mul
      
      test=develop
      
      * Fix bug in quantize placement pass
      
      Nodes in the graph are now checked by operator type instead of by node name when they are to be marked for quantization. test=develop
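The last bullet above describes switching the quantize placement pass from matching node names to checking operator types. A minimal sketch of that idea, assuming dict-like node objects; it is not the actual pass code:

```python
# Illustrative set of op types eligible for INT8 quantization (not Paddle's exact list).
QUANTIZABLE_OP_TYPES = {"conv2d", "depthwise_conv2d", "mul", "pool2d"}

def mark_for_quantization(nodes):
    """Mark ops by their registered type rather than by their (mangled, non-unique) name."""
    for node in nodes:
        # illustrative node objects: dicts holding an op type and an attribute map
        if node.get("op_type") in QUANTIZABLE_OP_TYPES:
            node.setdefault("attrs", {})["quantization_type"] = "qat_int8"
    return nodes
```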
  37. 25 Sep, 2019 1 commit
    • Add support for new QAT models (#18970) · 4286a627
      Wojciech Uss committed
      * Add support for new QAT models
      
      test=develop
      Co-Authored-By: Michał Gallus <michal.gallus@intel.com>
      Co-Authored-By: Wojciech Uss <wojciech.uss@intel.com>
      
      * fixed fps results
      
      test=develop
      
      * fix top5 accuracy drop problem
      
      * updated for new QAT models
      
      * skip quantizing average pooling - dirty but working
      
      * add missing pass
      
      * added missing conv+brelu fuse pass
      
      * removed a call to non-existent pass
      
      test=develop
      
      * renamed pass
      
      test=develop
      
      * Adjust finding pooling scale to newest QAT models
      
      * Remove unnecessary code from quantization_mkldnn_pass
      
      * Copy Pooling input scale to output scale in QAT
      
      * Refactor & remove unused code in QAT
      
      * Incorporate fp32 FC into QAT
      
      test=develop
      
      * Enable graph drawing with debug flag
      
      test=develop
      
      * Add tests for QATv2
      
      * Fix paths for QATv2 models
      
      test=develop
      
      * Add option to save transformed int8 qat model
      
      test=develop
      
      * Remove redundant lines from qat mkldnn pass
      
      test=develop
      
      * Delegate disablement of avg pooling to qat
      
      test=develop
      
      * fix CI bug, test=develop
      
      * Follow Wangzhen's Review, test=develop
      
      * Update API.spec
      
      test=develop
      
      * Name False in (is_unsigned, TensorScale) tuple
      
      test=develop
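Among the changes above, the pooling input scale is copied to the pooling output, since pooling does not alter value ranges. A hedged sketch of that scale propagation, with a plain name-to-(is_unsigned, scale) dict standing in for Paddle's internal scale map:

```python
def propagate_pool_scales(pool_ops, scales):
    """Copy each pooling input's (is_unsigned, scale) entry to its output tensor.

    `pool_ops` and `scales` are illustrative stand-ins, not Paddle's data structures.
    """
    for op in pool_ops:
        in_name, out_name = op["input"], op["output"]
        if in_name in scales and out_name not in scales:
            # pooling does not change value ranges, so the input scale is reused
            scales[out_name] = scales[in_name]
    return scales
```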
  38. 24 Sep, 2019 1 commit
    • [PaddleSlim] Enhance compressor API in PaddleSlim (#19894) · bdb3e376
      whs committed
      
      1. Support a customized eval function instead of an eval program.
      2. Fix loading checkpoint in quantization strategy.
      3. Support saving eval model when saving a checkpoint.
      4. Fix decoder of loading context in PaddleSlim.
      5. Fix restoring from the checkpoint of uniform prune strategy.
      6. Support saving eval model and infer model during training.
      7. Add unit tests for saving the eval model, saving the infer model, and restoring uniform pruning from a checkpoint.
      8. Fix pruning of depthwise_conv_grad op by updating the groups.
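Point 1 above replaces the eval program with a user-supplied eval function. A hedged sketch of what such a callable could look like; the `run_batch` parameter and the `Compressor(..., eval_func=...)` wiring are assumptions, not the exact PaddleSlim 1.x API:

```python
import numpy as np

def top1_eval_fn(run_batch, reader):
    """User-supplied evaluation callable returning top-1 accuracy (hypothetical signature)."""
    correct = total = 0
    for batch in reader():
        logits, labels = run_batch(batch)   # model forward pass, however the caller runs it
        correct += int((np.argmax(logits, axis=1) == labels.flatten()).sum())
        total += labels.shape[0]
    return correct / max(total, 1)

# Illustrative wiring only; the real PaddleSlim Compressor API may differ:
# compressor = Compressor(..., eval_func=top1_eval_fn)
```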
  39. 23 Sep, 2019 1 commit