1. 06 Jun 2019, 2 commits
    • ae576f3c
    • INT8 MKL-DNN v2 integrate to slim (#17634) · 993c703b
      Committed by 翟飞跃
      * refactor PR 16865
      
      * delete mergetool files
      
      * test=develop
      
      * test=develop
      
      * test=develop
      
      * test=develop
      
      * create dir for int8 model before calling SaveOptimModel
      
      * test=develop
      
      * mkldnn int8 only supports Linux; test=develop
      
      * refine code; test=develop
      
      * remove comment; test=develop
      
      * refine code; test=develop
      
      * fix bug; test=develop
      
      * add exception for mkldnn_post_training_strategy
      
      * reuse int8v2 CAPI dataset; test=develop
      
      * fix accuracy check bug; test=develop
      
      * remove tab
      
      * convert files to unix format
      
      * test=develop
      
      * reduce CI time;test=develop
      
      * reduce CI time and refine code;test=develop
      
      * refine comment; test=develop
      
      * add cmake FLAGS;test=develop
      
      * remove predict_num;test=develop
      993c703b
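The slim integration above is post-training INT8 quantization: scales are collected from calibration data and FP32 tensors are mapped onto int8. A minimal, self-contained sketch of that symmetric scale math (illustrative only; the function names are made up, this is not the slim or MKL-DNN code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Symmetric per-tensor scale from calibration data: map the observed
// float range onto the int8 range [-127, 127].
float ComputeInt8Scale(const std::vector<float>& calibration_values) {
  float max_abs = 0.f;
  for (float v : calibration_values) max_abs = std::max(max_abs, std::fabs(v));
  return max_abs > 0.f ? 127.f / max_abs : 1.f;
}

// Quantize a float tensor with a precomputed scale.
std::vector<int8_t> QuantizeToInt8(const std::vector<float>& x, float scale) {
  std::vector<int8_t> q(x.size());
  for (size_t i = 0; i < x.size(); ++i) {
    float v = std::round(x[i] * scale);
    q[i] = static_cast<int8_t>(std::max(-127.f, std::min(127.f, v)));
  }
  return q;
}
```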
  2. 03 Jun 2019, 1 commit
  3. 29 May 2019, 1 commit
  4. 28 May 2019, 2 commits
    • Improve mobilenetv2 INT8 performance by using INT8 relu as post-op (#17570) · 04b6c29e
      Committed by lidanqing
      * add INT8 conv+relu6 fuse and enable mobilenetv2 INT8 test
      test=develop
      
      * change false and 0.0 to fuse_brelu and brelu_threshold
      test=develop
      
      change the "fuse_relu||fuse_brelu" to "unsigned_output"
      test=develop
      
      * Use relu instead of brelu as INT8 post-op because INT8 brelu is not enabled in mkldnn v0.18
      test=develop
      
      * continuous-integration fix
      test=develop
      04b6c29e
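A rough sketch of the post-op decision described in this entry, assuming the MKL-DNN v0.x C++ attribute API (the convolution setup around it is omitted and the real kernel code may differ): with fuse_brelu the bounded variant would be requested, but since INT8 bounded relu is not available in v0.18 the plain relu post-op is used instead.

```cpp
#include <mkldnn.hpp>

// Build the primitive_attr carrying the activation post-op for a convolution.
// Bounded relu (alpha = brelu_threshold) is what relu6 maps to; plain relu is
// the fallback used for INT8 with MKL-DNN v0.18.
mkldnn::primitive_attr MakeConvActivationAttr(bool fuse_relu, bool fuse_brelu,
                                              float brelu_threshold) {
  mkldnn::primitive_attr attr;
  mkldnn::post_ops ops;
  if (fuse_brelu) {
    ops.append_eltwise(1.0f, mkldnn::algorithm::eltwise_bounded_relu,
                       brelu_threshold, 0.0f);
  } else if (fuse_relu) {
    ops.append_eltwise(1.0f, mkldnn::algorithm::eltwise_relu, 0.0f, 0.0f);
  }
  attr.set_post_ops(ops);
  return attr;
}
```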
    • [MKL-DNN] conv_transpose mkldnn bias pass (#17644) · 6d8075ec
      Committed by Jacek Czaja
      * - changes to graph detector
      
      - Changes to pass
      
      - Added ut for new pass
      
      - use_pass
      
      - Added pass to mkldnn passes
      
      - fix to registration
      
      - improved verbose messaging for conv bias passes
      
      - Lint fixes
      
      test=develop
      
      * - Lint fixes
      
      test=develop
      6d8075ec
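What the new pass folds away is the standalone bias add after conv_transpose: the elementwise_add of a per-output-channel bias becomes a bias input of the conv_transpose op, so the MKL-DNN primitive applies it directly. The fused arithmetic is simply the following (plain C++, illustrative only):

```cpp
#include <vector>

// Before the pass:  out = elementwise_add(conv_transpose(x, w), bias)
// After the pass:   out = conv_transpose(x, w, bias)
// The addition being absorbed is a per-output-channel bias:
void AddBiasPerChannel(std::vector<float>* out, const std::vector<float>& bias,
                       int channels, int spatial_size) {
  for (int c = 0; c < channels; ++c)
    for (int i = 0; i < spatial_size; ++i)
      (*out)[c * spatial_size + i] += bias[c];
}
```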
  5. 27 May 2019, 3 commits
    • add Concat quantization (#17448) · 96845d21
      Committed by Sylwester Fraczek
      * add Concat quantization
      add unit test for quantizing concat
      fix for wrong value when the input is not in the map of calculated scales
      add use_quantizer to concat_op.cc
      add scale_algo rules for concat
      
      test=develop
      
      * missing fix for multiple inputs quantize-squash
      
      * wojtuss review fix: adding comment
      
      test=develop
      96845d21
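Background for the scale_algo rule mentioned above: a quantized concat needs one output scale that is valid for every input. A common choice (a sketch of the idea, not necessarily the exact rule the pass implements) is to take the smallest input scale, i.e. the one belonging to the widest range, and requantize the other inputs to it:

```cpp
#include <algorithm>
#include <vector>

// With scale_i = 127 / max_abs_i, the input with the widest float range has
// the smallest scale; using that scale for the concat output keeps every
// input representable in int8.
float ConcatOutputScale(const std::vector<float>& input_scales) {
  return *std::min_element(input_scales.begin(), input_scales.end());
}

// Inputs whose own scale differs must be requantized:
//   q_out = round(q_in * out_scale / in_scale)
```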
    • Fix the bug in the AnalysisPredictor and add more directions about io APIs. (#17639) · 8bd651b7
      Committed by Zhen Wang
      * fix the bug that sub_scope_ may be null in AnalysisPredictor::Run.
      
      * add more directions about io APIs' docs.
      
      * update the API.spec. test=develop test=document_preview
      8bd651b7
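The bug fix boils down to not dereferencing sub_scope_ when it was never created. A minimal sketch of that kind of guard (plain C++; the actual predictor uses Paddle's own enforce machinery and types):

```cpp
#include <stdexcept>

// Fail fast with a clear message instead of crashing later in Run()
// when the per-run scope is missing.
template <typename Scope>
void EnsureSubScope(const Scope* sub_scope) {
  if (sub_scope == nullptr) {
    throw std::runtime_error(
        "sub_scope_ is null; the AnalysisPredictor was not initialized correctly");
  }
}
```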
    • Code clean of Allocator (#17602) · 4aa931dd
      Committed by Zeng Jinle
      * Revert "Revert "Fix allocator bug""
      
      This reverts commit 174d0d0b.
      
      * Revert "fix travis ci"
      
      This reverts commit 5656fa9f.
      
      test=develop
      
      * add inlined_vector.h, test=develop
      
      * add inlined_vector_test,test=develop
      
      * clean code of allocator,test=develop
      
      * delete zero_size_allocator.h,test=develop
      
      * fix failed unittest,test=develop
      4aa931dd
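The inlined_vector.h introduced here is a small-vector container: the first few elements live in a fixed-size inline buffer and only the overflow goes to the heap, which avoids allocations on the hot allocator path. A self-contained sketch of the idea (not Paddle's actual implementation):

```cpp
#include <cstddef>
#include <vector>

// Minimal "inlined vector": up to N elements are stored inline,
// anything beyond that spills into a heap-backed std::vector.
template <typename T, size_t N>
class InlinedVector {
 public:
  void push_back(const T& value) {
    if (size_ < N) {
      inline_storage_[size_] = value;
    } else {
      heap_storage_.push_back(value);
    }
    ++size_;
  }
  T& operator[](size_t i) {
    return i < N ? inline_storage_[i] : heap_storage_[i - N];
  }
  size_t size() const { return size_; }

 private:
  T inline_storage_[N];
  std::vector<T> heap_storage_;
  size_t size_ = 0;
};
```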
  6. 25 May 2019, 1 commit
    • TRT: Support set dynamic range in int8 mode. (#17524) · 61221ebc
      Committed by Zhaolong Xing
      * fluid int8 train and trt int8 predict align.
      trt int8 predict init
      op converter
      
      * 2. align fluid int8 train and trt int8 inference.
      enhance quant dequant fuse pass
      enhance op converter, trt engine, trt engine op, trt subgraph pass.
      
      * 3. add delete_quant_dequant_pass for trt
      
      test=develop
      
      * 4. add the missing file
      test=develop
      
      * 5. I modified the C++ interface but forgot to modify the pybind code
      fix the IS_TRT_VERSION_GE bug, and fix elementwise op converter
      test=develop
      61221ebc
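Supporting "set dynamic range in int8 mode" means handing TensorRT the scales learned during fluid INT8 training instead of running TRT's own calibrator. A hedged sketch against the TensorRT C++ API (the name-to-range map and the surrounding subgraph-pass plumbing are assumptions, not the actual Paddle code):

```cpp
#include <NvInfer.h>
#include <string>
#include <unordered_map>

// For every layer output whose max-abs value is known from training,
// tell TensorRT its symmetric dynamic range so no calibration run is needed.
void SetDynamicRanges(nvinfer1::INetworkDefinition* network,
                      const std::unordered_map<std::string, float>& max_abs) {
  for (int i = 0; i < network->getNbLayers(); ++i) {
    nvinfer1::ILayer* layer = network->getLayer(i);
    for (int j = 0; j < layer->getNbOutputs(); ++j) {
      nvinfer1::ITensor* tensor = layer->getOutput(j);
      auto it = max_abs.find(tensor->getName());
      if (it != max_abs.end()) {
        tensor->setDynamicRange(-it->second, it->second);
      }
    }
  }
}
```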
  7. 24 May 2019, 2 commits
    • [MKL-DNN] Add Fully Connected Op for inference only (#15226) · 0c39b97b
      Committed by Michał Gallus
      * fuse mul and elementwise add to fc (a reference sketch of the fused computation follows this entry)
      
      * Reimplement the FC forward operator
      
      * Fix FC MKLDNN integration by transposing weights
      
      * Add FC MKLDNN Pass
      
      test=develop
      
      * FC MKLDNN Pass: change memcpy to std::copy
      
      * Fix MKLDNN FC handling of mismatched input and weights dims
      
      * Lower tolerance for MKL-DNN in resnet50 test
      
      test=develop
      
      * Adjust FC to support MKLDNN Op placement
      
      test=develop
      
      * Adjust Placement Op to set use_mkldnn attribute for graph
      
      test=develop
      
      * MKLDNN FC: fix weights format so that gemm version is called
      
      test=develop
      
      * FC MKLDNN: Remove tolerance decrease from tester_helper
      
      * FC MKL-DNN: Refactor the code, change input reorder to weight reorder
      
      * MKL-DNN FC: Introduce operator caching
      
      test=develop
      
      * FC MKL-DNN: Fix the tensor type in ExpectedKernelType
      
      test=develop
      
      * FC MKL-DNN: fix style changes
      
      test=develop
      
      * FC MKL-DNN: fallback to native on non-supported dim sizes
      
      test=develop
      
      * FC MKLDNN: fix CMake paths
      
      test=develop
      
      * FC MKLDNN: Refine placement pass graph mkldnn attribute
      
      test=develop
      
      * Fix Transpiler error for fuse_conv_eltwise
      
      test=develop
      
      * Fix missing STL includes in files
      
      test=develop
      
      * FC MKL-DNN: Enable new output size computation
      
      Also, refine pass to comply with newest interface.
      test=develop
      
      * FC MKL-DNN: enable only when fc_mkldnn_pass is enabled
      
      * FC MKL-DNN: Allow Weights to use oi or io format
      
      * FC MKL-DNN: Adjust UT to work with correct dims
      
      test=develop
      
      * Enable MKL DEBUG for resnet50 analyzer
      
      test=develop
      
      * FC MKL-DNN: Improve Hashing function
      
      test=develop
      
      * FC MKL-DNN: Fix shape for fc weights in transpiler
      
      * FC MKL-DNN: Update input pointer in re-used fc primitive
      
      * Add log for not handling fc fuse for unsupported dims
      
      test=develop
      
      * FC MKL-DNN: Move transpose from pass to Op Kernel
      
      test=develop
      
      * FC MKL-DNN: Disable transpose in unit test
      
      test=develop
      
      * FC MKL-DNN: Remove fc_mkldnn_pass from default list
      
      * Correct Flag for fake data analyzer tests
      
      test=develop
      
      * FC MKL-DNN: Add comment about fc mkldnn pass disablement
      
      test=develop
      
      * FC MKL-DNN: Disable fc in int8 tests
      
      test=develop
      0c39b97b
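For reference, the fusion this whole series builds on replaces a mul followed by an elementwise_add with a single fc op, Out = X·W + b, which the MKL-DNN inner-product primitive then executes (and caches). A plain C++ statement of the fused computation (reference semantics only, not the MKL-DNN kernel):

```cpp
#include <vector>

// Reference semantics of the fused fc op:
//   Out[m][n] = sum_k X[m][k] * W[k][n] + b[n]
// Before the fuse pass this was two ops: mul (X * W) then elementwise_add (+ b).
std::vector<float> FcReference(const std::vector<float>& x,  // M x K, row-major
                               const std::vector<float>& w,  // K x N, row-major
                               const std::vector<float>& b,  // N
                               int M, int K, int N) {
  std::vector<float> out(static_cast<size_t>(M) * N, 0.f);
  for (int m = 0; m < M; ++m)
    for (int k = 0; k < K; ++k)
      for (int n = 0; n < N; ++n)
        out[m * N + n] += x[m * K + k] * w[k * N + n];
  for (int m = 0; m < M; ++m)
    for (int n = 0; n < N; ++n)
      out[m * N + n] += b[n];
  return out;
}
```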
    • Conv concat relu quantization (#17466) · 5b2a3c4b
      Committed by Sylwester Fraczek
      * add conv_concat_relu fuse
      
      test=develop
      
      * add test code
      
      test=develop
      
      * added missing include with unordered_map
      
      test=develop
      
      * review fixes for wojtuss
      
      test=develop
      
      * remove 'should (not) be fused' comment statements
      
      one of them was invalid anyway
      
      test=develop
      5b2a3c4b
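The conv_concat_relu fuse is legal because relu acts element by element, so relu(concat(A, B)) equals concat(relu(A), relu(B)); the relu can therefore be absorbed into the convolutions feeding the concat and the standalone relu op dropped. A tiny check of that identity (illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// relu distributes over concatenation because it is elementwise.
std::vector<float> Relu(std::vector<float> v) {
  for (float& x : v) x = std::max(x, 0.f);
  return v;
}

std::vector<float> Concat(const std::vector<float>& a, const std::vector<float>& b) {
  std::vector<float> out(a);
  out.insert(out.end(), b.begin(), b.end());
  return out;
}

int main() {
  std::vector<float> a{-1.f, 2.f}, b{3.f, -4.f};
  assert(Relu(Concat(a, b)) == Concat(Relu(a), Relu(b)));  // identity the fuse relies on
  return 0;
}
```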
  8. 22 May 2019, 1 commit
    • Enable the convolution/relu6(bounded_relu) fusion for FP32 on Intel platform. (#17130) · 2281ebf0
      Committed by guomingz
      * Relu6 is the bottleneck op for MobileNet-v2. Since MKL-DNN supports the conv/relu6 fusion, we implement this fusion via a graph fuse pass (see the sketch after this entry). INT8 support for this fusion will only arrive in MKL-DNN v0.20, so this PR focuses on the FP32 optimization.
      
      The table below shows the benchmark (FPS) measured on SKX-8180 (28 cores):
      Batch size | with fusion | without fusion
      -- | -- | --
      1 | 214.7 | 53.4
      50 | 1219.727 | 137.280
      
      test=develop
      
      * Fix the format issue
      
      test=develop
      
      * Add the missing nolint comments.
      
      test=develop
      
      * Fix the typos.
      
      test=develop
      
      * Register the conv_brelu_mkldnn_fuse_pass for the MKLDNN engine.
      
      test=develop
      
      * Adjust the indentation.
      
      test=develop
      
      * Add the test_conv_brelu_mkldnn_fuse_pass case.
      
      test=develop
      
      * Slightly update the code per Baidu comments.
      Embed the parameter definition into the code;
      that will make the code easier to understand.
      
      test=develop
      2281ebf0
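For reference, relu6 is the bounded relu with threshold 6, which is why the pass carries fuse_brelu and brelu_threshold attributes:

```cpp
#include <algorithm>

// relu6(x) = min(max(x, 0), 6): the bounded relu that the conv/relu6 fuse pass
// folds into the convolution as a post-op with brelu_threshold = 6.0f.
inline float Relu6(float x) { return std::min(std::max(x, 0.0f), 6.0f); }
```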
  9. 21 May 2019, 1 commit
  10. 20 May 2019, 2 commits
  11. 16 May 2019, 1 commit
  12. 09 May 2019, 1 commit
    • fix: (#17279) · 7a3bb061
      Committed by Zhaolong Xing
      1. multi-card inference occupancy
      2. facebox model inference occupies too much
      test=develop
      7a3bb061
  13. 08 May 2019, 1 commit
  14. 07 May 2019, 2 commits
    • Cherry-pick benchmark related changes from release/1.4 (#17156) · a72dbe9a
      Committed by 石晓伟
      * cherry-pick commit from 88770542
      
      * cherry-pick commit from 3f0b97df
      
      * cherry-pick from 16691:Anakin subgraph support yolo_v3 and faster-rcnn
      
      (cherry picked from commit 8643dbc2)
      
      * Cherry-Pick from 16662 : Anakin subgraph cpu support
      
      (cherry picked from commit 7ad182e1)
      
      * Cherry-pick from 1662, 16797.. : add anakin int8 support
      
      (cherry picked from commit e14ab180)
      
      * Cherry-pick from 16813 : change singleton to graph RegistBlock
      test=release/1.4
      
      (cherry picked from commit 4b9fa423)
      
      * Cherry Pick : 16837 Support ShuffleNet and MobileNet-v2
      
      Support ShuffleNet and MobileNet-v2, test=release/1.4
      
      (cherry picked from commit a6fb066f)
      
      * Cherry-pick : anakin subgraph add opt config layout argument #16846
      test=release/1.4
      
      (cherry picked from commit 8121b3ec)
      
      * 1. add shuffle_channel_detect
      
      (cherry picked from commit 6efdea89)
      
      * update shuffle_channel op convert, test=release/1.4
      
      (cherry picked from commit e4726a06)
      
      * Modify symbol export rules
      
      test=develop
      a72dbe9a
    • call SetNumThreads every time to avoid missing OMP thread setting (#17224) · 54636a19
      Committed by Leo Zhao
      * call SetNumThreads every time to avoid missing OMP thread setting
      
      resolve #17153
      test=develop
      
      * add paddle_num_threads into config for test_analyzer_pyramid_dnn
      
      resolve #17153
      test=develop
      54636a19
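The reason SetNumThreads must be re-applied on every call is that omp_set_num_threads only affects the calling thread, so a prediction dispatched from a thread that never set it silently falls back to the default thread count. A minimal sketch of the pattern (standard OpenMP calls; the Paddle-side wiring and the paddle_num_threads config are only referenced, not reproduced):

```cpp
#include <omp.h>

// Re-apply the configured thread count right before the parallel work runs,
// so the setting is not lost when the call arrives on a different thread.
void RunWithConfiguredThreads(int num_threads) {
  omp_set_num_threads(num_threads);  // affects only the calling thread
  #pragma omp parallel
  {
    // ... per-thread inference work would go here ...
  }
}
```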
  15. 05 May 2019, 1 commit
  16. 23 Apr 2019, 1 commit
  17. 22 Apr 2019, 1 commit
    • add parallel build script to ci … (#16901) · d9991dcc
      Committed by wopeizl
      * add parallel build script to ci test=develop
      * 1. classify the test cases into single-card / two-card / multi-card types
        2. run test cases according to the run type
      d9991dcc
  18. 19 Apr 2019, 1 commit
  19. 15 Apr 2019, 1 commit
  20. 12 Apr 2019, 1 commit
  21. 11 Apr 2019, 1 commit
  22. 09 Apr 2019, 1 commit
  23. 03 Apr 2019, 3 commits
  24. 02 Apr 2019, 2 commits
  25. 29 Mar 2019, 1 commit
  26. 28 Mar 2019, 3 commits
  27. 26 Mar 2019, 1 commit
  28. 25 Mar 2019, 1 commit