1. 04 Dec, 2019 (1 commit)
  2. 07 Nov, 2019 (1 commit)
  3. 18 Oct, 2019 (1 commit)
    • [MKL-DNN] Added mkl-dnn cache clearing when creating Executor instance (#20241) (#20693) · 2099618d
      Committed by Michał Gallus
      test=release/1.6
      
      * - Flushing mkl-dnn cache
      
      test=develop
      
      - Disabled clearing cache for LoadModel
      
      - Added clearing of mkl-dnn cache when Executor is created
      
      test=develop
      
      - Do not clear for GPU places
      
      test=develop
      
      - compilation fix
      
      test=develop
      
      * - Moved clearing of mkl-dnn cache in destructor of executor
      
      test=develop
      
      * - Compilation fix
      
      test=develop
      
      - Reverted conditional clearing of mkl-dnn cache in Executor's
        destructor
      
      test=develop
      
      - compilation fix
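The commit above flushes the mkl-dnn primitive cache when an Executor is torn down, and skips the flush for GPU places. Below is a minimal, self-contained C++ sketch of that pattern; `Executor`, `Place`, and `PrimitiveCache` are simplified stand-ins invented for illustration, not Paddle's actual classes or the real mkl-dnn blob-cache API.

```cpp
#include <iostream>
#include <mutex>
#include <string>
#include <unordered_map>

// Simplified stand-in for the real framework's place abstraction.
enum class Place { kCPU, kGPU };

// Process-wide cache of reusable primitives, keyed by a string that
// encodes shapes/attributes (analogous to an mkl-dnn blob cache).
class PrimitiveCache {
 public:
  static PrimitiveCache& Instance() {
    static PrimitiveCache cache;
    return cache;
  }
  void Put(const std::string& key, int primitive) {
    std::lock_guard<std::mutex> guard(mu_);
    blobs_[key] = primitive;
  }
  void Clear() {
    std::lock_guard<std::mutex> guard(mu_);
    blobs_.clear();
  }
  size_t Size() {
    std::lock_guard<std::mutex> guard(mu_);
    return blobs_.size();
  }

 private:
  std::mutex mu_;
  std::unordered_map<std::string, int> blobs_;  // int stands in for a primitive handle
};

class Executor {
 public:
  explicit Executor(Place place) : place_(place) {}
  // Flush cached primitives when the Executor is destroyed, but only for
  // CPU places: GPU runs never populate this cache, so clearing it there
  // would only invalidate entries other CPU executors may still want.
  ~Executor() {
    if (place_ == Place::kCPU) {
      PrimitiveCache::Instance().Clear();
    }
  }

 private:
  Place place_;
};

int main() {
  PrimitiveCache::Instance().Put("conv2d/3x3/f32", 42);
  {
    Executor cpu_exec(Place::kCPU);
  }  // destructor runs here and clears the cache
  std::cout << "cached entries: " << PrimitiveCache::Instance().Size() << "\n";
  return 0;
}
```

Clearing in the destructor, as the commit finally settles on, means every CPU Executor tear-down leaves the cache empty for the next instance rather than tying the flush to model loading.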
  4. 14 Sep, 2019 (1 commit)
  5. 04 Sep, 2019 (1 commit)
  6. 03 Sep, 2019 (1 commit)
  7. 29 Aug, 2019 (1 commit)
  8. 15 Aug, 2019 (1 commit)
  9. 10 Jun, 2019 (1 commit)
  10. 22 May, 2019 (1 commit)
    • Enable the convolution/relu6 (bounded_relu) fusion for FP32 on Intel platform. (#17130) · 2281ebf0
      Committed by guomingz
      * Relu6 is the bottleneck op for MobileNet-v2. As MKL-DNN supports the conv/relu6 fusion, we implement the fusion via a fuse pass. Since int8 support for this fusion will only arrive in MKLDNN v0.20, this PR focuses on the fp32 optimization.
      
      The table below shows the benchmark (FPS) measured on skx-8180 (28 cores):
      Batch size | with fusion | without fusion
      -- | -- | --
      1 | 214.7 | 53.4
      50 | 1219.727 | 137.280
      
      test=develop
      
      * Fix the format issue
      
      test=develop
      
      * Add the missing nolint comments.
      
      test=develop
      
      * Fix the typos.
      
      test=develop
      
      * Register the conv_brelu_mkldnn_fuse_pass for the MKLDNN engine.
      
      test=develop
      
      * Adjust the indentation.
      
      test=develop
      
      * Add the test_conv_brelu_mkldnn_fuse_pass case.
      
      test=develop
      
      * Slightly update the code per Baidu comments.
      Let the parameter definition be embedded in the code.
      That will make the code easier to understand.
      
      test=develop
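The commit above folds relu6 (bounded relu) into the preceding convolution through a graph fuse pass. The C++ sketch below illustrates the idea on a toy linear op list rather than Paddle's real Graph/Pass machinery; `OpNode`, `ConvBReluFusePass`, and the `fuse_brelu`/`fuse_brelu_threshold` attribute names are assumptions made for illustration.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Toy stand-in for a graph node: just an op type plus attributes.
struct OpNode {
  std::string type;
  std::map<std::string, float> attrs;
};

// Sketch of a conv+relu6 fuse pass over a linear op sequence: whenever a
// conv2d is immediately followed by relu6, fold the activation into the
// conv node as fuse_brelu / fuse_brelu_threshold attributes.
std::vector<OpNode> ConvBReluFusePass(const std::vector<OpNode>& ops) {
  std::vector<OpNode> fused;
  for (size_t i = 0; i < ops.size(); ++i) {
    if (ops[i].type == "conv2d" && i + 1 < ops.size() &&
        ops[i + 1].type == "relu6") {
      OpNode conv = ops[i];
      conv.attrs["fuse_brelu"] = 1.0f;
      // relu6 clamps activations to [0, 6]; keep that bound on the conv.
      conv.attrs["fuse_brelu_threshold"] = 6.0f;
      fused.push_back(conv);
      ++i;  // skip the relu6 node that was folded away
    } else {
      fused.push_back(ops[i]);
    }
  }
  return fused;
}

int main() {
  std::vector<OpNode> graph = {{"conv2d", {}}, {"relu6", {}}, {"pool2d", {}}};
  std::vector<OpNode> out = ConvBReluFusePass(graph);
  for (const OpNode& op : out) {
    std::cout << op.type
              << (op.attrs.count("fuse_brelu") ? " (fused brelu)" : "")
              << "\n";
  }
  return 0;
}
```

Fusing the activation this way removes one kernel launch and one pass over the activations per conv/relu6 pair, which is consistent with the large FPS gap reported in the benchmark table.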
  11. 16 Apr, 2019 (1 commit)
    • [MKL-DNN] Added reusing of primitive descriptors (fp32) (#16667) · 87a44b11
      Committed by Jacek Czaja
      * - Reuse of conv PD
      
      - conv transpose pd reused
      
      - Added PD reusing of softmax and Batch Norm
      
      - Refactoring and removal of unneeded routines in mkl-dnn ops
      
      test=develop
      
      - Fix to reusing conv
      
      test=develop
      
      - Lint fixes
      
      test=develop
      
      - Further lint fixes
      
      test=develop
      
      - Lint fixes
      
      test=develop
      
      - lint fixes
      
      test=develop
      
      - Lint workaround
      
      test=develop
      
      * - Fix after review on including boost as third party header
      
      test=develop
      
      * - Fix after review. Name change to something more descriptive
      
      test=develop
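The commit above reuses primitive descriptors (conv, conv transpose, softmax, batch norm) across iterations instead of rebuilding them every time. Below is a minimal sketch of such key-based reuse; `ConvPrimitiveDesc`, `MakeKey`, and `AcquireConvPrimitiveDesc` are hypothetical stand-ins, since real mkl-dnn descriptor construction is the expensive step this caching avoids repeating.

```cpp
#include <functional>
#include <iostream>
#include <memory>
#include <sstream>
#include <string>
#include <unordered_map>
#include <vector>

// Placeholder for an mkl-dnn style primitive descriptor, which is
// expensive to create (implementation selection runs internally).
struct ConvPrimitiveDesc {
  explicit ConvPrimitiveDesc(const std::string& key) {
    std::cout << "building primitive descriptor for " << key << "\n";
  }
};

// Build a cache key from the shapes/attributes that determine the
// descriptor; identical keys can safely share one descriptor.
std::string MakeKey(const std::vector<int>& in_dims, int groups,
                    const std::string& suffix) {
  std::ostringstream os;
  for (int d : in_dims) os << d << "x";
  os << "g" << groups << "-" << suffix;
  return os.str();
}

// Acquire-or-create: return the cached descriptor if present,
// otherwise build it once via the supplied factory and remember it.
std::shared_ptr<ConvPrimitiveDesc> AcquireConvPrimitiveDesc(
    std::unordered_map<std::string, std::shared_ptr<ConvPrimitiveDesc>>* cache,
    const std::string& key,
    const std::function<std::shared_ptr<ConvPrimitiveDesc>()>& create) {
  auto it = cache->find(key);
  if (it != cache->end()) return it->second;
  auto pd = create();
  (*cache)[key] = pd;
  return pd;
}

int main() {
  std::unordered_map<std::string, std::shared_ptr<ConvPrimitiveDesc>> cache;
  const std::string key = MakeKey({1, 3, 224, 224}, 1, "conv2d_fwd");
  for (int iter = 0; iter < 3; ++iter) {
    auto pd = AcquireConvPrimitiveDesc(&cache, key, [&key]() {
      return std::make_shared<ConvPrimitiveDesc>(key);
    });
    (void)pd;  // in real code, the primitive would be created from pd
  }  // "building..." is printed only once: the descriptor is reused
  return 0;
}
```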
  12. 28 Mar, 2019 (1 commit)
    • [MKL-DNN] Tensor modifications revert (#16462) · 26323274
      Committed by Jacek Czaja
      * Revert "[MKL-DNN] Fix to crash of Transformer when mkldnn is to be used (#16233)"
      
      This reverts commit 13816dd4, apart from the part that enables Transformer for MKL-DNN.
      
      * Revert "- MKL-DNN pooling updated to set_prim_desc"
      
      This reverts commit c63f6b20.
      
      Conflicts:
      	paddle/fluid/operators/mkldnn/concat_mkldnn_op.cc
      
      * Revert "[MKL-DNN] MKL-DNN specific Tensor modification (#15429)"
      
      test=develop
      
      This reverts commit dec9cf53.
      
      * - concat compilation fix
      
      - lint
      
      test=develop
      
      - Lint fixes
      
      test=develop
      
      - Lint fixes
      
      test=develop
      
      - Fix Transpose MKLDNN op
      
      test=develop
  13. 26 Feb, 2019 (1 commit)
    • - MKL-DNN pooling updated to set_prim_desc · c63f6b20
      Committed by Jacek Czaja
      - MKLDNN ops revisited
      
      - disabled softmax modifications
      
      - disabled elementwise_add
      
      - reverted LRN modifications
      
      - reverted SUM primitive
      
      - Partial reviving of softmax
      
      - Enable softmax
      
      - Softmax changes
      
      - LRN is back
      
      - LRN partially disabled
      
      - LRN is back
      
      - LRN fix
      
      - compilation fixes
      
      - Sum fixed(hopefully)
      
      - Enabling (partially) elementwise_add
      
      - Fixes to elementwise_add
      
      - Lint fixes
      
      quantize fix
      
      - compilation fix
      
      test=develop
      
      Disabling pooling
      
      - Disabled quantize op
      
      test=develop
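The commit above moves pooling and several other ops to a `set_prim_desc` flow, where the memory layout chosen by MKL-DNN is recorded on the output tensor for the next op to consume. The sketch below shows the idea with a simplified `Tensor`; `MemoryPrimDesc` and the `nChw16c` layout string are assumptions for illustration, not the framework's actual types.

```cpp
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Stand-in for an mkl-dnn memory primitive descriptor: it records the
// in-memory layout the library chose for a tensor (e.g. a blocked format).
struct MemoryPrimDesc {
  std::string layout;
};

// Simplified tensor that can carry the layout descriptor alongside its
// data, so the next MKL-DNN op can consume the buffer without a reorder.
class Tensor {
 public:
  void set_prim_desc(const MemoryPrimDesc& desc) { prim_desc_ = desc; }
  bool has_prim_desc() const { return prim_desc_.has_value(); }
  const MemoryPrimDesc& prim_desc() const { return *prim_desc_; }

 private:
  std::vector<float> data_;
  std::optional<MemoryPrimDesc> prim_desc_;
};

int main() {
  Tensor pool_out;
  // A pooling op records the layout it produced...
  pool_out.set_prim_desc({"nChw16c"});
  // ...and the consumer checks it before deciding whether to reorder.
  if (pool_out.has_prim_desc()) {
    std::cout << "output layout: " << pool_out.prim_desc().layout << "\n";
  }
  return 0;
}
```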
  14. 29 Jan, 2019 (1 commit)
  15. 27 Nov, 2018 (1 commit)
    • - conv2d transpose MKL-DNN · fb24690a
      Committed by Jacek Czaja
      test=develop
      
      - Added new header for MKLDNN reuse functionality
      
      - Extended conv2d_transpose GetExpectedKernelType for MKL-DNN support
      
      - Buildable conv transpose mkldnn and conv mkldnn using conv template
      
      - Conv2d transpose roughly implemented and buildable

      - Added modifications to conv2d transpose MKLDNN unit tests
      
      - Fix to UT of conv2d transpose mkldnn op
      
      - Wrong type of MKLDNN primitive was chosen for conv2d transpose
      
      - Hacks for conv2d transpose

      - UT enabled
      
      - Replaced copying loop with memcpy
      
      - Draft of passing lambda into AcquireMemory
      
      - Made reorder (IOHW->OIHW) to be called only once
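Several items in the commit above (replacing an element-wise copy with memcpy, passing a lambda into AcquireMemory, and running the IOHW->OIHW weight reorder only once) fit a cached acquire-or-create helper. The sketch below is a self-contained approximation of that pattern; `MemoryCache`, the `AcquireMemory` signature, and `ReorderIOHWToOIHW` are hypothetical, not the handler API used in the repository.

```cpp
#include <cstring>
#include <functional>
#include <iostream>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

// Minimal stand-in for a cached memory buffer.
using Memory = std::vector<float>;

class MemoryCache {
 public:
  // Return the buffer cached under `key`; on the first call, run the
  // user-provided initializer (e.g. a weights reorder) exactly once.
  std::shared_ptr<Memory> AcquireMemory(
      const std::string& key, const std::function<Memory()>& initialize) {
    auto it = cache_.find(key);
    if (it != cache_.end()) return it->second;
    auto mem = std::make_shared<Memory>(initialize());
    cache_[key] = mem;
    return mem;
  }

 private:
  std::unordered_map<std::string, std::shared_ptr<Memory>> cache_;
};

// Reorder conv-transpose weights from IOHW to OIHW layout, copying whole
// HxW planes with memcpy instead of an element-by-element loop.
Memory ReorderIOHWToOIHW(const Memory& src, int I, int O, int H, int W) {
  Memory dst(src.size());
  for (int i = 0; i < I; ++i)
    for (int o = 0; o < O; ++o)
      std::memcpy(&dst[(o * I + i) * H * W], &src[(i * O + o) * H * W],
                  sizeof(float) * H * W);
  return dst;
}

int main() {
  MemoryCache cache;
  Memory weights(2 * 3 * 2 * 2, 1.0f);  // I=2, O=3, H=W=2
  for (int run = 0; run < 3; ++run) {
    auto w = cache.AcquireMemory("conv2d_transpose/weights", [&]() {
      std::cout << "reordering weights IOHW -> OIHW\n";
      return ReorderIOHWToOIHW(weights, 2, 3, 2, 2);
    });
    (void)w;
  }  // the reorder message is printed only once across the three runs
  return 0;
}
```

Passing the reorder as a lambda keeps the cache generic: the helper decides whether to run it, so the expensive layout conversion happens only on the first acquisition.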