1. 11 Sep 2018, 2 commits
  2. 10 Sep 2018, 1 commit
  3. 21 Aug 2018, 1 commit
    • M
      Fuse Convolution and Eltwise Add into MKLDNN's Conv+Bias (#12669) · cd32ddac
      Committed by Michał Gallus
      * Fuse Convolution and Eltwise Add into Conv+Bias
      
      * Reduce bias branching at conv_mkldnn_op
      
      * Add MKLDNN build checks for Conv Bias
      
      * Conv-bias: check if bias input exists before assignment
      
      * Conv-bias: Remove Bias dim check from infershape
      
      It was causing the conv3d test to crash upon calling HasInput(Bias)
      cd32ddac
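      A minimal, self-contained sketch of the identity this fusion relies on: when the tensor added by the
      elementwise add is shaped like a per-output-channel bias, the add can be folded into the convolution's
      bias term. The naive 1x1-output Conv helper and every name below are hypothetical illustrations, not
      the MKLDNN conv kernel touched by the commit.

        // Conceptual sketch (not the Paddle/MKLDNN code itself): a per-channel
        // elementwise add after a convolution equals a convolution whose bias is
        // that per-channel tensor. Shapes are reduced to a 1x1 spatial output so
        // the identity is easy to verify.
        #include <cassert>
        #include <vector>

        // Naive "convolution": one output value per output channel.
        std::vector<float> Conv(const std::vector<float>& in,
                                const std::vector<std::vector<float>>& weights,
                                const std::vector<float>& bias) {
          std::vector<float> out(weights.size(), 0.f);
          for (size_t oc = 0; oc < weights.size(); ++oc) {
            for (size_t i = 0; i < in.size(); ++i) out[oc] += weights[oc][i] * in[i];
            out[oc] += bias[oc];
          }
          return out;
        }

        int main() {
          std::vector<float> input = {1.f, 2.f, 3.f};
          std::vector<std::vector<float>> w = {{0.5f, -1.f, 2.f}, {1.f, 1.f, 1.f}};
          std::vector<float> zero_bias = {0.f, 0.f};
          std::vector<float> residual = {0.25f, -0.75f};  // per-channel eltwise add operand

          // Unfused: conv without bias, then elementwise add of `residual`.
          std::vector<float> unfused = Conv(input, w, zero_bias);
          for (size_t oc = 0; oc < unfused.size(); ++oc) unfused[oc] += residual[oc];

          // Fused: the same tensor passed as the convolution bias.
          std::vector<float> fused = Conv(input, w, residual);

          for (size_t oc = 0; oc < fused.size(); ++oc) assert(unfused[oc] == fused[oc]);
          return 0;
        }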
  4. 09 Aug 2018, 1 commit
  5. 11 Jul 2018, 1 commit
  6. 30 Jun 2018, 1 commit
  7. 28 Jun 2018, 1 commit
  8. 21 Jun 2018, 3 commits
    • J
      - MKLDNN Softmax Grad Op · 98f3ad3b
      Committed by Jacek Czaja
      - Added hash function inside the MKLDNN softmax op to be used as a handle for storing primitives in a
      context
      
      - Style fixes to softmax mkldnn op
      
      - Fixes after review
      
      - Coding style
      
      - Fix to style
      
      - style fixes
      
      - style fix
      
      - style fixes
      
      - Fix to code style check
      
      - Rephrasing a comment
      
      Fix to broken merge
      
      Fixes to rebase
      
      Conflicts:
      	benchmark/fluid/models/machine_translation.py
      	cmake/external/mkldnn.cmake
      	paddle/fluid/operators/softmax_mkldnn_op.cc
      
      - Bumped revision of MKL-DNN up to have softmax backward primitive
      
      - Added selection of the MKLDNN softmax grad operator
      
      - First reuse of softmax backward
      
      - Reinvented reusing for softmax
      
      - Fix to crash in reinvented reuse
      
      - Clang format fixes
      
      - Clang format fixes
      
      - Improved softmax mkldnn reuse mechanism
      
      - clang format fixes
      
      - Fix to broken merge
      
      - Fix
      98f3ad3b
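      The entry above caches MKL-DNN primitives in a context under a key produced by a hash function, so a
      later call with the same shapes reuses the already-built primitive instead of recreating it. Below is a
      minimal sketch of that pattern under simplified assumptions; the cache class, GetHash, and the
      ExpensivePrimitive stand-in are hypothetical, not the actual Paddle device-context code.

        // Hash-keyed primitive reuse, sketched with plain STL containers.
        #include <iostream>
        #include <memory>
        #include <sstream>
        #include <string>
        #include <unordered_map>
        #include <utility>
        #include <vector>

        // Build a key from the operator's input dims plus a suffix such as "softmax_bwd".
        std::string GetHash(const std::vector<int>& dims, const std::string& suffix) {
          std::ostringstream key;
          for (int d : dims) key << d << "-";
          key << suffix;
          return key.str();
        }

        struct ExpensivePrimitive {  // stand-in for an MKL-DNN primitive
          explicit ExpensivePrimitive(std::string k) : key(std::move(k)) {}
          std::string key;
        };

        class PrimitiveCache {  // stand-in for the per-device context that stores primitives
         public:
          std::shared_ptr<ExpensivePrimitive> GetOrCreate(const std::string& key) {
            auto it = cache_.find(key);
            if (it != cache_.end()) return it->second;           // reuse existing primitive
            auto p = std::make_shared<ExpensivePrimitive>(key);  // create only on first use
            cache_[key] = p;
            return p;
          }

         private:
          std::unordered_map<std::string, std::shared_ptr<ExpensivePrimitive>> cache_;
        };

        int main() {
          PrimitiveCache ctx;
          std::string key = GetHash({8, 1000}, "softmax_bwd");
          auto first = ctx.GetOrCreate(key);
          auto second = ctx.GetOrCreate(key);  // same key -> same cached instance
          std::cout << (first == second ? "reused" : "recreated") << "\n";
          return 0;
        }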
    • T
      Revert "Merge pull request #11628 from PaddlePaddle/revert-11102-mozga-intel/Sum_mkldnn_layout" · d5fb8fa7
      Committed by tensor-tang
      This reverts commit 4d8e8ee2, reversing
      changes made to d6a9f005.
      d5fb8fa7
    • T
      Revert "MKLDNN layout: Support for sum operator" · 90780e22
      Committed by tensor-tang
      90780e22
  9. 19 Jun 2018, 1 commit
  10. 07 Jun 2018, 1 commit
    • M
      Mkldnn layout (#11040) · 3ff9ba0e
      Committed by mozga-intel
      * Add MKLDNN layout support in Paddle
      
      Add MKLDNN layout in Paddle so that an MKLDNN-friendly memory layout
      can be used in MKLDNN-enabled OP kernels. Before this commit, NCHW
      was hardcoded in all MKLDNN op kernels. As a result, a non-optimized
      execution path was selected in the MKLDNN primitives, which brought
      worse performance.
      Besides the framework change, three MKLDNN OP kernels were updated
      to use the new MKLDNN layout: conv, pool2d, and batch_norm. Other
      MKLDNN OP kernels also need to be updated in a similar way to
      achieve the best performance.
      
      * Add MKLDNN layout support in activation OP
      
      * Don't populate layout from input to output when kMKLDNN in
      
      * Refine pool mkldnn op kernel
      
      * MKLDNN layout
      
      * Remove the inheritance from tensor file
      
      * MKLDNN layout: refactoring
      
      * Remove additional #define to register new operator
      
      * Prepare mkldnn tests to work with layout
      3ff9ba0e
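      The commit above attaches a layout tag to tensors so MKLDNN kernels can keep data in an
      MKLDNN-friendly format and only reorder when a consumer expects something else. The sketch below
      shows that control flow in isolation; the enum values, Tensor struct, and Reorder/MkldnnKernel
      functions are illustrative assumptions, not Paddle's actual types.

        // Layout tag travels with the tensor; a reorder is paid only on a mismatch.
        #include <iostream>

        enum class DataLayout { kNCHW, kMKLDNN };

        struct Tensor {
          DataLayout layout = DataLayout::kNCHW;
        };

        // Pretend conversion between plain NCHW and an MKLDNN-friendly blocked format.
        void Reorder(Tensor* t, DataLayout target) {
          std::cout << "reorder performed\n";
          t->layout = target;
        }

        // An MKLDNN-enabled kernel wants its input in the MKLDNN layout.
        void MkldnnKernel(Tensor* in) {
          if (in->layout != DataLayout::kMKLDNN) {
            Reorder(in, DataLayout::kMKLDNN);  // only when the incoming layout differs
          }
          std::cout << "kernel ran on MKLDNN layout\n";
        }

        int main() {
          Tensor t;          // starts as NCHW, so the first kernel call reorders it
          MkldnnKernel(&t);
          MkldnnKernel(&t);  // already kMKLDNN: the reorder (and its cost) is skipped
          return 0;
        }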
  11. 21 May 2018, 1 commit
  12. 17 May 2018, 1 commit
    • J
      - Draft of reuse of pooling mkldnn operator · 5f133305
      Committed by Jacek Czaja
      - Finished draft of pooling operator reuse
      
      - Added use of gethash in PoolGrad
      
      - Removed diagnostic
      
      - Added pool mkldnn grad reusing of primitives
      
      - Added diagnostic
      
      - Removed diagnostic
      
      - Added dependency on mkldnn data type for pooling mkldnn
      
      - Added determination of mkldnn memory data type based on the op's template type
      
      - Compilation warning fix
      
      - Coding style fixes
      5f133305
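      The "gethash" mentioned above turns an operator's shapes and attributes into a string key under
      which forward and grad primitives are stored and later looked up. A small sketch of how such a key
      might be composed for pooling follows; the attribute order and function name are assumptions for
      illustration only.

        // Hypothetical key builder: same attributes -> same key -> primitive can be reused.
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        std::string PoolingHash(const std::vector<int>& input_dims,
                                const std::string& pooling_type,
                                const std::vector<int>& ksize,
                                const std::vector<int>& strides,
                                const std::vector<int>& paddings,
                                const std::string& suffix) {
          std::ostringstream key;
          auto append = [&key](const std::vector<int>& v) {
            for (int x : v) key << x << "-";
          };
          append(input_dims);
          key << pooling_type << "-";
          append(ksize);
          append(strides);
          append(paddings);
          key << suffix;  // e.g. distinguishes forward from grad lookups
          return key.str();
        }

        int main() {
          // Forward and grad share the attribute-derived prefix and differ only in the
          // suffix, so the grad kernel can find what the forward pass already created.
          std::cout << PoolingHash({1, 3, 224, 224}, "max", {2, 2}, {2, 2}, {0, 0}, "pool_fwd") << "\n";
          std::cout << PoolingHash({1, 3, 224, 224}, "max", {2, 2}, {2, 2}, {0, 0}, "pool_bwd") << "\n";
          return 0;
        }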
  13. 17 Apr 2018, 1 commit
  14. 10 Apr 2018, 1 commit
  15. 23 Mar 2018, 2 commits
  16. 07 Mar 2018, 1 commit
  17. 12 Feb 2018, 1 commit
  18. 10 Feb 2018, 1 commit
  19. 05 Jan 2018, 1 commit
  20. 03 Jan 2018, 2 commits
  21. 04 Jul 2017, 1 commit
  22. 29 Jun 2017, 2 commits
  23. 25 May 2017, 1 commit
  24. 09 Dec 2016, 1 commit
  25. 29 Aug 2016, 1 commit