1. 11 July 2018, 1 commit
  2. 30 June 2018, 1 commit
  3. 28 June 2018, 1 commit
  4. 21 June 2018, 3 commits
    • MKLDNN Softmax Grad Op · 98f3ad3b
      Committed by Jacek Czaja
      - Added a hash function inside the MKLDNN softmax op, used as a handle for storing primitives in a context (see the sketch after this entry)
      
      - Style fixes to softmax mkldnn op
      
      - Fixes after review
      
      - Coding style
      
      - Fix to style
      
      - style fixes
      
      - style fix
      
      - style fixes
      
      - Fix to code style check
      
      - Rephrasing a comment
      
      Fix to broken merge
      
      Fixes to rebase
      
      Conflicts:
      	benchmark/fluid/models/machine_translation.py
      	cmake/external/mkldnn.cmake
      	paddle/fluid/operators/softmax_mkldnn_op.cc
      
      - Bumped revision of MKL-DNN up to have softmax backward primitive
      
      - Added selection of the MKLDNN softmax grad operator
      
      - First reuse of softmax backward
      
      - Reinvented reusing for softmax
      
      - Fix to crash in reinvented reuse
      
      - Clang format fixes
      
      - Clang format fixes
      
      - Improved softmax mkldnn reuse mechanism
      
      - clang format fixes
      
      - Fix to broken merge
      
      - Fix
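The commit above keys MKL-DNN softmax primitives by a hash derived from the input, so later runs with the same shape reuse the stored primitives instead of rebuilding them. Below is a minimal sketch of that caching idea only; `PrimitiveCache`, `GetHash`, and `SoftmaxPrimitive` are illustrative names, not PaddlePaddle's actual API.

```cpp
#include <cstdint>
#include <memory>
#include <sstream>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for a constructed MKL-DNN softmax primitive.
struct SoftmaxPrimitive { /* would hold mkldnn::softmax_forward etc. */ };

// Build a string key from the input dimensions plus an op-specific suffix,
// mirroring the "hash used as a handle" idea from the commit message.
std::string GetHash(const std::vector<int64_t>& dims, const std::string& suffix) {
  std::ostringstream key;
  for (int64_t d : dims) key << d << "-";
  key << suffix;
  return key.str();
}

// Per-context cache: the first run creates the primitive, later runs with
// the same key reuse it instead of rebuilding it.
class PrimitiveCache {
 public:
  std::shared_ptr<SoftmaxPrimitive> GetOrCreate(const std::string& key) {
    auto it = blobs_.find(key);
    if (it != blobs_.end()) return it->second;          // reuse
    auto prim = std::make_shared<SoftmaxPrimitive>();   // create once
    blobs_[key] = prim;
    return prim;
  }

 private:
  std::unordered_map<std::string, std::shared_ptr<SoftmaxPrimitive>> blobs_;
};
```

Keying by shape plus a per-op suffix is what makes such a cache safe to share across iterations of the same graph: identical shapes hit the cached primitive, while a new shape simply creates a new entry.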
    • Revert "Merge pull request #11628 from PaddlePaddle/revert-11102-mozga-intel/Sum_mkldnn_layout" · d5fb8fa7
      Committed by tensor-tang
      This reverts commit 4d8e8ee2, reversing
      changes made to d6a9f005.
    • Revert "MKLDNN layout: Support for sum operator" · 90780e22
      Committed by tensor-tang
  5. 19 June 2018, 1 commit
  6. 07 June 2018, 1 commit
    • Mkldnn layout (#11040) · 3ff9ba0e
      Committed by mozga-intel
      * Add MKLDNN layout support in Paddle
      
      Add the MKLDNN layout to Paddle so that an MKLDNN-friendly memory layout
      can be used in MKLDNN-enabled OP kernels. Before this commit, NCHW was
      hardcoded in all MKLDNN op kernels; as a result, a non-optimized execution
      path was selected in the MKLDNN primitive, which hurt performance.
      Besides the framework change, three MKLDNN OP kernels were updated to use
      the new MKLDNN layout: conv, pool2d, and batch_norm. Other MKLDNN OP
      kernels also need to be updated in a similar way to achieve the best
      performance (see the sketch after this entry).
      
      * Add MKLDNN layout support in activation OP
      
      * Don't populate layout from input to output when kMKLDNN is in use
      
      * Refine pool mkldnn op kernel
      
      * MKLDNN layout
      
      * Remove the inheritance from the tensor file
      
      * MKLDNN layout: refactoring
      
      * Remove additional #define to register new operator
      
      * Prepare mkldnn tests to work with layout
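The layout change described above lets a tensor carry an MKLDNN-specific layout tag, so an MKLDNN-enabled kernel can keep an optimized memory format between ops instead of always assuming NCHW. The sketch below illustrates that dispatch idea only; the `DataLayout`, `Tensor`, and `RunMkldnnKernel` names here are simplified stand-ins, not Paddle's real types.

```cpp
#include <cstdio>

// Illustrative layout tag: a tensor remembers whether its memory is in the
// plain NCHW order or in an MKL-DNN chosen (possibly blocked) format.
enum class DataLayout { kNCHW, kMKLDNN };

struct Tensor {
  DataLayout layout = DataLayout::kNCHW;
  // ... data pointer, dims, and (for kMKLDNN) the chosen memory format
};

// An MKLDNN-enabled kernel can branch on the incoming layout: reuse the
// optimized format when the producer was another MKLDNN op, or reorder
// from NCHW when it was not.
void RunMkldnnKernel(const Tensor& input) {
  if (input.layout == DataLayout::kMKLDNN) {
    std::puts("input already in an MKL-DNN friendly format, no reorder needed");
  } else {
    std::puts("reorder NCHW input into the format the primitive prefers");
  }
}

int main() {
  Tensor t;
  RunMkldnnKernel(t);           // plain NCHW path
  t.layout = DataLayout::kMKLDNN;
  RunMkldnnKernel(t);           // optimized path
  return 0;
}
```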
  7. 21 May 2018, 1 commit
  8. 17 May 2018, 1 commit
    • Draft of reuse of pooling mkldnn operator · 5f133305
      Committed by Jacek Czaja
      - Finished draft of reuse of pooling operators
      
      - Added use of gethash in PoolGrad
      
      - Removed diagnostic
      
      - Added pool mkldnn grad reusing of primitives
      
      - Added diagnostic
      
      - Removed diagnostic
      
      - Added dependency on the MKLDNN data type for pooling MKLDNN
      
      - Added determining of the MKLDNN memory data type based on the op's template type (see the sketch after this entry)
      
      - Compilation warning fix
      
      - Coding style fixes
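The last bullets describe picking the MKL-DNN memory data type from the kernel's template parameter, so one pooling kernel can build fp32 or int8 primitives without duplication. A minimal sketch of such a mapping follows, assuming a hypothetical `ToMKLDNNDataType` helper; it is not the framework's actual function.

```cpp
#include <cstdint>
#include <type_traits>

// Illustrative mirror of mkldnn::memory::data_type values.
enum class MkldnnDataType { f32, s8, u8, undef };

// Map the kernel's template type T to the MKL-DNN memory data type.
template <typename T>
constexpr MkldnnDataType ToMKLDNNDataType() {
  if (std::is_same<T, float>::value)   return MkldnnDataType::f32;
  if (std::is_same<T, int8_t>::value)  return MkldnnDataType::s8;
  if (std::is_same<T, uint8_t>::value) return MkldnnDataType::u8;
  return MkldnnDataType::undef;
}

// Compile-time check: float kernels map to f32 MKL-DNN memory.
static_assert(ToMKLDNNDataType<float>() == MkldnnDataType::f32,
              "float kernels map to f32 MKL-DNN memory");
```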
  9. 17 April 2018, 1 commit
  10. 10 April 2018, 1 commit
  11. 23 March 2018, 2 commits
  12. 07 March 2018, 1 commit
  13. 12 February 2018, 1 commit
  14. 10 February 2018, 1 commit
  15. 05 January 2018, 1 commit
  16. 03 January 2018, 2 commits
  17. 04 July 2017, 1 commit
  18. 29 June 2017, 2 commits
  19. 25 May 2017, 1 commit
  20. 09 December 2016, 1 commit
  21. 29 August 2016, 1 commit