1. 01 Aug, 2018 1 commit
  2. 31 Jul, 2018 1 commit
  3. 21 Jun, 2018 1 commit
    • MKLDNN Softmax Grad Op · 98f3ad3b
      Committed by Jacek Czaja
      - Added a hash function inside the MKLDNN softmax op, used as a handle for storing primitives in a context
      
      - Style fixes to softmax mkldnn op
      
      - Fixes after review
      
      - Coding style
      
      - Fix to style
      
      - style fixes
      
      - style fix
      
      - style fixes
      
      - Fix to code style check
      
      - Rephrasing a comment
      
      fix to broken merge
      
      Fixes to rebase
      
      Conflicts:
      	benchmark/fluid/models/machine_translation.py
      	cmake/external/mkldnn.cmake
      	paddle/fluid/operators/softmax_mkldnn_op.cc
      
      - Bumped revision of MKL-DNN up to have softmax backward primitive
      
      - Added choosing MKLDNN softmax grad operator
      
      - First reuse of softmax backward
      
      - Reinvented reusing for softmax
      
      - Fix to crash in reinvented reuse
      
      - Clang format fixes
      
      - Clang format fixes
      
      - Improved softmax mkldnn reuse mechanism
      
      - clang format fixes
      
      - Fix to broken merge
      
      - Fix
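      The hashing-and-reuse idea described in this commit can be illustrated with a short sketch. Everything below (GetHash, PrimitiveCache, SoftmaxPrimitive) is a hypothetical simplification, not the actual Paddle code: a key derived from the tensor shape lets the kernel store a created primitive in a per-device context and fetch it again on later calls with the same shape.

      ```cpp
      #include <cstdint>
      #include <iostream>
      #include <memory>
      #include <sstream>
      #include <string>
      #include <unordered_map>
      #include <vector>

      // Build a reproducible string key from tensor dims plus an op-specific suffix.
      std::string GetHash(const std::vector<int64_t>& dims, const std::string& suffix) {
        std::ostringstream key;
        for (auto d : dims) key << d << "-";
        key << suffix;
        return key.str();
      }

      // Toy per-device "context": keeps created primitives alive so a later call
      // with the same shape can reuse them instead of rebuilding.
      struct PrimitiveCache {
        std::unordered_map<std::string, std::shared_ptr<void>> blobs;

        template <typename T>
        std::shared_ptr<T> Get(const std::string& key) const {
          auto it = blobs.find(key);
          if (it == blobs.end()) return nullptr;
          return std::static_pointer_cast<T>(it->second);
        }
        void Set(const std::string& key, std::shared_ptr<void> value) {
          blobs[key] = std::move(value);
        }
      };

      struct SoftmaxPrimitive { /* would wrap an mkldnn::softmax_forward object */ };

      std::shared_ptr<SoftmaxPrimitive> GetOrCreateSoftmax(
          PrimitiveCache* ctx, const std::vector<int64_t>& dims) {
        const std::string key = GetHash(dims, "@softmax");
        auto prim = ctx->Get<SoftmaxPrimitive>(key);
        if (!prim) {                    // first call for this shape: create and cache
          prim = std::make_shared<SoftmaxPrimitive>();
          ctx->Set(key, prim);
        }
        return prim;                    // later calls with the same shape reuse it
      }

      int main() {
        PrimitiveCache ctx;
        auto a = GetOrCreateSoftmax(&ctx, {32, 10});  // created
        auto b = GetOrCreateSoftmax(&ctx, {32, 10});  // reused
        std::cout << (a == b) << "\n";                // prints 1
      }
      ```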
  4. 11 Jun, 2018 1 commit
  5. 07 Jun, 2018 1 commit
    • Mkldnn layout (#11040) · 3ff9ba0e
      Committed by mozga-intel
      * Add MKLDNN layout support in Paddle
      
      Add MKLDNN layout in Paddle so that an MKLDNN-friendly memory layout
      can be used in MKLDNN-enabled OP kernels. Before this commit, NCHW
      was hardcoded in all MKLDNN op kernels. As a result, a non-optimized
      execution path was selected in the MKLDNN primitive, which brings
      worse performance.
      Besides the framework change, three MKLDNN OP kernels were updated
      to use the new MKLDNN layout: conv/pool2d/batch_norm.
      Other MKLDNN OP kernels also need to be updated in a similar way to
      achieve the best performance.
      
      * Add MKLDNN layout support in activation OP
      
      * Don't populate layout from input to output when kMKLDNN in
      
      * Refine pool mkldnn op kernel
      
      * MKLDNN layout
      
      * Remove the inheritance from tensor file
      
      * MKLDNN layout: refactoring
      
      * Remove additional #define to register new operator
      
      * Prepare mkldnn tests to work with layout
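      The layout change described above can be sketched as follows. The enum and helper here are simplified stand-ins, not Paddle's real types: the point is that an MKLDNN kernel's output stays tagged as kMKLDNN so the next MKLDNN kernel can consume it without a reorder, and a transform back to NCHW happens only when a non-MKLDNN kernel needs the data.

      ```cpp
      #include <iostream>

      enum class DataLayout { kNCHW, kNHWC, kMKLDNN, kAnyLayout };

      struct Tensor {
        DataLayout layout = DataLayout::kNCHW;
      };

      // Decide whether a layout transform is needed between two consecutive kernels.
      bool NeedTransformLayout(DataLayout from, DataLayout expected) {
        if (from == DataLayout::kMKLDNN && expected == DataLayout::kMKLDNN) {
          return false;  // stay in the MKLDNN-internal format, no reorder
        }
        if (from == DataLayout::kMKLDNN || expected == DataLayout::kMKLDNN) {
          return true;   // entering or leaving the MKLDNN world: reorder required
        }
        return from != expected;  // plain layouts: transform only if they differ
      }

      int main() {
        Tensor conv_out;
        conv_out.layout = DataLayout::kMKLDNN;  // produced by an MKLDNN conv kernel
        // A following MKLDNN pool2d kernel expects kMKLDNN: fast path, no transform.
        std::cout << NeedTransformLayout(conv_out.layout, DataLayout::kMKLDNN) << "\n";  // 0
        // A non-MKLDNN kernel expecting NCHW forces a reorder back to NCHW.
        std::cout << NeedTransformLayout(conv_out.layout, DataLayout::kNCHW) << "\n";    // 1
      }
      ```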
  6. 08 May, 2018 1 commit
    • Clean OpProtoAndCheckerMaker · 0e78cb69
      Committed by Yu Yang
      Do not use ctor
      
      * Reduce lines of code.
      * We can use a virtual function for Maker now.
      * The implementation does not care what the maker holds, so it is easier
      to refactor later.
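      A minimal sketch of the "do not use ctor" idea, with hypothetical simplified classes rather than Paddle's real ones: the maker describes its op inside a virtual Make() that the framework calls during op registration, instead of doing that work in a constructor.

      ```cpp
      #include <iostream>
      #include <string>

      class OpProtoAndCheckerMakerBase {
       public:
        virtual ~OpProtoAndCheckerMakerBase() = default;
        // Derived makers declare their inputs/outputs here instead of in a ctor,
        // so the base class controls when and how the description runs.
        virtual void Make() = 0;

       protected:
        void AddInput(const std::string& name, const std::string& comment) {
          std::cout << "input:  " << name << " -- " << comment << "\n";
        }
        void AddOutput(const std::string& name, const std::string& comment) {
          std::cout << "output: " << name << " -- " << comment << "\n";
        }
      };

      class SoftmaxOpMaker : public OpProtoAndCheckerMakerBase {
       public:
        void Make() override {
          AddInput("X", "Input tensor of softmax");
          AddOutput("Out", "Softmax of X along the last dimension");
        }
      };

      int main() {
        SoftmaxOpMaker maker;
        maker.Make();  // the framework would invoke this while registering the op
      }
      ```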
  7. 03 May, 2018 1 commit
    • Fix/fp64 (#10346) · f63ff90b
      Committed by dzhwinter
      * "fix double type error"
      
      * "fix ci"
      
      * "softmax fp64"
      
      * "fix momentum"
      
      * "fix ci"
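      As a rough illustration of what "softmax fp64" support amounts to (a generic sketch, not the Paddle kernel): the computation is templated on the element type and exercised for both float and double instead of float only.

      ```cpp
      #include <algorithm>
      #include <cmath>
      #include <iostream>
      #include <vector>

      // Numerically stable softmax over one row; assumes a non-empty input.
      template <typename T>
      std::vector<T> Softmax(const std::vector<T>& x) {
        T max_v = *std::max_element(x.begin(), x.end());  // subtract max for stability
        std::vector<T> y(x.size());
        T sum = 0;
        for (size_t i = 0; i < x.size(); ++i) {
          y[i] = std::exp(x[i] - max_v);
          sum += y[i];
        }
        for (T& v : y) v /= sum;
        return y;
      }

      int main() {
        auto yf = Softmax(std::vector<float>{1.f, 2.f, 3.f});   // fp32 path
        auto yd = Softmax(std::vector<double>{1.0, 2.0, 3.0});  // fp64 path
        std::cout << yf[2] << " " << yd[2] << "\n";
      }
      ```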
  8. 19 Apr, 2018 1 commit
  9. 17 Apr, 2018 2 commits
  10. 07 Apr, 2018 1 commit
  11. 21 Mar, 2018 3 commits
    • Softmax MKLDNN primitive integration · 3b95b55f
      Committed by Jacek Czaja
      removed diagnostic
      
      - Added Unit tests for Softmax MKLDNN Forward
      
      Added a fix for the div by 0 that could happen in cross_entropy backward
      
      Conflicts:
      	paddle/fluid/operators/CMakeLists.txt
      
      - Cosmetic fixes to SoftMax MKLDNN fluid operator
      
      Added missing softmax fluid operator file
      
      Disabled MKLDNN softmax operator by default
      
      Fix to softmax op unittest merge
      
      clang_formatter fixes
      
      clang_formatter fixes
      
      - Renamed the softmax mkldnn operator to maintain consistency
        across the codebase
      
      - updated comment
      
      fix to comment
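      The "div by 0" fix mentioned above can be illustrated with a hedged sketch (the epsilon and the function shape are assumptions, not the exact Paddle change): cross_entropy backward divides the label by the softmax probability, so that probability is clipped away from zero first.

      ```cpp
      #include <algorithm>
      #include <iostream>
      #include <vector>

      std::vector<float> CrossEntropyGrad(const std::vector<float>& softmax_out,
                                          const std::vector<float>& label,
                                          float dy) {
        constexpr float kEps = 1e-12f;  // lower clip bound, avoids dividing by zero
        std::vector<float> dx(softmax_out.size());
        for (size_t i = 0; i < softmax_out.size(); ++i) {
          const float p = std::max(softmax_out[i], kEps);
          dx[i] = -dy * label[i] / p;   // gradient of -label * log(p), scaled by dy
        }
        return dx;
      }

      int main() {
        // One row whose true-class probability has underflowed to zero.
        auto dx = CrossEntropyGrad({0.0f, 1.0f}, {1.0f, 0.0f}, 1.0f);
        std::cout << dx[0] << "\n";  // large but finite instead of -inf
      }
      ```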
    • small fix · b7801b9f
      Committed by Kexin Zhao
    • initial commit · 70e71227
      Committed by Kexin Zhao
  12. 15 Mar, 2018 1 commit
    • [Speed] Implement cudnn sequence softmax (#8978) · 128adf53
      Committed by dzhwinter
      * "add softmax cudnn functor support"
      
      * "add testing"
      
      * "refine cmakelist"
      
      * "sequence softmax forward speed up"
      
      * "add softmax grad"
      
      * "fix sequence softmax test"
      
      * "add double precision'
      
      * "fix softmax test"
      
      * "add softmax cudnn support"
      
      * "fix softmax cudnn test"
      
      * "add softmax to nn.py"
      
      * "fix compile bug"
      
      * "refine cmakelist"
      
      * "fix ci"
      
      * "fix based on comment"
      
      * "fix based on comments"
      
      * "fix ci"
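      For reference, a hedged host-side sketch of the kind of cuDNN call this change wraps in a functor (error checking, device allocation, and the Paddle plumbing are omitted; treating the input as n rows of c classes is an assumption):

      ```cpp
      #include <cuda_runtime.h>
      #include <cudnn.h>

      // Run softmax over c classes for each of the n rows stored in d_x (device memory).
      void SoftmaxForward(cudnnHandle_t handle, const float* d_x, float* d_y,
                          int n, int c) {
        cudnnTensorDescriptor_t desc;
        cudnnCreateTensorDescriptor(&desc);
        // Describe the data as an n x c x 1 x 1 tensor; with MODE_INSTANCE the
        // softmax is computed per row over the c classes.
        cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                   n, c, 1, 1);
        const float alpha = 1.0f, beta = 0.0f;
        cudnnSoftmaxForward(handle, CUDNN_SOFTMAX_ACCURATE,
                            CUDNN_SOFTMAX_MODE_INSTANCE,
                            &alpha, desc, d_x, &beta, desc, d_y);
        cudnnDestroyTensorDescriptor(desc);
      }
      ```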
  13. 12 Feb, 2018 1 commit
  14. 10 Feb, 2018 2 commits
  15. 10 Jan, 2018 1 commit
  16. 26 Dec, 2017 1 commit
  17. 20 Dec, 2017 1 commit
  18. 12 Dec, 2017 1 commit
    • Refine device context (#6433) · 61ec0b95
      Committed by QI JUN
      The main fixes are the following:
      
      - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
      - remove `eigen_device` interface in base class  `DeviceContext`
      - remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
      - remove unused `platform::EigenDeviceConverter`
      - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
      - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
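      The first bullet (templating on DeviceContext rather than Place) can be sketched with toy context types; the real Paddle contexts hold an Eigen device, CUDA streams, cuBLAS handles, and so on.

      ```cpp
      #include <iostream>

      struct CPUDeviceContext  { /* would own an Eigen CPU device */ };
      struct CUDADeviceContext { /* would own a cudaStream_t, cuBLAS handle, ... */ };

      // The functor is parameterized by the device context type, not by a Place enum,
      // so each specialization can use its context's resources directly.
      template <typename DeviceContext, typename T>
      struct SoftmaxFunctor {
        void operator()(const DeviceContext& ctx, const T* x, T* y, int n);
      };

      // CPU specialization: only CPU-side resources are visible here.
      template <typename T>
      struct SoftmaxFunctor<CPUDeviceContext, T> {
        void operator()(const CPUDeviceContext&, const T*, T*, int) {
          std::cout << "softmax computed with the CPU context\n";
        }
      };

      int main() {
        CPUDeviceContext cpu_ctx;
        SoftmaxFunctor<CPUDeviceContext, float>()(cpu_ctx, nullptr, nullptr, 0);
      }
      ```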
  19. 23 Nov, 2017 1 commit
  20. 05 Nov, 2017 1 commit
  21. 17 Oct, 2017 1 commit
  22. 07 Oct, 2017 1 commit
  23. 27 Sep, 2017 1 commit
    • Refactoring InferShape (#3946) · 9a9d50a6
      Committed by Qiao Longfei
      * init Infershape
      
      * add static InferShape interface
      
      * refactor add-op infershape
      
      * add AttrReader
      
      * add all maker's infershape
      
      * add all InferShape
      
      * add python infer api
      
      * add VarDesc interface
      
      * add python VarDesc and OpDesc interface
      
      * update python code
      
      * use infershape function to do shape inference
      
      * clean code
      
      * do not use pointer
      
      * refine code of op_proto_maker
      
      * add get_dims to VarDesc
      
      * refine the code
      
      * remove the dependency from operator to op registry
      
      * remove OpProtoAndCheckerMaker from operator
      
      * restore complete_add_op
      
      * add shape_infer_impl.h
      
      * code optimization
      
      * remove const return value
      
      * add fake BlockDesc class
      
      * optimize code
      
      * remove infer function in op_info
      
      * move InferShapeContextImpl to operator.h
      
      * optimize the interface of InferShapeContextBase
      
      * add temporary interface of new infershape
      
      * change add_op, clip_op, conv2d_op and activation_op
      
      * change all operators InferShape
      
      * fix SetDim
      
      * update cos_sim_op
      
      * update crop_op
      
      * update lookup_table_op
      
      * allocate tensor when call GetDim in InferShapeContext
      
      * update modified_huber_loss_op
      
      * update rowwise_add_op
      
      * update mean_op
      
      * update sequence_avg_pool_op
      
      * typo
      
      * remove old InferShape interface
      
      * can compile
      
      * fix or unit test
      
      * clean code
      
      * clean code
      
      * remove const before InferShapeContext
      
      * change InferenceContextBase to pointer
      
      * rename RunTime to Runtime, code clean
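      A compact sketch of the InferShape style this refactor moves toward. The context class below is a hypothetical simplification of the interface described in the commit: shape inference reads input dims and sets output dims through a context object, so the same function can back both the compile-time (VarDesc) and runtime (Tensor) paths.

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <map>
      #include <string>
      #include <vector>

      using DDim = std::vector<int64_t>;

      // Toy stand-in for an InferShapeContext: maps variable names to dims.
      class InferShapeContext {
       public:
        explicit InferShapeContext(std::map<std::string, DDim>* vars) : vars_(vars) {}
        DDim GetInputDim(const std::string& name) const { return vars_->at(name); }
        void SetOutputDim(const std::string& name, const DDim& dim) {
          (*vars_)[name] = dim;
        }

       private:
        std::map<std::string, DDim>* vars_;
      };

      // Softmax keeps the input shape: Out has the same dims as X.
      void SoftmaxInferShape(InferShapeContext* ctx) {
        ctx->SetOutputDim("Out", ctx->GetInputDim("X"));
      }

      int main() {
        std::map<std::string, DDim> vars{{"X", {32, 10}}};
        InferShapeContext ctx(&vars);
        SoftmaxInferShape(&ctx);
        assert(vars["Out"] == DDim({32, 10}));
      }
      ```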
  24. 21 Sep, 2017 1 commit
  25. 15 Sep, 2017 1 commit
  26. 13 Sep, 2017 1 commit
  27. 07 Sep, 2017 1 commit
  28. 06 Sep, 2017 1 commit
  29. 05 Sep, 2017 2 commits
  30. 03 Sep, 2017 1 commit
  31. 12 Aug, 2017 5 commits