1. 21 Mar 2018, 3 commits
    • Softmax MKLDNN primitive integration · 3b95b55f
      Committed by Jacek Czaja
      removed diagnostic
      
      - Added Unit tests for Softmax MKLDNN Forward
      
      Added a fix for div-by-0 that could happen in cross_entropy backward (see the sketch after this entry)
      
      Conflicts:
      	paddle/fluid/operators/CMakeLists.txt
      
      - Cosmetic fixes to SoftMax MKLDNN fluid operator
      
      Added missing softmax fluid operator file
      
      Disabled MKLDNN softmax operator by default
      
      Fix to softmax op unittest merge
      
      clang_formatter fixes
      
      clang_formatter fixes
      
      - Renamed softmax mkldnn operator to maintain consistency across the codebase
      
      - updated comment
      
      fix to comment
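      A note on the div-by-0 fix mentioned above: with hard labels, cross_entropy backward divides the upstream gradient by the predicted probability of the labeled class, so a probability of exactly zero must be clamped before dividing. The sketch below is a minimal standalone illustration, not the operator's actual code; the function name and the epsilon value are assumptions.

```cpp
// Minimal standalone sketch (not the operator's actual code) of why a
// div-by-0 guard matters in cross_entropy backward: with hard labels the
// gradient w.r.t. the predicted distribution is -dy / p[label], so a
// probability of exactly 0 must be clamped. kEps is an assumed threshold.
#include <algorithm>
#include <cstdio>
#include <vector>

std::vector<float> cross_entropy_grad(const std::vector<float>& prob,
                                      int label, float dy) {
  const float kEps = 1e-12f;  // assumed clamp threshold
  std::vector<float> dx(prob.size(), 0.0f);
  // With hard labels, gradient flows only through the labeled class.
  dx[label] = -dy / std::max(prob[label], kEps);
  return dx;
}

int main() {
  // Degenerate distribution: the labeled class got probability 0.
  std::vector<float> prob = {1.0f, 0.0f, 0.0f};
  std::vector<float> dx = cross_entropy_grad(prob, /*label=*/1, /*dy=*/1.0f);
  std::printf("dx[1] = %g (finite thanks to the clamp)\n", dx[1]);
  return 0;
}
```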
    • small fix · b7801b9f
      Committed by Kexin Zhao
    • initial commit · 70e71227
      Committed by Kexin Zhao
  2. 15 Mar 2018, 1 commit
    • [Speed] implement cudnn sequence softmax cudnn (#8978) · 128adf53
      Committed by dzhwinter
      * "add softmax cudnn functor support"
      
      * "add testing"
      
      * "refine cmakelist"
      
      * "sequence softmax forward speed up"
      
      * "add softmax grad"
      
      * "fix sequence softmax test"
      
      * "add double precision'
      
      * "fix softmax test"
      
      * "add softmax cudnn support"
      
      * "fix softmax cudnn test"
      
      * "add softmax to nn.py"
      
      * "fix compile bug"
      
      * "refine cmakelist"
      
      * "fix ci"
      
      * "fix based on comment"
      
      * "fix based on comments"
      
      * "fix ci"
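      For context on the "softmax cudnn functor" items above, the sketch below shows how a 2-D [N, C] softmax is typically dispatched to cuDNN: the matrix is described as an N x C x 1 x 1 NCHW tensor and cudnnSoftmaxForward is run in per-instance mode. This is a minimal sketch of the general cuDNN call, not the functor added in this PR; the helper name and error macro are assumptions.

```cpp
// Minimal sketch of dispatching a [N, C] softmax to cuDNN (not the functor
// added in this PR). x and y are device pointers holding n * c floats each.
#include <cudnn.h>
#include <cstdio>
#include <cstdlib>

#define CHECK_CUDNN(call)                                         \
  do {                                                            \
    cudnnStatus_t s = (call);                                     \
    if (s != CUDNN_STATUS_SUCCESS) {                              \
      std::fprintf(stderr, "cuDNN error: %s\n",                   \
                   cudnnGetErrorString(s));                       \
      std::exit(1);                                               \
    }                                                             \
  } while (0)

void softmax_forward_cudnn(cudnnHandle_t handle, const float* x, float* y,
                           int n, int c) {
  // Describe the 2-D matrix as an N x C x 1 x 1 NCHW tensor.
  cudnnTensorDescriptor_t desc;
  CHECK_CUDNN(cudnnCreateTensorDescriptor(&desc));
  CHECK_CUDNN(cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW,
                                         CUDNN_DATA_FLOAT, n, c, 1, 1));
  const float alpha = 1.0f, beta = 0.0f;
  // Accurate (max-subtracted) softmax, computed independently per row.
  CHECK_CUDNN(cudnnSoftmaxForward(handle, CUDNN_SOFTMAX_ACCURATE,
                                  CUDNN_SOFTMAX_MODE_INSTANCE, &alpha, desc,
                                  x, &beta, desc, y));
  CHECK_CUDNN(cudnnDestroyTensorDescriptor(desc));
}
```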
  3. 12 Feb 2018, 1 commit
  4. 10 Feb 2018, 2 commits
  5. 10 Jan 2018, 1 commit
  6. 26 Dec 2017, 1 commit
  7. 20 Dec 2017, 1 commit
  8. 12 Dec 2017, 1 commit
    • Refine device context (#6433) · 61ec0b95
      Committed by QI JUN
      The main fixes are:
      
      - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place` (pattern sketched after this entry)
      - remove `eigen_device` interface in base class  `DeviceContext`
      - remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
      - remove unused `platform::EigenDeviceConverter`
      - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
      - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
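      The first bullet above (templating on `DeviceContext` instead of `Place`) follows a common pattern: each backend specializes the same functor interface on its context type, and the registration macro instantiates kernels with that type. The sketch below is a minimal standalone illustration with stand-in classes, not Paddle's real `DeviceContext`, functors, or registration macros.

```cpp
// Standalone sketch of the "template on a device context, not a place" idea.
// CPUDeviceContext / CUDADeviceContext here are stand-ins, not Paddle types.
#include <cstdio>
#include <vector>

struct CPUDeviceContext {};   // stand-in for a CPU execution context
struct CUDADeviceContext {};  // stand-in for a CUDA execution context

template <typename DeviceContext, typename T>
struct ScaleFunctor;  // primary template, specialized per device

template <typename T>
struct ScaleFunctor<CPUDeviceContext, T> {
  void operator()(const CPUDeviceContext&, std::vector<T>* data, T factor) {
    for (T& v : *data) v *= factor;  // plain CPU loop
  }
};

// A CUDADeviceContext specialization would launch a kernel instead; in the
// same spirit, the renamed REGISTER_OP_CUDA_KERNEL macro from the commit
// above registers kernels instantiated with the CUDA context type.

int main() {
  CPUDeviceContext ctx;
  std::vector<float> v = {1.f, 2.f, 3.f};
  ScaleFunctor<CPUDeviceContext, float>()(ctx, &v, 2.f);
  std::printf("%g %g %g\n", v[0], v[1], v[2]);
  return 0;
}
```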
  9. 23 Nov 2017, 1 commit
  10. 05 Nov 2017, 1 commit
  11. 17 Oct 2017, 1 commit
  12. 07 Oct 2017, 1 commit
  13. 27 Sep 2017, 1 commit
    • Refactoring InferShape (#3946) · 9a9d50a6
      Committed by Qiao Longfei
      * init InferShape
      
      * add static InferShape interface
      
      * refactor add-op infershape
      
      * add AttrReader
      
      * add all maker's infershape
      
      * add all InferShape
      
      * add python infer api
      
      * add VarDesc interface
      
      * add python VarDesc and OpDesc interface
      
      * update python code
      
      * use infershape function to do shape inference
      
      * clean code
      
      * do not use pointer
      
      * refine code of op_proto_maker
      
      * add get_dims to VarDesc
      
      * refine the code
      
      * remove the dependency from operator to op registry
      
      * remove OpProtoAndCheckerMaker from operator
      
      * restore complete_add_op
      
      * add shape_infer_impl.h
      
      * code optimization
      
      * remove const return value
      
      * add fake BlockDesc class
      
      * optimize code
      
      * remove infer function in op_info
      
      * move InferShapeContextImpl to operator.h
      
      * optimize the interface of InferShapeContextBase
      
      * add temporary interface of new infershape
      
      * change add_op, clip_op, conv2d_op and activation_op
      
      * change all operators InferShape
      
      * fix SetDim
      
      * update cos_sim_op
      
      * update crop_op
      
      * update lookup_table_op
      
      * allocate tensor when calling GetDim in InferShapeContext (shape-inference pattern sketched after this entry)
      
      * update modified_huber_loss_op
      
      * update rowwise_add_op
      
      * update mean_op
      
      * update sequence_avg_pool_op
      
      * typo
      
      * remove old InferShape interface
      
      * can compile
      
      * fix or unit test
      
      * clean code
      
      * clean code
      
      * remove const before InferShapeContext
      
      * change InferenceContextBase to pointer
      
      * rename RunTime to Runtime, code clean
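      Several items above revolve around a context object with GetDim/SetDim-style accessors so that shape inference can run without real tensors. The sketch below is a minimal standalone illustration of that idea, not Paddle's actual InferShapeContext; the class layout and the softmax example are assumptions.

```cpp
// Standalone sketch of a shape-inference context: shapes are read and
// written through a small context object, so the same inference code can
// run on a graph description before any tensor is allocated.
#include <cassert>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

using DDim = std::vector<int64_t>;

class InferShapeContext {
 public:
  DDim GetDim(const std::string& name) const { return dims_.at(name); }
  void SetDim(const std::string& name, const DDim& d) { dims_[name] = d; }

 private:
  std::map<std::string, DDim> dims_;
};

// Shape inference for a softmax-like op: output shape equals input shape.
void SoftmaxInferShape(InferShapeContext* ctx) {
  DDim x = ctx->GetDim("X");
  assert(x.size() == 2 && "expect a 2-D input [batch, classes]");
  ctx->SetDim("Out", x);
}

int main() {
  InferShapeContext ctx;
  ctx.SetDim("X", {32, 10});
  SoftmaxInferShape(&ctx);
  DDim out = ctx.GetDim("Out");
  std::printf("Out: [%lld, %lld]\n", (long long)out[0], (long long)out[1]);
  return 0;
}
```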
  14. 21 Sep 2017, 1 commit
  15. 15 Sep 2017, 1 commit
  16. 13 Sep 2017, 1 commit
  17. 07 Sep 2017, 1 commit
  18. 06 Sep 2017, 1 commit
  19. 05 Sep 2017, 2 commits
  20. 03 Sep 2017, 1 commit
  21. 12 Aug 2017, 5 commits
  22. 08 Aug 2017, 3 commits
  23. 07 Aug 2017, 2 commits
  24. 05 Aug 2017, 1 commit
  25. 04 Aug 2017, 3 commits
  26. 03 Aug 2017, 1 commit
    • Softmax grad op (#3164) · d953611e
      Committed by Qiao Longfei
      * init softmax grad op
      
      * add compute code (gradient formula sketched after this entry)
      
      * export Backward to python
      
      * update test, export op.type to python
      
      * update python test, fix compute bug
      
      * update unit test
      
      * use eigen
      
      * optimize eigen code
      
      * add gpu test
      
      * register softmax_grad GPU kernel and fix test bug
      
      * typo
      
      * follow comments
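      The compute code referenced above implements the standard softmax gradient: for a row y = softmax(x) with upstream gradient dy, dx_i = y_i * (dy_i - sum_j dy_j * y_j). The sketch below is a plain-C++ illustration of that formula (the real kernel uses Eigen, per the bullets above); the function and variable names are assumptions.

```cpp
// Plain-C++ sketch of the softmax gradient for one row (the actual op uses
// Eigen expressions): dx_i = y_i * (dy_i - sum_j(dy_j * y_j)),
// where y = softmax(x) from the forward pass and dy is the upstream grad.
#include <cstdio>
#include <vector>

std::vector<float> softmax_grad_row(const std::vector<float>& y,
                                    const std::vector<float>& dy) {
  float dot = 0.0f;  // sum_j dy_j * y_j
  for (size_t j = 0; j < y.size(); ++j) dot += dy[j] * y[j];
  std::vector<float> dx(y.size());
  for (size_t i = 0; i < y.size(); ++i) dx[i] = y[i] * (dy[i] - dot);
  return dx;
}

int main() {
  std::vector<float> y = {0.1f, 0.2f, 0.7f};   // softmax output
  std::vector<float> dy = {0.0f, 0.0f, 1.0f};  // upstream gradient
  std::vector<float> dx = softmax_grad_row(y, dy);
  std::printf("dx = [%g, %g, %g]\n", dx[0], dx[1], dx[2]);
  return 0;
}
```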
  27. 01 Aug 2017, 1 commit
    • use operator context and infer context (#3024) · 61ebacbc
      Committed by Qiao Longfei
      * use operator context
      
      * optimize code
      
      * update net infershape
      
      * update InferShape
      
      * disable override InferShape(scope) in OperatorBase
      
      * change InferShapeImpl to InferShape
      
      * add template to OperatorContext Input/Output (accessor pattern sketched after this entry)
      
      * merge Input InputVar, Output OutputVar
      
      * change Inputs to MultiInput
      
      * fix conflict
      
      * fix MultiInput bugs and add unit test
      
      * rename KernelContext to ExecutionContext
      
      * clean code
      
      * change InferShape to protected
      
      * fix template bug
      
      * refine code
      
      * use InputVar instead of Input<Variable>
      
      * typo
      
      * optimize code
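      The templated Input/Output accessors mentioned above follow a simple pattern: the context keeps a name-to-variable table and the template returns entries as the requested type. The sketch below is a minimal standalone illustration with stand-in types, not Paddle's actual ExecutionContext or Tensor.

```cpp
// Standalone sketch of templated Input<T>/Output<T> accessors on an
// execution context; the Tensor and context here are toy stand-ins.
#include <cstdio>
#include <map>
#include <string>

struct Tensor {
  float value;  // toy payload standing in for real tensor data
};

class ExecutionContext {
 public:
  template <typename T>
  const T* Input(const std::string& name) const {
    return static_cast<const T*>(vars_.at(name));
  }
  template <typename T>
  T* Output(const std::string& name) const {
    return static_cast<T*>(vars_.at(name));
  }
  void SetVar(const std::string& name, void* var) { vars_[name] = var; }

 private:
  std::map<std::string, void*> vars_;  // type-erased variable table
};

int main() {
  Tensor x{3.0f};
  Tensor out{0.0f};
  ExecutionContext ctx;
  ctx.SetVar("X", &x);
  ctx.SetVar("Out", &out);
  // A kernel body reads inputs and writes outputs through the context.
  ctx.Output<Tensor>("Out")->value = ctx.Input<Tensor>("X")->value * 2.0f;
  std::printf("Out = %g\n", out.value);
  return 0;
}
```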