1. 15 Dec, 2020 1 commit
  2. 01 Dec, 2020 1 commit
  3. 25 Nov, 2020 1 commit
  4. 14 Oct, 2020 1 commit
  5. 17 Sep, 2020 1 commit
  6. 14 Sep, 2020 1 commit
  7. 03 Sep, 2020 1 commit
  8. 21 Aug, 2020 1 commit
    • support Baidu Kunlun AI Accelerator (#25959) · 138ecf24
      Committed by QingshuChen
      * support Baidu AI Accelerator
      * test=kunlun

      * minor
      * test=kunlun

      * support xpu op in separate file
      * test=kunlun

      * update XPU error message and remove duplicated code
      * test=kunlun

      * minor
      * test=kunlun

      * minor
      * test=kunlun
  9. 03 Jun, 2020 1 commit
  10. 12 Dec, 2018 1 commit
  11. 30 Sep, 2018 1 commit
  12. 03 Sep, 2018 1 commit
  13. 31 Aug, 2018 1 commit
  14. 27 Aug, 2018 1 commit
  15. 20 Jun, 2018 1 commit
  16. 16 May, 2018 1 commit
  17. 04 May, 2018 1 commit
  18. 03 May, 2018 1 commit
  19. 28 Apr, 2018 1 commit
  20. 25 Apr, 2018 2 commits
  21. 27 Mar, 2018 1 commit
  22. 17 Mar, 2018 1 commit
  23. 16 Mar, 2018 1 commit
  24. 09 Mar, 2018 1 commit
    • Add float16 GEMM math function on GPU (#8695) · 90215b78
      Committed by kexinzhao
      * test cpu float16 data transform
      
      * add isnan etc
      
      * small fix
      
      * fix containsNAN test error
      
      * add data_type transform GPU test
      
      * add float16 GPU example
      
      * fix error
      
      * fix GPU test error
      
      * initial commit
      
      * fix error
      
      * small fix
      
      * add more gemm fp16 tests
      
      * fix error
      
      * add utility function
  25. 07 Mar, 2018 1 commit
    • Integrate float16 into data_type_transform (#8619) · 266ccaa8
      Committed by kexinzhao
      * test cpu float16 data transform
      
      * add isnan etc
      
      * small fix
      
      * fix containsNAN test error
      
      * add data_type transform GPU test
      
      * add float16 GPU example
      
      * fix error
      
      * fix GPU test error
      
      * add context wait
  26. 12 Feb, 2018 1 commit
  27. 10 Feb, 2018 2 commits
  28. 03 Feb, 2018 1 commit
  29. 02 Jan, 2018 1 commit
    • Feature/transform (#7111) · 899a79cc
      Committed by dzhwinter
      * "fix data transform"
      
      * "data transformer"
      
      * "add device pool"
      
      * "add test"
      
      * "fix ci"
      
      * "fix datalayout implementation "
      
      * "fix based on comment"
  30. 26 Dec, 2017 2 commits
  31. 25 Dec, 2017 2 commits
  32. 15 Dec, 2017 1 commit
  33. 12 Dec, 2017 2 commits
    • Refine device context (#6433) · 61ec0b95
      Committed by QI JUN
      The main fixes are:
      
      - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
      - remove `eigen_device` interface in base class  `DeviceContext`
      - remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
      - remove unused `platform::EigenDeviceConverter`
      - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
      - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
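      The first fix above, templating math functors on a `DeviceContext` type instead of a `Place`, can be sketched in simplified form. This is a hypothetical illustration, not the actual Paddle code: `CPUDeviceContext` and `AddFunctor` here are minimal stand-ins, and the real classes carry far more state (streams, allocators, eigen devices).

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <vector>

      // Stand-in for a device context type; selecting the context type at
      // compile time (rather than branching on a runtime Place value) lets
      // each device get its own functor specialization.
      struct CPUDeviceContext {};

      // Primary template: functors are parameterized on DeviceContext.
      template <typename DeviceContext, typename T>
      struct AddFunctor;

      // CPU specialization; a CUDA context would get a separate one that
      // launches a kernel on the context's stream.
      template <typename T>
      struct AddFunctor<CPUDeviceContext, T> {
        void operator()(const CPUDeviceContext& ctx, const std::vector<T>& a,
                        const std::vector<T>& b, std::vector<T>* out) {
          (void)ctx;  // the real context would supply device resources
          out->resize(a.size());
          for (std::size_t i = 0; i < a.size(); ++i) {
            (*out)[i] = a[i] + b[i];
          }
        }
      };

      int main() {
        CPUDeviceContext ctx;
        std::vector<float> a{1.f, 2.f}, b{3.f, 4.f}, out;
        AddFunctor<CPUDeviceContext, float>()(ctx, a, b, &out);
        assert(out[0] == 4.f && out[1] == 6.f);
        return 0;
      }
      ```

      The design choice the commit describes follows from this: a `DeviceContext` carries everything a kernel needs (stream, allocator), so a bare `Place` parameter is both less informative and redundant.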
    • unify MKL macro definition · 69b44f2f
      Committed by tensor-tang
  34. 27 Nov, 2017 1 commit
  35. 16 Nov, 2017 1 commit
    • feature/while_grad_op (#5554) · 18f0c40a
      Committed by Yang Yang(Tony)
      * first commit
      
      * Python API for while op
      
      * Python Unittest for simple while_op forward
      
      * fix out to be list
      
      * Fix UT
      
      * VarType
      
      * Fix several bugs
      
      * Fix bug
      
      * Fix bug
      
      * Fix Bug
      
      * Fix bug
      
      * Fix unittest
      
      * Remove debug log
      
      * Add comments
      
      * add PADDLE_ENFORCE
      
      * while_grad_op first commit
      
      * Add `BlockDescBind::FindRecursiveOrCreateVar()` and fix bugs
      
      * not sure how to setdim of while outputs
      
      * push for test
      
      * add executor vlog
      
      * fix bug of while_op cond
      
      * Several enhancement for code
      
      1. Backward always infers shape & var type, since RENAME variables are
      created when building the backward operator but their shapes & var
      types are not yet inferred.
      2. Never dereference with `SomePtr->` directly, since any pointer
      returned from a function could be nullptr. Add `detail::Ref` to
      convert a pointer to a reference safely.
      3. Enhance error message for backward.
      4. Infer data type of variable in `sum` and `tensor_write`
      
      * Fix bugs of while_op gradient
      
      * Fix several bugs of while_op grad
      
      * fix fill zeros like
      
      * fix 3 >= 3
      
      * fix place holder shouldn't be null
      
      * fail on sum op
      
      * Fix SumOp of TensorList
      
      * clean up
      
      * pass while test
      
      * fix test_array_write_read
      
      * pass sum op
      
      * Support int/int64 for fill_constant_batch_size_like
      
      * Fix compile
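    The `detail::Ref` idea from item 2 of the commit message above can be sketched roughly as follows. This is an assumption-laden illustration: the real Paddle helper's signature and error-reporting mechanism (it uses `PADDLE_ENFORCE`-style checks) differ from the plain exception used here.

    ```cpp
    #include <cassert>
    #include <stdexcept>

    namespace detail {

    // Convert a possibly-null pointer into a reference, failing loudly at
    // the call site instead of crashing later on a silent nullptr
    // dereference.
    template <typename T>
    T& Ref(T* ptr, const char* msg = "unexpected nullptr") {
      if (ptr == nullptr) {
        throw std::runtime_error(msg);
      }
      return *ptr;
    }

    }  // namespace detail

    int main() {
      int x = 42;
      int* p = &x;
      // Valid pointer: Ref yields a usable reference.
      assert(detail::Ref(p) == 42);

      // Null pointer: Ref reports the error instead of dereferencing.
      int* null_p = nullptr;
      bool threw = false;
      try {
        detail::Ref(null_p, "FindVar returned nullptr");
      } catch (const std::runtime_error&) {
        threw = true;
      }
      assert(threw);
      return 0;
    }
    ```

    The point of the helper is that a function returning `T*` forces every caller to remember a null check; wrapping the result once in `Ref` turns the forgotten check into an immediate, descriptive failure.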