1. 26 Apr 2023, 1 commit
  2. 31 Mar 2023, 1 commit
    • register fluid kernels to phi [part 2] (#52044) · d05b73e4
      huangjiyi committed
      * update bipartite_match
      
      * update
      
      * fix bug
      
      * fix test
      
      * fix bug
      
      * fix Kunlun-KP-Build
      
      * Revert "fix Kunlun-KP-Build"
      
      This reverts commit ceab63cc23079fd6839c826bb52db893fb056355.
      
      * update
  3. 19 Dec 2022, 1 commit
  4. 29 Nov 2022, 1 commit
  5. 04 Nov 2022, 1 commit
  6. 28 Sep 2022, 1 commit
    • Remove the declaration of using Tensor in framework/tensor.h (#46432) · e12a905e
      Chen Weihang committed
      * remove needless using tensor
      
      * remove needless using tensor
      
      * resolve conflict
      
      * replace tensor using
      
      * fix format error
      
      * revert needless changing
      
      * fix rocm and npu compile error
      
      * fix cinn compile error
      
      * fix format error
      
      * fix mkldnn format error
      
      * fix mkldnn format error
      
      * fix cinn compile error
      
      * fix cinn compile error
      
      * fix cinn compile error
      
      * resolve conflict
  7. 01 Aug 2022, 1 commit
    • unify gpu context (#44740) · 86763023
      Leo Chen committed
      * remove cudaDeviceContext
      
      * remove more template
      
      * fix rocm compile
      
      * remove alias name CUDADeviceContext
      
      * fix compile
      
      * fix tests
      
      * revert changes
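
      A note on the pattern here: removing `CUDADeviceContext` works because it had first been reduced to an alias of the unified GPU context, so call sites could migrate before the name disappeared. A minimal C++ sketch of that alias-removal migration, with stand-in types rather than Paddle's real classes:

      ```cpp
      // Hedged sketch of the alias-removal migration; GPUContext here is a
      // stand-in, not phi's actual class.
      #include <iostream>

      namespace phi {
      struct GPUContext {
        void Wait() const { std::cout << "wait on unified GPU context\n"; }
      };
      }  // namespace phi

      namespace platform {
      // Step 1 of the migration: the old name survives as an alias so existing
      // kernels keep compiling while call sites are updated.
      using CUDADeviceContext = phi::GPUContext;
      }  // namespace platform

      int main() {
        // Step 2: call sites switch to the unified type...
        phi::GPUContext ctx;
        ctx.Wait();
        // ...and once nothing references platform::CUDADeviceContext, the alias
        // can be deleted, which is what this commit does.
        platform::CUDADeviceContext legacy = ctx;  // still valid until removal
        legacy.Wait();
        return 0;
      }
      ```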
  8. 28 Jul 2022, 1 commit
  9. 26 Jun 2022, 1 commit
  10. 01 Jun 2022, 1 commit
  11. 13 Apr 2022, 1 commit
  12. 15 Feb 2022, 1 commit
    • [PTen]Migrate proto::VarType outside of Pten (#39411) · 7e7e9404
      Aurelius84 committed
      * #1 migrate dist-related type() -> dtype()
      
      * move datatype function from pten -> fluid/framework
      
      * change type() in imperative into convert(dtype())
      
      * modify xx_tensor->type into xx_tensor->dtype
      
      * change the set_type interface and the caller
      
      * modify xx_tensor.type into xx_tensor.dtype
      
      * fix mutable_data(place, dtype())
      
      * change caller of mutable_data in pten and distributed
      
      * change the caller of mutable_data in fluid/framework
      
      * change the caller of mutable_data in imperative directory
      
      * mutable_data: inference
      
      * update the call of mutable_data
      
      * transfer MakePtenScalarArray MakePtenScalar ResetHolderWithType
      
      * pass the compile; the next step is to remove VarType from Pten
      
      * fix all and remove VarType from pten; succeeds on Linux, other platforms are the next task
      
      * fix conflict with develop
      
      * fix compiled error
      
      * Fix reset conversion
      
      * fix conflict
      
      * fix compiled problem
      
      * fix typo
      
      * Fix << in tensor_utils.cc
      
      * fix type->dtype
      
      * fix unittest
      
      * fix tensor init constructor
      
      * fix DataTypeSize for BFloat16
      
      * fix code style
      
      * fix npu compiled error
      
      * fix npu
      
      * compile npu successfully
      
      * fix conflict
      
      * fix conflict
      Co-authored-by: xiongkun <xiongkun03@baidu.com>
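
      Most of the bullets above apply one mechanical rewrite: call sites stop asking a tensor for its `proto::VarType`-style `type()` and use `dtype()` instead. A hedged C++ sketch of that rewrite with a toy `Tensor` and `DataType`, not Paddle's real definitions:

      ```cpp
      // Hedged sketch of the type() -> dtype() migration; Tensor and DataType
      // are toy stand-ins, not Paddle's real classes.
      #include <cassert>

      enum class DataType { FLOAT32, INT64 };  // plays the role of phi's DataType

      struct Tensor {
        DataType dtype_ = DataType::FLOAT32;
        // New interface: dtype() returns the framework-agnostic DataType.
        DataType dtype() const { return dtype_; }
        void set_dtype(DataType t) { dtype_ = t; }
      };

      // Before the migration, helpers compared proto::VarType enums obtained via
      // x.type(); after it, they take DataType directly, so every x.type() call
      // site becomes x.dtype().
      bool SameDType(const Tensor& a, const Tensor& b) {
        return a.dtype() == b.dtype();
      }

      int main() {
        Tensor a, b;
        b.set_dtype(DataType::INT64);
        assert(!SameDType(a, b));
        b.set_dtype(DataType::FLOAT32);
        assert(SameDType(a, b));
        return 0;
      }
      ```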
  13. 03 Dec 2021, 1 commit
  14. 24 Feb 2021, 1 commit
  15. 04 Feb 2021, 1 commit
  16. 30 Sep 2020, 1 commit
    • fix distributed error info (#27206) · 20fb01fb
      MRXLT committed
      * fix distributed error info
      
      * bug fix; notest
      
      * error info refine
      
      * update error info
      
      * update error info
      
      * update error info
      
      * bug fix
      
      * bug fix
      
      * bug fix
      
      * bug fix
  17. 11 Feb 2020, 1 commit
  18. 27 Aug 2019, 1 commit
  19. 02 Jul 2019, 1 commit
    • supports collective training with programs (#18392) · a873fa84
      Yi Liu committed
      1. Since the allreduce op has 4 reduce types, we split it into four ops, one per reduce type (a sketch follows this entry)
      2. We also refined the collective op code, e.g. we separated the collective op kernel into a CPUKernel and a CUDAKernel, and removed the device-specific DeviceContext template parameter since the target DeviceContext is already known
      3. We removed the newly added Collective op role to reduce the complexity of program and graph analysis
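
      Point 1 above replaces one attribute-switched allreduce with four single-purpose ops, so the reduce type is statically visible to program and graph analysis. A minimal C++ sketch of the split, with toy two-rank "tensors" and illustrative names:

      ```cpp
      // Hedged sketch of splitting an attribute-switched allreduce into
      // per-reduce-type ops; names and two-rank "tensors" are illustrative only.
      #include <algorithm>
      #include <cstdio>
      #include <vector>

      using Tensor = std::vector<float>;

      // Before: one op with a reduce_type attribute. After: one op per reduction,
      // so each op can register its CPUKernel and CUDAKernel separately.
      Tensor AllReduceSum(const Tensor& a, const Tensor& b) {
        Tensor out(a.size());
        for (size_t i = 0; i < a.size(); ++i) out[i] = a[i] + b[i];
        return out;
      }
      Tensor AllReduceMax(const Tensor& a, const Tensor& b) {
        Tensor out(a.size());
        for (size_t i = 0; i < a.size(); ++i) out[i] = std::max(a[i], b[i]);
        return out;
      }
      // ...AllReduceMin and AllReduceProd follow the same shape.

      int main() {
        Tensor rank0{1.f, 5.f}, rank1{3.f, 2.f};  // per-rank gradients
        Tensor s = AllReduceSum(rank0, rank1);
        Tensor m = AllReduceMax(rank0, rank1);
        std::printf("sum: %g %g  max: %g %g\n", s[0], s[1], m[0], m[1]);
        return 0;
      }
      ```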
  20. 27 Jun 2019, 1 commit
    • supports collective communicated training (#18175) · b7128bac
      HaoRen committed
      * fix redundant code in prepare context; optimize executor by caching create_variables
      test=develop
      
      * supports collective training in executor
      
      * make fetch_list runnable with variables, add more unittests for use_program_cache
      test=develop
      
      * fix comment
      test=develop
      
      * use unique name for nccl_id
      
      * supports output to stream in program_to_code
      
      * insert sync_comm_stream before regularization; add skip_op_callstack capability in program_to_code
      
      * set op role in collective training
      
      * add collective op role
      
      * remove orig file
      
      * add build optimizer by strategy
      
      * add collective strategy
      
      * refine collective strategy
      
      * add multi-process role maker
      
      * refine strategy building factory so that we can easily plugin more strategy
      
      * scale loss grad in collective sgd transpiler
      
      * add support for distributed fc
      
      * code format
      
      * revert some features for dist fc
      
      * add support for distributed fc training
      
      * test=develop
      add collective op unittest standard
      
      * test=develop
      remove the test_collective directory
      
      * remove slicegather test
      
      * code format for reducescatter
      
      * update attr of shard_index_op
      
      * Modify macro nccl_helper
      
      * remove test without distribute
      
      * macro collective_helper
      
      * macro update
      
      * test=develop
      update to support python 3.5
      
      * test=develop change gpu memory use to 0.1 when testing
      
      * test=develop
      update ut equal func
      
      * test=develop
      set flags to 1.5
      
      * test=develop fix pickle dump on py35
      
      * test=develop
      fix divide in slice and add sync_comm_stream
      update atol and rtol to 1e-05
      rm shard_index op and test
      modify read input from file to read from memory
      remove origin_program in framework and add i/o in c_sync_calc_stream
      
      * test=develop update unittest sync operator I/O
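
      The "scale loss grad in collective sgd transpiler" step above reflects standard data-parallel arithmetic: an allreduce-sum over N trainers yields N times the mean gradient, so each rank pre-scales its loss gradient by 1/N. A small C++ sketch of that reasoning, with the allreduce simulated locally:

      ```cpp
      // Hedged sketch of loss-gradient scaling in data-parallel SGD; the
      // allreduce(sum) across ranks is simulated by a local loop.
      #include <cstdio>
      #include <vector>

      int main() {
        const int nranks = 4;
        // Pretend every rank computed the same local gradient {4, 8}.
        std::vector<std::vector<float>> grads(nranks,
                                              std::vector<float>{4.f, 8.f});
        // Loss-grad scaling step inserted on each rank before communication.
        for (auto& g : grads)
          for (auto& v : g) v /= nranks;
        // Simulated allreduce(sum): summing the pre-scaled gradients yields
        // the mean gradient, as if a single trainer had seen all the data.
        std::vector<float> out(2, 0.f);
        for (const auto& g : grads)
          for (size_t i = 0; i < g.size(); ++i) out[i] += g[i];
        std::printf("averaged grad: %g %g\n", out[0], out[1]);  // 4 8
        return 0;
      }
      ```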
  21. 03 Apr 2019, 1 commit
    • Add Pixel shuffle OP (#15782) · 229dc932
      ruri committed
      * add pixel_shuffle op
      
      * add pixel_shuffle op, test=develop
      
      * rewrite code, test=develop
      
      * delete useless comment, test=develop
      
      * Refine pixel_shuffle_op and unit testing
      
      * refine code,test=develop
      
      * refine .cu,test=develop
      
      * fix unittest,test=develop
      
      * Fix unit testing
      test=develop
      
      * resolve conflict, test=develop
      
      * fix test, test=develop
      
      * fix API, test=develop
      
      * fix test datatype bug,test=develop
      
      * polish comments,test=develop
      
      * add API,test=develop
      
      * test=develop
      
      * Add Pixel_Shuffle OP,test=develop
      
      * support python3,test=develop
      
      * add missing <memory> include to fix travis CI bug, test=develop
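
      For reference, pixel shuffle rearranges an (N, C*r*r, H, W) tensor into (N, C, H*r, W*r), fanning channel blocks out into r-by-r spatial neighborhoods. A minimal C++ sketch of the index mapping; the loop nest is illustrative, not the op's actual kernel:

      ```cpp
      // Hedged sketch of the pixel-shuffle rearrangement (NCHW, upscale r):
      // (N, C*r*r, H, W) -> (N, C, H*r, W*r).
      #include <cstdio>
      #include <vector>

      std::vector<float> PixelShuffle(const std::vector<float>& in, int N,
                                      int C, int H, int W, int r) {
        std::vector<float> out(in.size());
        for (int n = 0; n < N; ++n)
          for (int c = 0; c < C; ++c)
            for (int h = 0; h < H; ++h)
              for (int w = 0; w < W; ++w)
                for (int a = 0; a < r; ++a)
                  for (int b = 0; b < r; ++b) {
                    int ic = (c * r + a) * r + b;  // source channel
                    int src = ((n * C * r * r + ic) * H + h) * W + w;
                    int dst = ((n * C + c) * (H * r) + h * r + a) * (W * r)
                              + w * r + b;
                    out[dst] = in[src];
                  }
        return out;
      }

      int main() {
        // A 1x4x1x1 input with r=2 becomes 1x1x2x2: channels fan out spatially.
        std::vector<float> in{0.f, 1.f, 2.f, 3.f};
        auto out = PixelShuffle(in, 1, 1, 1, 1, 2);
        std::printf("%g %g\n%g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
      }
      ```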
  22. 21 Mar 2019, 2 commits
  23. 12 Feb 2018, 1 commit
  24. 10 Feb 2018, 2 commits
  25. 26 Dec 2017, 1 commit
  26. 12 Dec 2017, 1 commit
    • Refine device context (#6433) · 61ec0b95
      QI JUN committed
      There are mainly the following fixes (a sketch of the first follows this entry):
      
      - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
      - remove the `eigen_device` interface in the base class `DeviceContext`
      - remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
      - remove unused `platform::EigenDeviceConverter`
      - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
      - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
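
      A sketch of the first fix: once the concrete `DeviceContext` is the template parameter, a functor can use device-specific members directly instead of recovering the context from a `Place`. The context types below are stand-ins, not Paddle's real ones:

      ```cpp
      // Hedged sketch: math functors templated on the concrete DeviceContext
      // rather than a Place tag. Types here are illustrative stand-ins.
      #include <cstdio>

      struct CPUDeviceContext {
        const char* name() const { return "CPU"; }
      };
      struct CUDADeviceContext {
        const char* name() const { return "CUDA"; }
      };

      // Before: template <typename Place> plus a Place-to-context lookup.
      // After: the context itself is the parameter, so the kernel can use
      // device-specific members directly (e.g. a CUDA stream).
      template <typename DeviceContext>
      struct SetConstant {
        void operator()(const DeviceContext& ctx, float* data, int n, float v) {
          std::printf("filling %d floats with %g on %s\n", n, v, ctx.name());
          for (int i = 0; i < n; ++i) data[i] = v;  // a CUDA path would launch a kernel
        }
      };

      int main() {
        float buf[4];
        CPUDeviceContext cpu;
        SetConstant<CPUDeviceContext>()(cpu, buf, 4, 1.5f);
        return 0;
      }
      ```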
  27. 03 Nov 2017, 1 commit
  28. 02 Nov 2017, 1 commit
  29. 13 Oct 2017, 1 commit
    • Adding the Adam Optimizer operator (#4733) · 11680037
      Abhinav Arora committed
      * add adam op
      
      moment1_out = beta1 * moment1 + (1 - beta1) * grad
      moment2_out = beta2 * moment2 + (1 - beta2) * grad * grad
      moment1_hat = moment1_out / (1 - beta1^t)
      moment2_hat = moment2_out / (1 - beta2^t)
      param_out = param - learning_rate * moment1_hat / (sqrt(moment2_hat) + epsilon)
      (a worked sketch of these updates follows this entry)
      
      * fix moment 2
      
      * Adding the Adam optimization operator
      
      * Adding more tests for Adam op
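
      The update rule above is plain Adam, so it transcribes directly. A minimal C++ sketch of a few steps on a scalar parameter, with a toy gradient standing in for the op's tensor inputs:

      ```cpp
      // Hedged sketch of the Adam update from the equations above; scalars
      // stand in for tensors, and the gradient is a toy f(p) = p^2.
      #include <cmath>
      #include <cstdio>

      int main() {
        const float beta1 = 0.9f, beta2 = 0.999f, lr = 1e-3f, eps = 1e-8f;
        float param = 1.0f, moment1 = 0.f, moment2 = 0.f;
        for (int t = 1; t <= 3; ++t) {
          float grad = 2.f * param;  // d/dp of p^2
          moment1 = beta1 * moment1 + (1 - beta1) * grad;
          moment2 = beta2 * moment2 + (1 - beta2) * grad * grad;
          float m1_hat = moment1 / (1 - std::pow(beta1, t));  // bias correction
          float m2_hat = moment2 / (1 - std::pow(beta2, t));
          param -= lr * m1_hat / (std::sqrt(m2_hat) + eps);
          std::printf("t=%d param=%f\n", t, param);
        }
        return 0;
      }
      ```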
  30. 07 Aug 2017, 1 commit
  31. 04 Aug 2017, 1 commit
  32. 31 Jul 2017, 1 commit
  33. 25 Jul 2017, 1 commit
  34. 19 Jul 2017, 1 commit