1. 15 Oct 2018, 1 commit
    • Add check for opt op (#13840) · 8e2fdc54
      committed by chengduo
      * add check for opt op
      
      * fix opt op
      test=develop
      
      * fix test fail
      test=develop
      
      * fix optimization doc
      test=develop
      
      * test=develop
  2. 08 May 2018, 1 commit
    • Clean OpProtoAndCheckerMaker · 0e78cb69
      committed by Yu Yang
      Do not use ctor
      
      * Reduce lines of code.
      * We can use a virtual function for Maker now.
      * The implementation does not care what the maker holds, so it is
      easier to refactor later.
  3. 06 May 2018, 1 commit
  4. 12 Feb 2018, 1 commit
  5. 10 Feb 2018, 2 commits
  6. 20 Dec 2017, 1 commit
  7. 12 Dec 2017, 2 commits
    • Refine device context (#6433) · 61ec0b95
      committed by QI JUN
      This change mainly makes the following fixes:
      
      - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
      - remove `eigen_device` interface in base class `DeviceContext`
      - remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
      - remove unused `platform::EigenDeviceConverter`
      - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
      - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
    • Updating the Latex equation for Adagrad (#6009) · 35420cdf
      committed by kavyasrinet
      * Updating the Latex equation for Adagrad
      
      * Fixing Latex equations for adadelta, adam and adamax
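For reference, the standard update rules behind the Adagrad and Adamax documentation that these commits fix (following Duchi et al., 2011 and Kingma & Ba, 2015; the commits themselves only adjust the LaTeX rendering) are:

```latex
% Adagrad: accumulate squared gradients, scale the step per-coordinate
G_t = G_{t-1} + g_t \odot g_t, \qquad
\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t} + \epsilon} \odot g_t

% Adamax: first moment plus an infinity-norm second moment
m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad
u_t = \max(\beta_2 u_{t-1}, |g_t|), \qquad
\theta_t = \theta_{t-1} - \frac{\eta}{1 - \beta_1^t} \cdot \frac{m_t}{u_t}
```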
  8. 06 Dec 2017, 1 commit
  9. 21 Nov 2017, 1 commit
  10. 05 Nov 2017, 1 commit
  11. 20 Oct 2017, 1 commit
  12. 17 Oct 2017, 1 commit
  13. 10 Oct 2017, 2 commits
    • Fix compile error in develop branch · 92add2a2
      committed by Yu Yang
    • Implementing the Adamax optimizer operator (#4538) · 4cb5bd90
      committed by Abhinav Arora
      * Implementing the Adamax optimizer step operator
      * Adding unit tests for adamax_op
      
      * Changing learning rate and time step to inputs from attributes
      
      * Changing learning rate and time step to input(tensors)
      
      * Making the Adamax operator conform to naming convention
      
      * Removing Tensor<float> from comments
      
      * Rectifying the Adamax implementation
      
      * Changing Unit Test values and adding comments
      
      * Changing Unit Test to test multiple steps