- 22 Feb 2019, 2 commits

Committed by tensor-tang
* Revert "Optimze Gelu with MKL Erf function (#15770)"; this reverts commit 676995c8.
* test=develop

Committed by Yihua Xu
* Optimize the gelu operator.
* Set up the low-accuracy mode of the MKL ERF function. test=develop
* Only enable MKLML ERF when the OS is Linux.
* Use the special MKLML build that includes the vmsErf function to verify the gelu MKL kernel. test=develop
* Add the CUDA macro to avoid NVCC's compile issue. test=develop
* Add TODO comments for the MKLML library modification. test=develop
* Clean code. test=develop
* Add a comment on the macro for the NVCC compiler. test=develop

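For context, these MKL commits target the exact erf form of GELU, gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))). Below is a minimal NumPy/SciPy reference sketch of that formula, not the Paddle MKL kernel itself (which calls MKL's vectorized erf routine):

```python
import numpy as np
from scipy.special import erf

def gelu_erf(x: np.ndarray) -> np.ndarray:
    """Reference GELU via the exact erf formulation: 0.5 * x * (1 + erf(x / sqrt(2)))."""
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

print(gelu_erf(np.linspace(-3.0, 3.0, 7)))
```
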
- 12 Dec 2018, 1 commit

Committed by Yibing Liu
* Fix the gelu backward pass to avoid NaN. test=develop
* Remove unnecessary calls. test=develop

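For reference, the erf-form gelu gradient stays finite for large |x|; here is a hedged NumPy/SciPy sketch of that standard closed form, not necessarily the exact expression used in this fix:

```python
import numpy as np
from scipy.special import erf

def gelu_grad_erf(x, dout):
    """d/dx [0.5 * x * (1 + erf(x / sqrt(2)))] = Phi(x) + x * phi(x), chained with dout."""
    cdf = 0.5 * (1.0 + erf(x / np.sqrt(2.0)))          # standard normal CDF Phi(x)
    pdf = np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)  # standard normal PDF phi(x)
    return dout * (cdf + x * pdf)
```
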
- 05 Dec 2018, 1 commit

Committed by chengduo
* Expose square. test=develop
* Fix activation. test=develop
* Add square API. test=develop
* Add necessary op.
* Refine code.
* Fix API.spec. test=develop
* Fix unit test. test=develop
* Add unit test sparse_grad_clip. test=develop
* Fix API.spec. test=develop
* Remove Mac test for test_gradient_clip. test=develop
* Remove selectedrows_mul_tensor. test=develop

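As a quick reminder of what the exposed square op computes elementwise, a hedged NumPy sketch of the forward and backward rules:

```python
import numpy as np

def square_forward(x):
    return x * x

def square_backward(x, dout):
    # d(x^2)/dx = 2x, chained with the upstream gradient
    return 2.0 * x * dout
```
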
- 27 Nov 2018, 1 commit

Committed by Clementine

- 26 Nov 2018, 1 commit

Committed by minqiyang
* test=develop

- 08 Nov 2018, 1 commit

Committed by minqiyang
* Fix code to pass the cpplint syntax check. test=develop

- 07 Nov 2018, 1 commit

Committed by chengduo
* Add fp16 backward support. test=develop
* Add sum_op fp16 test.
* Disable test_dist_save_load. test=develop
* Add check_grad for sum.
* Add unit test for softmax_grad fp16. test=develop
* Add scale_op unit test.
* Add mul_grad_op unit test for fp16.
* Add cross_entropy_grad and mean_grad unit tests for fp16. test=develop
* Fix cross_entropy unit test.
* Add pool2d fp16 unit test.
* Refine conv2d fp16 unit test. test=develop
* Refine activation unit test. test=develop
* Fix CI. test=develop
* Follow zhihong's comment; copied from https://github.com/PaddlePaddle/Paddle/pull/12796. test=develop

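A hedged sketch of the fp16 test pattern these commits describe: run the same backward rule in float16 and float32 and compare with a relaxed tolerance. The names and tolerance below are illustrative, not the actual Paddle unit tests:

```python
import numpy as np

def sum_grad(dout, num_inputs):
    # The gradient of an elementwise sum w.r.t. each input is just the upstream gradient.
    return [dout.copy() for _ in range(num_inputs)]

dout16 = np.random.rand(4, 8).astype(np.float16)
grads16 = sum_grad(dout16, num_inputs=3)
grads32 = sum_grad(dout16.astype(np.float32), num_inputs=3)

for g16, g32 in zip(grads16, grads32):
    # fp16 checks need a looser tolerance than the usual fp32 gradient checks.
    np.testing.assert_allclose(g16.astype(np.float32), g32, atol=1e-3)
```
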
- 03 Sep 2018, 1 commit

Committed by dzhwinter

- 25 Aug 2018, 1 commit

Committed by dzhwinter

- 17 Aug 2018, 1 commit

- 16 Aug 2018, 1 commit

Committed by dzhwinter
* Cherry-pick operator changes.
* Remove duplicated code.
* Add constant setter.
* Add get expected kernel.
* Fix CI.
* Add fill constant.

- 25 Jun 2018, 1 commit

- 12 Jun 2018, 1 commit

Committed by sneaxiy

- 16 Apr 2018, 1 commit

Committed by dzhwinter

- 10 Apr 2018, 1 commit

Committed by Kexin Zhao

- 09 Apr 2018, 1 commit

Committed by Abhinav Arora

- 29 Mar 2018, 1 commit

Committed by chengduoZH

- 28 Mar 2018, 1 commit

Committed by chengduoZH

- 23 Mar 2018, 3 commits

Committed by Krzysztof Binias

Committed by Krzysztof Binias

Committed by Krzysztof Binias

- 21 Mar 2018, 1 commit

Committed by Kexin Zhao

- 12 Feb 2018, 1 commit

Committed by qingqing01

- 10 Feb 2018, 2 commits

- 29 Jan 2018, 1 commit

Committed by Qiao Longfei

- 03 Jan 2018, 1 commit

Committed by Yang Yu

- 26 Dec 2017, 2 commits

- 12 Dec 2017, 1 commit

Committed by QI JUN
The main fixes are:
- Take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`.
- Remove the `eigen_device` interface from the base class `DeviceContext`.
- Remove the `GetEigenDevice` interface from `ExecutionContext` and the base class `DeviceContext`.
- Remove the unused `platform::EigenDeviceConverter`.
- Rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`.
- Rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`.

- 07 Dec 2017, 1 commit

Committed by Abhinav Arora

- 26 Nov 2017, 1 commit

Committed by dzhwinter
* Add floor, ceil, and round ops.
* Reuse the zero gradient.
* Fix divide-by-zero.
* Fix NumPy floor error.

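floor, ceil, and round are piecewise constant, so their gradient is zero almost everywhere, which is what reusing the zero gradient refers to. A hedged NumPy sketch:

```python
import numpy as np

def floor_ceil_round(x):
    return np.floor(x), np.ceil(x), np.round(x)

def step_fn_grad(x, dout):
    # These ops are flat between the points where they jump, so dL/dx is zero almost everywhere.
    return np.zeros_like(x)
```
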
- 03 Nov 2017, 1 commit

Committed by Kexin Zhao

- 31 Oct 2017, 1 commit

Committed by QI JUN
* Reimplement the pow operator.
* Add the pow_grad operator.
* Fix code style.
* Fix build error.
* Fix op_test bug.
* Revert the pow operator.
* Add a FIXME comment.

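For reference, a pow op with a constant exponent attribute and its gradient, as a hedged NumPy sketch (the `factor` name and usage here are illustrative, not a confirmed attribute of the Paddle op):

```python
import numpy as np

def pow_forward(x, factor):
    return np.power(x, factor)

def pow_backward(x, dout, factor):
    # d(x^factor)/dx = factor * x^(factor - 1), chained with the upstream gradient
    return dout * factor * np.power(x, factor - 1.0)
```
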
- 27 Oct 2017, 1 commit

Committed by Yu Yang
* Simplify gradient check.
* Stash.
* Extract apply_backward_pass to backward.py; rename apply_backward_pass to append_backward_ops.
* Use the graph API to check gradients.
* Fix CI.
* Fix backward for double precision.
* Stash.
* Fix CI.
* Ignore the GRU test.
* Ignore the xe op.
* Fix CI.
* Fix the softmax-with-xe gradient; the correct equation should be IG = OG * d_softmax_with_xe().
* Fix typo.
* Fix merge error.
* Disable LRN.

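At its core, the gradient check mentioned here compares analytic gradients against a central-difference estimate. A hedged, framework-free sketch of that idea (not the actual Paddle implementation):

```python
import numpy as np

def numeric_gradient(f, x, eps=1e-5):
    """Central-difference estimate of df/dx for a scalar-valued f, used to cross-check analytic gradients."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        f_plus = f(x)
        x[idx] = orig - eps
        f_minus = f(x)
        x[idx] = orig
        grad[idx] = (f_plus - f_minus) / (2.0 * eps)
        it.iternext()
    return grad

# Example: the analytic gradient of sum(x**2) is 2x.
x = np.random.rand(3, 4)
assert np.allclose(numeric_gradient(lambda v: np.sum(v ** 2), x), 2.0 * x, atol=1e-4)
```
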
- 13 Oct 2017, 1 commit

Committed by Abhinav Arora
* Add the hard sigmoid activation.
* Add a comment that the slope must be positive.
* Fix a grammatical mistake in a comment.

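Hard sigmoid is a piecewise-linear approximation of the logistic sigmoid. A hedged NumPy sketch with illustrative slope/offset defaults (the commit's constraint is only that the slope be positive):

```python
import numpy as np

def hard_sigmoid(x, slope=0.2, offset=0.5):
    # Piecewise-linear sigmoid approximation: clip(slope * x + offset, 0, 1); slope is assumed positive.
    return np.clip(slope * x + offset, 0.0, 1.0)
```
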
- 12 Oct 2017, 1 commit

Committed by Abhinav Arora
* Add the thresholded_relu op.
* Add a test for the thresholded relu op.

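Thresholded ReLU only passes inputs that exceed a threshold. A hedged NumPy sketch (the default threshold below is illustrative):

```python
import numpy as np

def thresholded_relu(x, threshold=1.0):
    # y = x where x > threshold, else 0
    return np.where(x > threshold, x, 0.0)
```
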
- 11 Oct 2017, 2 commits

Committed by kexinzhao
* Implement softplus.
* Several small fixes.

Committed by kavyasrinet
* Implement the hardShrink activation.
* Fix the unit test.

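For reference, numerically stable softplus and hard shrink, as a hedged NumPy sketch (the hard-shrink threshold default is illustrative):

```python
import numpy as np

def softplus(x):
    # log(1 + exp(x)), rewritten as max(x, 0) + log1p(exp(-|x|)) to avoid overflow for large x.
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def hard_shrink(x, threshold=0.5):
    # Zero out values whose magnitude is at or below the threshold; pass the rest through unchanged.
    return np.where(np.abs(x) > threshold, x, 0.0)
```
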