- 17 Mar 2018, 1 commit
  - Committed by Kexin Zhao
- 16 Mar 2018, 1 commit
  - Committed by yangyaming
- 12 Mar 2018, 1 commit
  - Committed by Kexin Zhao
- 09 Mar 2018, 1 commit
  - Committed by kexinzhao
    * test cpu float16 data transform
    * add isnan etc (a rough sketch follows this entry)
    * small fix
    * fix containsNAN test error
    * add data_type transform GPU test
    * add float16 GPU example
    * fix error
    * fix GPU test error
    * initial commit
    * fix error
    * small fix
    * add more gemm fp16 tests
    * fix error
    * add utility function
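The "add isnan etc" item above is only a task note. As a minimal, hypothetical illustration of what an isnan check for half-precision data involves, the C++ sketch below works directly on raw IEEE 754 half bit patterns rather than on Paddle's actual float16 class; all names are illustrative.

```cpp
// Minimal sketch, assuming nothing about Paddle's float16 type: a half value
// is NaN when all exponent bits are set and the mantissa is non-zero.
#include <cstdint>
#include <cstdio>

bool HalfIsNaN(std::uint16_t bits) {
  const std::uint16_t exponent_mask = 0x7C00;  // bits 10-14
  const std::uint16_t mantissa_mask = 0x03FF;  // bits 0-9
  return (bits & exponent_mask) == exponent_mask && (bits & mantissa_mask) != 0;
}

int main() {
  std::printf("%d\n", HalfIsNaN(0x7E00));  // quiet NaN pattern -> 1
  std::printf("%d\n", HalfIsNaN(0x7C00));  // +infinity -> 0
  std::printf("%d\n", HalfIsNaN(0x3C00));  // 1.0 -> 0
}
```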
- 12 Feb 2018, 1 commit
  - Committed by qingqing01
- 10 Feb 2018, 2 commits
- 03 Feb 2018, 1 commit
  - Committed by chengduoZH
- 27 Dec 2017, 1 commit
  - Committed by qingqing01
- 26 Dec 2017, 1 commit
  - Committed by qingqing01
- 25 Dec 2017, 2 commits
- 18 Dec 2017, 1 commit
  - Committed by QI JUN
    * add more place_test and rename Cudnn to CUDNN
    * fix ci
- 14 Dec 2017, 1 commit
  - Committed by dzhwinter
    * "derived cudnnDevice context"
    * "leave remove cudnn handle from CUDADeviceContext"
    * "fix math function error"
- 12 Dec 2017, 1 commit
  - Committed by QI JUN
    There are mainly the following fixes:
    - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place` (see the sketch after this entry)
    - remove the `eigen_device` interface in the base class `DeviceContext`
    - remove the `GetEigenDevice` interface in `ExecutionContext` and the base class `DeviceContext`
    - remove the unused `platform::EigenDeviceConverter`
    - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
    - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
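As a hedged illustration of the first item above (templating on the device-context type instead of on a `Place`), the self-contained C++ sketch below shows the general pattern only; `CPUDeviceContext`, `CUDADeviceContext`, and `SetConstant` here are simplified stand-ins, not Paddle's actual classes.

```cpp
// Sketch: a math functor selected by the device-context type it runs on,
// so no separate Place template parameter is needed.
#include <vector>

struct CPUDeviceContext {};   // stand-in for platform::CPUDeviceContext
struct CUDADeviceContext {};  // stand-in for platform::CUDADeviceContext

template <typename DeviceContext, typename T>
struct SetConstant;  // primary template, specialized per context type

template <typename T>
struct SetConstant<CPUDeviceContext, T> {
  void operator()(const CPUDeviceContext& /*ctx*/, std::vector<T>* tensor, T value) {
    for (T& x : *tensor) x = value;  // plain CPU loop
  }
};

// A SetConstant<CUDADeviceContext, T> specialization would launch a GPU kernel.

int main() {
  CPUDeviceContext ctx;
  std::vector<float> t(4);
  SetConstant<CPUDeviceContext, float>()(ctx, &t, 1.0f);  // t = {1, 1, 1, 1}
}
```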
- 23 Nov 2017, 1 commit
  - Committed by dangqingqing
- 16 Nov 2017, 1 commit
  - Committed by Yang Yang(Tony)
    * first commit
    * Python API for while op
    * Python Unittest for simple while_op forward
    * fix out to be list
    * Fix UT
    * VarType
    * Fix several bugs
    * Fix bug
    * Fix bug
    * Fix Bug
    * Fix bug
    * Fix unittest
    * Remove debug log
    * Add comments
    * add PADDLE_ENFORCE
    * while_grad_op first commit
    * Add `BlockDescBind::FindRecursiveOrCreateVar()` and fix bugs
    * not sure how to setdim of while outputs
    * push for test
    * add executor vlog
    * fix bug of while_op cond
    * Several enhancements to the code:
      1. Backward always infers shape & var type, since RENAME variables are created when building the backward operator but their shapes & var types are not yet inferred.
      2. Never use SomePtr-> directly, since every pointer could be nullptr if it is a function return value. Add `detail::Ref` to cast a pointer to a reference safely (a sketch of this pattern follows this entry).
      3. Enhance error messages for backward.
      4. Infer the data type of variables in `sum` and `tensor_write`.
    * Fix bugs of while_op gradient
    * Fix several bugs of while_op grad
    * fix fill zeros like
    * fix 3 >= 3
    * fix place holder shouldn't be null
    * fail on sum op
    * Fix SumOp of TensorList
    * clean up
    * pass while test
    * fix test_array_write_read
    * pass sum op
    * Support int/int64 for fill_constant_batch_size_like
    * Fix compile
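A hedged sketch of the pointer-to-reference helper idea mentioned in item 2 above. This shows the general pattern only, not the actual `detail::Ref` implementation; the `Var` type and the `FindVar` call in the comment are hypothetical names.

```cpp
// Sketch: turn a pointer into a reference, failing loudly on nullptr
// instead of silently dereferencing it.
#include <stdexcept>
#include <string>

template <typename T>
T& Ref(T* ptr, const std::string& message = "unexpected null pointer") {
  if (ptr == nullptr) {
    throw std::runtime_error(message);
  }
  return *ptr;
}

// Usage idea: write Ref(FindVar("x")).name() instead of FindVar("x")->name(),
// so a null return value raises a clear error rather than crashing.

struct Var { int id = 0; };

int main() {
  Var v;
  Var* maybe = &v;
  Ref(maybe).id = 7;      // fine: non-null pointer
  // Ref<Var>(nullptr);   // would throw std::runtime_error
}
```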
- 14 Nov 2017, 1 commit
  - Committed by dangqingqing
- 13 Nov 2017, 1 commit
  - Committed by dangqingqing
- 11 Nov 2017, 2 commits
  - Committed by dangqingqing
  - Committed by emailweixu
    The TensorSetConstant struct is defined in both math_function.cc and math_function.cu. Somehow the release build handles this correctly, but in a debug build set_constant_with_place() in math_function.cu picks up the TensorSetConstant from math_function.cc and crashes.
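One common way to avoid this kind of clash between identically named helpers in different translation units is to give each definition internal linkage. The sketch below shows that general pattern only, under the assumption that the two copies are otherwise independent; it is not the actual fix applied in Paddle.

```cpp
// Sketch: a file-local helper. An anonymous namespace gives the struct
// internal linkage, so an identically named struct in another .cc/.cu file
// cannot collide with it under the One Definition Rule.
#include <vector>

namespace {  // internal linkage: this definition is private to this file
template <typename T>
struct TensorSetConstant {
  explicit TensorSetConstant(T value) : value_(value) {}
  void operator()(std::vector<T>* tensor) const {
    for (T& x : *tensor) x = value_;
  }
  T value_;
};
}  // namespace

int main() {
  std::vector<float> t(3);
  TensorSetConstant<float>(1.5f)(&t);  // fills t with 1.5
}
```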
- 08 Nov 2017, 2 commits
- 26 Oct 2017, 1 commit
  - Committed by dangqingqing
- 18 Oct 2017, 1 commit
  - Committed by Markus Kliegl
    * initial matmul operator
      Similar to np.matmul, but also has transpose_X and transpose_Y flags, and only supports tensors from rank 1 to 3 inclusive. For GPU, uses cublas?gemmStridedBatched. For CPU, uses cblas_?gemm_batch if available via MKL; otherwise a simple serial implementation that loops over the batch dimension is employed for now (see the sketch below).
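A rough, assumed sketch of the serial CPU fallback described above: one plain GEMM per batch element, driven by a loop over the batch dimension. It uses row-major buffers and omits the transpose_X/transpose_Y handling; none of it is the operator's actual code.

```cpp
// Sketch: batched matmul as a serial loop over per-batch GEMM calls.
#include <cstddef>
#include <vector>

// C[m x n] = A[m x k] * B[k x n], row-major, no transposes for brevity.
void Gemm(const float* A, const float* B, float* C,
          std::size_t m, std::size_t k, std::size_t n) {
  for (std::size_t i = 0; i < m; ++i) {
    for (std::size_t j = 0; j < n; ++j) {
      float acc = 0.0f;
      for (std::size_t p = 0; p < k; ++p) acc += A[i * k + p] * B[p * n + j];
      C[i * n + j] = acc;
    }
  }
}

// One GEMM per batch element, striding through contiguous per-batch buffers.
void BatchedMatMul(const float* A, const float* B, float* C, std::size_t batch,
                   std::size_t m, std::size_t k, std::size_t n) {
  for (std::size_t b = 0; b < batch; ++b) {
    Gemm(A + b * m * k, B + b * k * n, C + b * m * n, m, k, n);
  }
}

int main() {
  std::vector<float> A(2 * 2 * 3, 1.0f), B(2 * 3 * 2, 1.0f), C(2 * 2 * 2, 0.0f);
  BatchedMatMul(A.data(), B.data(), C.data(), /*batch=*/2, /*m=*/2, /*k=*/3, /*n=*/2);
  // Every entry of C is now 3.0 (sum of three 1*1 products).
}
```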
- 16 Oct 2017, 1 commit
  - Committed by qijun
- 15 Oct 2017, 4 commits
- 21 Sep 2017, 1 commit
  - Committed by guosheng
- 19 Sep 2017, 1 commit
  - Committed by Yu Yang
    * Also use `const DeviceContext&` all the time, to prevent `const_cast` (see the sketch below)
      Fix #4169
      Fix #3468
      Fix #3475
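A small hypothetical sketch of the convention this commit refers to: when kernels accept the context as `const DeviceContext&` and the context's methods are const, callers never need a `const_cast`. The `ScaleKernel` below is purely illustrative, not Paddle's API.

```cpp
// Sketch: pass the device context by const reference everywhere.
struct DeviceContext {
  void Wait() const {}  // the methods kernels need are const member functions
};

template <typename T>
void ScaleKernel(const DeviceContext& ctx, T* data, int n, T factor) {
  for (int i = 0; i < n; ++i) data[i] *= factor;
  ctx.Wait();  // legal on a const reference; no const_cast required
}

int main() {
  const DeviceContext ctx;  // even a const context object can be passed in
  float buf[3] = {1.0f, 2.0f, 3.0f};
  ScaleKernel(ctx, buf, 3, 2.0f);
}
```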
- 22 Aug 2017, 2 commits
- 21 Aug 2017, 4 commits
- 14 Aug 2017, 2 commits