- 24 Dec 2021, 1 commit

Committed by chentianyu03

* Combine the reduce_cuda code paths
* Support float16 in the pten reduce_mean kernel
* Replace the ReduceCudaKernel implementation with the pten reduce implementation
* Move the reduce functions into reduce_cuda_impl
* Remove unused code and headers
* Move GetReduceDim into reduce_cuda_impl
* Restore GetReduceDim in reduce_op.h
* Add a new dispatch macro (see the sketch after this list)
* Fix the pool op output not being initialized, which broke the transform to pten::DenseTensor
* Fix the uninitialized output tensor error
* Rename the new dispatch macro and format the code style
* Remove the reduce_functor_op.h file
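The new dispatch macro itself is not shown in this log. Below is a minimal, hedged sketch of the general pattern such macros implement: map a runtime dtype tag to a compile-time type, then instantiate a templated kernel once per supported type. The names `DataType`, `DISPATCH_FLOATING_TYPES`, and `ReduceMeanKernel` are illustrative assumptions, not Paddle's actual API; the commit's float16 support is omitted here for brevity.

```cpp
#include <stdexcept>

// Illustrative runtime dtype tag; Paddle's real enum is richer.
enum class DataType { FLOAT32, FLOAT64 };

// Sketch of a dtype-dispatch macro: it expands the body once per
// supported type, binding the alias T to the matching C++ type.
// The body is typically a lambda wrapping a templated kernel launch.
#define DISPATCH_FLOATING_TYPES(DTYPE, ...)                              \
  switch (DTYPE) {                                                       \
    case DataType::FLOAT32: { using T = float;  __VA_ARGS__(); break; }  \
    case DataType::FLOAT64: { using T = double; __VA_ARGS__(); break; }  \
    default: throw std::runtime_error("unsupported dtype");              \
  }

// CPU stand-in for a templated reduce kernel.
template <typename T>
void ReduceMeanKernel(const T* in, T* out, int n) {
  T sum = T(0);
  for (int i = 0; i < n; ++i) sum += in[i];
  *out = sum / static_cast<T>(n);
}

// Type-erased entry point: one switch serves every dtype.
void ReduceMean(const void* in, void* out, int n, DataType dtype) {
  DISPATCH_FLOATING_TYPES(dtype, [&] {
    ReduceMeanKernel<T>(static_cast<const T*>(in),
                        static_cast<T*>(out), n);
  });
}
```

The lambda trick keeps the call site type-erased while still compiling one kernel instantiation per dtype, so each operator needs only a single dispatch line instead of its own switch.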

- 17 Dec 2021, 1 commit

Committed by niuliling123

- 03 Dec 2021, 1 commit

Committed by ronnywang

* Refine the code structure for CUDA and ROCm (plus several content-free "update" follow-up commits; a hedged sketch of the unification pattern appears below)
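The log gives no detail on the restructuring. As an illustration only, a common way to let CUDA and ROCm share one code path is to alias the two runtime APIs behind neutral names; the `gpu*` aliases below are assumptions for illustration, not code from this commit (`PADDLE_WITH_HIP` is the guard Paddle uses for ROCm builds).

```cpp
#include <cstddef>

// Illustration only: alias the CUDA and ROCm runtime APIs behind
// neutral gpu* names so downstream code is written once.
#ifdef PADDLE_WITH_HIP
#include <hip/hip_runtime.h>
using gpuStream_t = hipStream_t;
using gpuError_t = hipError_t;
#define gpuMemcpyAsync hipMemcpyAsync
#define gpuMemcpyDeviceToHost hipMemcpyDeviceToHost
#else
#include <cuda_runtime.h>
using gpuStream_t = cudaStream_t;
using gpuError_t = cudaError_t;
#define gpuMemcpyAsync cudaMemcpyAsync
#define gpuMemcpyDeviceToHost cudaMemcpyDeviceToHost
#endif

// Written once, compiles for either backend.
gpuError_t CopyToHost(void* dst, const void* src, size_t n, gpuStream_t s) {
  return gpuMemcpyAsync(dst, src, n, gpuMemcpyDeviceToHost, s);
}
```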

- 12 Nov 2021, 1 commit

Committed by zhangkaihuo

* Fix bugs:
  1. attention: set the default value of attn_dropout_rate to None
  2. ffn: add an activation parameter

- 21 Oct 2021, 1 commit

Committed by niuliling123

* Update the implementation of reduceAnyKernel according to the kernel primitive API
* Fix a bug in ReadData, ReadDataBc, and ReadDataReduce when NX != 1 (see the sketch after this list)
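The kernel primitive API's read functions are not reproduced here. The sketch below illustrates, under assumed names and signatures, why NX != 1 needs per-element boundary checks: each thread reads NX consecutive elements, so the tail thread's slice can run past the end of the buffer.

```cpp
// Minimal sketch of a per-thread vectorized read with a tail guard.
// Each thread copies NX consecutive elements from global memory into
// registers. When NX != 1, the last thread's slice can overrun the
// buffer, so every one of the NX loads needs its own range check --
// the class of bug this commit fixes.
template <typename T, int NX>
__device__ void ReadDataSketch(T dst[NX], const T* src, int num) {
  int base = (blockIdx.x * blockDim.x + threadIdx.x) * NX;
#pragma unroll
  for (int i = 0; i < NX; ++i) {
    int idx = base + i;
    // With NX == 1 a single check suffices; with NX > 1 it must be
    // repeated per element. Out-of-range lanes are padded with zero.
    dst[i] = (idx < num) ? src[idx] : T(0);
  }
}
```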

- 23 Sep 2021, 1 commit

Committed by Li Min

- 14 Sep 2021, 1 commit

Committed by Yiqun Liu

Implement FunctionTraits to support two kinds of elementwise functors and remove some old broadcast code. (#35688; see the sketch below)
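The PR body is not included here; the snippet below is a generic reconstruction of the FunctionTraits idea, not the code from #35688. It deduces a functor's arity and argument types from its `operator()`, so one elementwise launch path can serve both unary and binary functors at compile time.

```cpp
#include <cstddef>
#include <tuple>

// Primary template: defer to the type of the functor's operator().
template <typename T>
struct FunctionTraits : FunctionTraits<decltype(&T::operator())> {};

// Specialization for const member call operators (plain functors
// and lambdas both land here).
template <typename ClassT, typename ReturnT, typename... Args>
struct FunctionTraits<ReturnT (ClassT::*)(Args...) const> {
  static constexpr std::size_t arity = sizeof...(Args);
  using return_type = ReturnT;
  template <std::size_t I>
  using arg = std::tuple_element_t<I, std::tuple<Args...>>;
};

// Example elementwise functors: one unary, one binary.
struct Relu { float operator()(float x) const { return x > 0.f ? x : 0.f; } };
struct Add  { float operator()(float a, float b) const { return a + b; } };

static_assert(FunctionTraits<Relu>::arity == 1, "Relu is unary");
static_assert(FunctionTraits<Add>::arity == 2, "Add is binary");

// A single launch path can then branch at compile time:
//   if constexpr (FunctionTraits<Func>::arity == 1) { /* unary  */ }
//   else                                            { /* binary */ }
```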

- 13 Sep 2021, 2 commits

- 08 Sep 2021, 1 commit

Committed by niuliling123

- 03 Sep 2021, 1 commit

Committed by Yiqun Liu

- 26 Aug 2021, 1 commit

Committed by Li Min

Add feed_forward for the fused attention op:
(1) Encapsulate the matmul implementation (forward and backward) used in the attention op.
(2) Implement bias_add (forward and backward) used in the attention op.
A minimal sketch of bias_add follows.
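The fused kernels themselves are not shown. This is a minimal CPU sketch of what bias_add computes, under the common convention that the bias is broadcast across rows in the forward pass and its gradient is the column-wise sum of the upstream gradient; shapes and function names here are assumptions, not the op's actual signature.

```cpp
#include <cstddef>
#include <vector>

// Forward: out[i][j] = x[i][j] + bias[j] -- bias broadcast over rows.
void BiasAddForward(const std::vector<float>& x,
                    const std::vector<float>& bias,
                    std::vector<float>* out, int rows, int cols) {
  out->resize(static_cast<std::size_t>(rows) * cols);
  for (int i = 0; i < rows; ++i)
    for (int j = 0; j < cols; ++j)
      (*out)[i * cols + j] = x[i * cols + j] + bias[j];
}

// Backward: dx = dout (addition passes the gradient through), and
// dbias[j] = sum_i dout[i][j] -- the reduction mirrors the broadcast.
void BiasAddBackward(const std::vector<float>& dout,
                     std::vector<float>* dx, std::vector<float>* dbias,
                     int rows, int cols) {
  *dx = dout;
  dbias->assign(cols, 0.0f);
  for (int i = 0; i < rows; ++i)
    for (int j = 0; j < cols; ++j)
      (*dbias)[j] += dout[i * cols + j];
}
```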