- 21 Oct 2019, 1 commit
  Committed by 石晓伟
  * update the infer shape of matmul, test=release/1.6
  * add unittests of matmul, test=release/1.6
  * change func names, test=release/1.6
- 14 Oct 2019, 1 commit
  Committed by 石晓伟
  * add data type check, test=develop
  * polish error messages, test=develop
  * polish error messages, test=develop
  * Remove support for the CPU architecture matmul, test=develop
  * fix syntax bug, test=develop
- 25 Sep 2019, 1 commit
  Committed by Bob Zhu
  * add support of matmul with multiple head even with different width and height

    The original multi-head matmul supports only the case mat_a.width == mat_b.height, in which mat_b is split horizontally. This patch extends the support to mat_a.width != mat_b.height as long as mat_a.width / head_number == mat_b.height, in which case mat_b is split vertically. For example, if A is [3, 8], B is [2, 16], and head_number is 4, then A is split into 4 matrices of [3, 2] and B is (vertically) split into 4 matrices of [2, 4]. The final result is 4 matrices of [3, 4], i.e. [3, 16] (see the sketch after this entry). test=develop
  * refactor the code of matmul with multiple head even with different width and height, test=develop
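  A minimal numpy sketch of the uneven split described above, using the shapes from the commit message; the helper name multi_head_matmul_uneven is hypothetical, and this mirrors only the shape arithmetic, not PaddlePaddle's actual kernel:

  ```python
  import numpy as np

  def multi_head_matmul_uneven(A, B, head_number):
      # Uneven case: A.shape[1] != B.shape[0], but
      # A.shape[1] / head_number == B.shape[0].
      assert A.shape[1] % head_number == 0
      assert A.shape[1] // head_number == B.shape[0]
      a_parts = np.split(A, head_number, axis=1)  # e.g. 4 matrices of [3, 2]
      b_parts = np.split(B, head_number, axis=1)  # vertical split: 4 matrices of [2, 4]
      # Multiply each pair of heads and concatenate the per-head results.
      return np.concatenate([a @ b for a, b in zip(a_parts, b_parts)], axis=1)

  A = np.ones((3, 8))
  B = np.ones((2, 16))
  print(multi_head_matmul_uneven(A, B, 4).shape)  # (3, 16)
  ```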
- 24 Jul 2019, 1 commit
  Committed by Bob Zhu
  * extend matmul op to support multiple head multiplication

    With multiple-head support, the multiplication of two big matrices is split into multiplications of several (head_number) small matrices. E.g. if Mat A is [3, 24] and Mat B is [24, 4], multiplying A and B with head_number 4 splits Mat A into 4 matrices of [3, 6] and Mat B into 4 matrices of [6, 4]. The final result is 4 matrices of [3, 4], i.e. [3, 16] (see the sketch after this entry).
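  A matching numpy sketch of the even split described above, assuming A.width == B.height; the helper name multi_head_matmul is hypothetical, and only the shape arithmetic is shown:

  ```python
  import numpy as np

  def multi_head_matmul(A, B, head_number):
      # Even case: A.shape[1] == B.shape[0]; A is split along its
      # columns and B along its rows (horizontal split).
      assert A.shape[1] == B.shape[0]
      assert A.shape[1] % head_number == 0
      a_parts = np.split(A, head_number, axis=1)  # e.g. 4 matrices of [3, 6]
      b_parts = np.split(B, head_number, axis=0)  # e.g. 4 matrices of [6, 4]
      return np.concatenate([a @ b for a, b in zip(a_parts, b_parts)], axis=1)

  A = np.ones((3, 24))
  B = np.ones((24, 4))
  print(multi_head_matmul(A, B, 4).shape)  # (3, 16)
  ```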
- 21 Mar 2019, 1 commit
  Committed by phlrain
- 18 Sep 2018, 1 commit
  Committed by sneaxiy
- 17 Sep 2018, 1 commit
  Committed by sneaxiy
- 10 May 2018, 1 commit
  Committed by yuyang18
- 08 May 2018, 2 commits
- 07 May 2018, 1 commit
  Committed by Yu Yang
- 19 Apr 2018, 1 commit
  Committed by Yang Yang(Tony)
  * script to add semicolon
  * fix typo
- 17 Apr 2018, 1 commit
  Committed by Yang Yang
- 12 Apr 2018, 1 commit
  Committed by Siddharth Goyal
  * Fix cpplint errors, round2
  * Fix pointer issue
- 12 Feb 2018, 1 commit
  Committed by qingqing01
- 10 Feb 2018, 2 commits
- 21 Jan 2018, 1 commit
  Committed by chengduoZH
- 19 Jan 2018, 1 commit
  Committed by chengduoZH
- 18 Jan 2018, 3 commits
  Committed by chengduoZH
  Committed by chengduoZH
  Committed by chengduoZH
- 20 Dec 2017, 1 commit
  Committed by Yu Yang
  * Move framework.proto to proto namespace
  * Fix compile
  * Fix compile
  * Fix Compile
- 12 Dec 2017, 1 commit
  Committed by QI JUN

    There are mainly the following fixes:
    - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
    - remove the `eigen_device` interface in base class `DeviceContext`
    - remove the `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
    - remove unused `platform::EigenDeviceConverter`
    - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
    - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
- 05 Nov 2017, 1 commit
  Committed by kexinzhao
  * fix m_ops
  * fix activation op
- 18 Oct 2017, 1 commit
  Committed by Markus Kliegl
  * initial matmul operator

    Similar to np.matmul, but it also has transpose_X and transpose_Y flags, and it only supports tensors of rank 1 to 3 inclusive. For GPU, it uses cublas?gemmStridedBatched. For CPU, it uses cblas_?gemm_batch if available via MKL; otherwise, a simple serial implementation that loops over the batch dimension is employed for now (see the sketch after this entry).
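  A minimal numpy sketch of the described semantics, assuming the transpose flags apply to the last two dimensions before a (batched) matmul; matmul_ref is a hypothetical reference illustration, not the operator's implementation:

  ```python
  import numpy as np

  def matmul_ref(X, Y, transpose_X=False, transpose_Y=False):
      # Optionally transpose the last two dimensions of each input,
      # then let np.matmul handle the batched (rank-3) case.
      if transpose_X:
          X = np.swapaxes(X, -1, -2)
      if transpose_Y:
          Y = np.swapaxes(Y, -1, -2)
      return np.matmul(X, Y)

  # Rank-3 (batched) example: per batch entry, [3, 4] is transposed
  # to [4, 3], then multiplied by [3, 5] to give [4, 5].
  X = np.random.rand(2, 3, 4)
  Y = np.random.rand(2, 3, 5)
  print(matmul_ref(X, Y, transpose_X=True).shape)  # (2, 4, 5)
  ```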