- 26 Feb 2021, 1 commit
Submitted by Wilber
- 04 Feb 2021, 1 commit
Submitted by 石晓伟
- 20 Jan 2021, 1 commit
Submitted by QingshuChen
- 19 Jan 2021, 1 commit
Submitted by taixiurong
* support transformer v2.0
* fix range op crash in dygraph xpu place
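The fix above touches the range op on the XPU place under dygraph. A minimal sketch of that call path, assuming a Paddle build with Kunlun (XPU) support; this is illustrative only, not the original regression test:

```python
import paddle

# Guard on an XPU build; paddle.arange lowers to the range op, which is the
# path the "range op crash in dygraph xpu place" fix above targets.
if paddle.is_compiled_with_xpu():
    paddle.set_device('xpu')
    t = paddle.arange(start=0, end=10, step=2, dtype='int64')
    print(t.numpy())
```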
- 11 Jan 2021, 1 commit
Submitted by QingshuChen
* add aarch64 and sunway kunlun lib
* optimize elementwise_add for kunlun
* update kunlun dependence
* minor follow-ups
- 29 Dec 2020, 1 commit
Submitted by liuyuhui
* [Kunlun] PR1: Support one Kunlun card training in parallel executor (#29337)
* [Kunlun] PR2: Support MultiDevicePass and BKCL in parallel executor (#29574)
* [Kunlun] bug fix of PR2: Support MultiDevicePass and BKCL in parallel executor (#29926)
* add bkcl.so in whl for kunlun (#29947)
* [Kunlun] bug fix of PR2: Support MultiDevicePass and BKCL in parallel executor (#29961)
Co-authored-by: QingshuChen <qingshu.chen714@gmail.com>
- 17 Dec 2020, 1 commit
Submitted by TTerror
* fix expand && concat/transpose to new api
* update xpu_header
* update activation op on kunlun
* add nearest_interp on kunlun
* update error message
- 16 Dec 2020, 1 commit
Submitted by QingshuChen
* support roi_align & affine_channel for kunlun
* minor
- 15 Dec 2020, 1 commit
Submitted by QingshuChen
* support mobilenet for kunlun (#29458)
* add xpu ops for training transformer in kunlun (#29539)
* 1. fix matmul bug 2. add one hot
* add xpu error msg
Co-authored-by: procr <procrboo@gmail.com>
Co-authored-by: taixiurong <taixiurong@126.com>
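A minimal sketch of exercising the MobileNet support mentioned above through the Paddle 2.x Python API. The model constructor and the 'xpu' device string are standard Paddle APIs, but running this on Kunlun assumes an XPU build; it is illustrative only, not code from these commits:

```python
import paddle
from paddle.vision.models import mobilenet_v1

# Fall back to CPU when Paddle was not built with Kunlun (XPU) support.
paddle.set_device('xpu' if paddle.is_compiled_with_xpu() else 'cpu')

model = mobilenet_v1(pretrained=False)
model.eval()
x = paddle.rand([1, 3, 224, 224])
with paddle.no_grad():
    logits = model(x)
print(logits.shape)  # [1, 1000]
```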
- 08 Dec 2020, 1 commit
Submitted by liuyuhui
* add deformable_conv op on xpu (#29234)
* rebase develop
* update deformable_conv op on xpu
* update kunlun conv2d/softmax/elementwise implementation (#29229)
* update conv2d & softmax to new xpu api
* remove useless comments
* remove softmax xpu op
* update kunlun softmax
* update xpu unit test
* fix elementwise_grad bug for kunlun
* support global pooling for kunlun (#29293)
* update reduce_sum op on xpu (#29367)
* support running on xpu
* fix expand/uniform_random && concat/transpose to new api on xpu (#29280)
* fix expand && concat/transpose to new api
* update uniform_random_op
* update xpu_header
* 1. fix elementwise ops' bug 2. fix softmax_with_cross_entropy_op 3. add bilinear_interp_op (#29448)
Co-authored-by: root <root@bjhw-sys-rpm0223.bjhw.baidu.com>
Co-authored-by: 卖鱼的哲学 <tangzhiyi11@users.noreply.github.com>
Co-authored-by: QingshuChen <qingshu.chen714@gmail.com>
Co-authored-by: taixiurong <taixiurong@126.com>
- 20 Nov 2020, 1 commit
Submitted by taixiurong
* 1. add xpu slice op 2. add xpu top_k op 3. modify xpu cast to new api
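A minimal sketch of the Python-level ops that the new XPU slice, top_k, and cast kernels back, assuming an XPU build of Paddle; illustrative, not taken from the commit:

```python
import paddle

paddle.set_device('xpu' if paddle.is_compiled_with_xpu() else 'cpu')

x = paddle.arange(12, dtype='float32').reshape([3, 4])

s = paddle.slice(x, axes=[0, 1], starts=[0, 1], ends=[2, 3])  # slice kernel
values, indices = paddle.topk(x, k=2, axis=1)                 # top_k kernel
i = paddle.cast(values, 'int32')                              # cast kernel
print(s.shape, values.shape, i.dtype)
```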
- 06 Nov 2020, 1 commit
Submitted by QingshuChen
* test=kunlun
- 27 Sep 2020, 1 commit
Submitted by QingshuChen
* support elementwise add, activation, matmul on Baidu Kunlun
* reconstruct the xpu directory
* minor follow-ups (test=kunlun)
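A minimal sketch of the three op families this entry enables on Baidu Kunlun, driven through the Paddle dygraph API; assumes an XPU build, otherwise it runs on CPU (illustrative only, not code from the commit):

```python
import paddle
import paddle.nn.functional as F

paddle.set_device('xpu' if paddle.is_compiled_with_xpu() else 'cpu')

x = paddle.rand([4, 8])
y = paddle.rand([4, 8])
w = paddle.rand([8, 2])

z = paddle.add(x, y)       # elementwise_add kernel
a = F.relu(z)              # activation kernel
out = paddle.matmul(a, w)  # matmul kernel
print(out.shape)           # [4, 2]
```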
- 21 Aug 2020, 1 commit
Submitted by QingshuChen
* support Baidu AI Accelerator
* support xpu op in separate file
* update XPU error message and remove duplicated code
* minor follow-ups (test=kunlun)