- 17 Dec 2020, 3 commits
-
-
Committed by ShenLiang
* Fix the download bug in the case of multiple machines (#29551)
* fix the download bug
* add sort for ips
* Fix bug of matmul_v2 for broadcast case (#29599)
* fix bug of matmul_v2 for broadcast
* Rebuild group automatically in dynamic graph distributed (#29255)
* add tensor_indices in AssignGroupBySize
* add rebuild group in reducer
* fix error message of gather nd (#29521)
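The broadcast case fixed here follows the usual batched-matmul convention: leading (batch) dimensions broadcast like elementwise ops, while the trailing two dimensions obey ordinary matrix-multiplication shape rules. A minimal NumPy sketch of those semantics (illustrative only, not Paddle's implementation):

```python
import numpy as np

# Batch dims (2, 1) and (5,) broadcast elementwise to (2, 5);
# the last two dims multiply as matrices: (3, 4) @ (4, 6) -> (3, 6).
a = np.random.rand(2, 1, 3, 4)
b = np.random.rand(5, 4, 6)
c = np.matmul(a, b)
print(c.shape)   # (2, 5, 3, 6)
```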
-
Committed by arlesniak
Fix #27935 (comment) reported by QA @OliverLPH ("Could you add some MKLDNN-related print logging when FLAGS_use_mkldnn is used?")
-
Committed by TTerror
* fix expand && concat/transpose to new api
* update xpu_header
* update activation op on kunlun
* add nearest_interp on kunlun
* update error message
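Nearest-neighbor interpolation, the op added here for kunlun, simply maps each output pixel to the closest input pixel. A simplified NumPy sketch (`nearest_interp` is an illustrative helper, not the actual kernel):

```python
import numpy as np

def nearest_interp(x, out_h, out_w):
    """Nearest-neighbor resize of a 2-D (H, W) array."""
    in_h, in_w = x.shape
    rows = np.arange(out_h) * in_h // out_h   # nearest source row per output row
    cols = np.arange(out_w) * in_w // out_w   # nearest source col per output col
    return x[rows[:, None], cols]

img = np.arange(4).reshape(2, 2)
print(nearest_interp(img, 4, 4))   # each source pixel repeated 2x2
```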
-
- 16 Dec 2020, 1 commit
-
-
Committed by QingshuChen
* support roi_align & affine_channel for kunlun
* minor
-
- 15 Dec 2020, 1 commit
-
-
Committed by QingshuChen
* support mobilenet for kunlun (#29458)
* add xpu ops for training transformer in kunlun (#29539)
* fix matmul bug
* add one hot
* add xpu error msg
Co-authored-by: procr <procrboo@gmail.com>
Co-authored-by: taixiurong <taixiurong@126.com>
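The one-hot op added alongside the matmul fix expands integer class ids into indicator vectors. A minimal NumPy sketch of the semantics (illustrative, not the xpu kernel):

```python
import numpy as np

def one_hot(indices, depth):
    """Map each integer id in `indices` to a row with a single 1.0."""
    out = np.zeros((len(indices), depth), dtype=np.float32)
    out[np.arange(len(indices)), indices] = 1.0
    return out

print(one_hot([1, 0, 3], 4))
```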
-
- 08 Dec 2020, 3 commits
-
-
Committed by liuyuhui
* add deformable_conv op on xpu (#29234)
* update deformable_conv op on xpu
* update kunlun conv2d/softmax/elementwise implementation (#29229)
* update conv2d & softmax to new xpu api
* remove useless comments
* remove softmax xpu op
* update kunlun softmax
* update xpu unittest
* fix elementwise_grad bug for kunlun
* support global pooling for kunlun (#29293)
* update reduce_sum op on xpu (#29367)
* support running on xpu
* fix expand/uniform_random && concat/transpose to new api on xpu (#29280)
* update uniform_random_op
* update xpu_header
* fix elementwise ops' bug; fix softmax_with_cross_entropy_op; add bilinear_interp_op (#29448)
Co-authored-by: root <root@bjhw-sys-rpm0223.bjhw.baidu.com>
Co-authored-by: 卖鱼的哲学 <tangzhiyi11@users.noreply.github.com>
Co-authored-by: QingshuChen <qingshu.chen714@gmail.com>
Co-authored-by: taixiurong <taixiurong@126.com>
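Global pooling, one of the ops supported above, collapses each channel's entire spatial extent into a single value. An illustrative NumPy sketch of global average pooling (not the kunlun implementation):

```python
import numpy as np

def global_avg_pool(x):
    """Reduce an (N, C, H, W) tensor to (N, C, 1, 1) by averaging over H and W."""
    return x.mean(axis=(2, 3), keepdims=True)

x = np.ones((2, 3, 8, 8))
print(global_avg_pool(x).shape)   # (2, 3, 1, 1)
```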
-
Committed by Leo Chen
-
Committed by Zhang Ting
-
- 07 Dec 2020, 2 commits
-
-
Committed by wangchaochaohu
-
Committed by tangwei12
* fix gpu emb out of range
  Change-Id: I5794ac73bd634d5ea069a6fbbd914274b6d6b7bf
* fix doc
  Change-Id: I5a3350b2930a9ab2f52116c192b087307faf8fdf
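The fix guards embedding lookups against out-of-range ids, which would otherwise read past the table on GPU. A hypothetical NumPy sketch of the kind of bounds check involved (`safe_embedding_lookup` is an illustrative name, not Paddle's API):

```python
import numpy as np

def safe_embedding_lookup(table, ids):
    """Gather rows of `table` by id, rejecting ids outside [0, vocab_size)."""
    ids = np.asarray(ids)
    if (ids < 0).any() or (ids >= table.shape[0]).any():
        raise IndexError("embedding id out of range [0, %d)" % table.shape[0])
    return table[ids]

table = np.arange(12.0).reshape(4, 3)   # vocab_size=4, emb_dim=3
print(safe_embedding_lookup(table, [1, 3]).shape)   # (2, 3)
```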
-
- 05 Dec 2020, 1 commit
-
-
Committed by chentianyu03
* fix random failure of complex matmul
* Make transpose, trace, kron, reshape, sum op support complex type (#29321)
* add complex64 and complex128 type; add +-*/@ and slice operator for complex types
* add test cases for complex elementwise, matmul and getitem unittest
* add test cases for complex types
* add test cases for complex matmul unittest
* kron, reshape, transpose support complex types
* sum and trace op support complex types
* add test case of sum and trace op
* fix the bug of imag part of complex not initialized
* format code style
* kron support type promotion; modify test cases
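The complex-type coverage listed above mirrors familiar NumPy semantics; for reference, how matmul, trace, and kron behave on complex64 inputs:

```python
import numpy as np

a = np.array([[1 + 2j, 0 + 1j],
              [3 - 1j, 2 + 0j]], dtype=np.complex64)

print(a.T @ a)                        # matmul with complex entries
print(np.trace(a))                    # sum of the diagonal: (3+2j)
print(np.kron(a, np.eye(2)).shape)    # Kronecker product: (4, 4)
```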
-
- 04 Dec 2020, 2 commits
-
-
Committed by Shang Zhizhou
* fix tensorrt output shape error
* fix unittest tensorrt_engine_op_test
* fix code style for unittest
-
Committed by Chen Weihang
* basic impl of type promotion
* add comment & another test case
* fix complex bugs & support python op promote type
* fix failed unittests & polish code
* add unittest for coverage
* change to only promote complex type
* polish code details
* polish several comments
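Promoting only to complex, as this commit settles on, matches NumPy's behavior: mixing a real and a complex operand yields a complex result. An illustrative check (NumPy rules, shown here as an analogy rather than Paddle's exact table):

```python
import numpy as np

x = np.array([1.0, 2.0], dtype=np.float32)
y = np.array([1 + 1j, 2 - 1j], dtype=np.complex64)

print((x + y).dtype)                              # complex64
print(np.result_type(np.float64, np.complex64))   # complex128
```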
-
- 03 Dec 2020, 2 commits
-
-
Committed by Leo Chen
-
Committed by Zhen Wang
* Add pure fp16 training with master weights (#27712)
* add the weight decay func for the momentum op
* Add the multi_precision function in Momentum Optimizer
* Make sure that the initial values of master weights are the same as the fp16 weights
* add static loss scaling
* add the rescale_grad function in the pure fp16 training
* use the original momentum updating method
* Polish some code, such as variable names
* add docstrings for apis
* update the var creation details of _create_master_weight
* do not modify code about imperative momentum updating
* Fix the error of test_dist_sparse_tensor_load_momentum UT
* add unit test for multi precision fp16 training
* add more unit tests for CI
* Use lower threshold values for allclose comparison in the test_multi_precision_fp16_train UT
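The master-weight scheme above keeps an fp32 copy of each parameter for the update while computing in fp16, with static loss scaling to keep small gradients representable. A NumPy sketch of one such step (names and constants are illustrative, not Paddle's implementation):

```python
import numpy as np

loss_scale = 1024.0                                  # static loss scaling factor
master_w = np.array([0.1, -0.2], dtype=np.float32)   # fp32 master weights
velocity = np.zeros_like(master_w)                   # momentum state, fp32
w16 = master_w.astype(np.float16)                    # fp16 weights used in compute

# Backward produces scaled fp16 gradients; unscale them in fp32.
scaled_grad16 = np.array([0.001, 0.002], dtype=np.float16) * loss_scale
grad = scaled_grad16.astype(np.float32) / loss_scale

velocity = 0.9 * velocity + grad                     # momentum update in fp32
master_w -= 0.1 * velocity
w16 = master_w.astype(np.float16)                    # refresh the fp16 copy
print(w16.dtype, master_w.dtype)
```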
-
- 01 Dec 2020, 2 commits
-
-
Committed by chentianyu03
* add complex64 and complex128 type; add +-*/@ and slice operator for complex types
* add test cases for complex elementwise, matmul and getitem unittest
* add test cases for complex types
* add test cases for complex matmul unittest
-
Committed by Wilber
-
- 30 Nov 2020, 5 commits
-
-
Committed by Adam Osewski
- Make sure that oneDNN memory descriptors are created only once, at the first iteration.
-
Committed by 123malin
* fix parameter prefetch & device guard
Co-authored-by: MrChengmo <cmchengmo@163.com>
Co-authored-by: chengmo <chengmo@baidu.com>
-
Committed by 123malin
* test=develop, optimize async prefetch
-
Committed by WangXi
-
Committed by Jack Zhou
Fix a gcc 7.4 compile bug for the GRU
-
- 28 Nov 2020, 1 commit
-
-
Committed by wangchaochaohu
-
- 27 Nov 2020, 4 commits
-
-
Committed by lilong12
Update expand_as op to use the shape of the target tensor instead of the target tensor itself (#29020)
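Needing only the target's shape means expand_as reduces to a broadcast. An illustrative NumPy sketch (not Paddle's implementation):

```python
import numpy as np

x = np.array([[1.0], [2.0]])      # shape (2, 1)
target = np.zeros((2, 3))

# Only target.shape matters; the target's values are never read.
y = np.broadcast_to(x, target.shape)
print(y)
```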
-
Committed by Jack Zhou
Add an Eigen GRU and fix the dropout bug in the RNN
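For orientation, a GRU step computes update and reset gates plus a candidate state. A NumPy sketch of one cell step (the gate layout `[z | r | c]` and all names are illustrative assumptions, not the Eigen implementation added here):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_cell(x, h, Wx, Wh, b):
    """One GRU step; gates packed as [update z | reset r | candidate c]."""
    d = h.shape[-1]
    gx = x @ Wx + b          # input projections for all three gates
    gh = h @ Wh              # hidden projections for all three gates
    z = sigmoid(gx[..., :d] + gh[..., :d])
    r = sigmoid(gx[..., d:2 * d] + gh[..., d:2 * d])
    c = np.tanh(gx[..., 2 * d:] + r * gh[..., 2 * d:])
    return z * h + (1.0 - z) * c

h = gru_cell(np.ones((1, 4)), np.zeros((1, 3)),
             np.zeros((4, 9)), np.zeros((3, 9)), np.zeros(9))
print(h.shape)   # (1, 3)
```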
-
Committed by arlesniak
-
Committed by Shang Zhizhou
* remove -DSUPPORTS_CUDA_FP16 in cuda.cmake
* compile with cuda9
* add some unittests
* add unittest for trt plugin swish && split
* update ernie unittest
* fix some error messages
* remove repeated judgement of CUDA version in mbEltwiseLayerNormOpConverter
* fix compile error when CUDA_ARCH_NAME < Pascal
* update unittest timeout
* update error msg
* fix code style
* add some comments
* add define IF_CUDA_ARCH_SUPPORT_FP16
* rename IF_CUDA_ARCH_SUPPORT_FP16 to CUDA_ARCH_FP16_SUPPORTED
-
- 26 Nov 2020, 2 commits
-
-
Committed by Noel
Fix the docs for some ops
-
Committed by joanna.wozna.intel
* Add bf16 pool2d and unify bf16 unit tests
* Add change default ops test
-
- 25 Nov 2020, 4 commits
-
-
Committed by joejiong
Add uint8 support for the reshape operator
-
Committed by taixiurong
-
Committed by joejiong
Simple code cleanup
-
Committed by wawltor
Remove the Eigen threadpool for a speed-up
-
- 24 Nov 2020, 2 commits
- 23 Nov 2020, 3 commits
-
-
Committed by furnace
* refactor momentum op to combine weight_decay (scale op and sum op)
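Combining weight decay into the momentum op replaces a separate scale-and-sum pass with one fused update: the decay term is folded into the gradient before the velocity update. A NumPy sketch under that assumption (illustrative names and hyperparameters, not Paddle's kernel):

```python
import numpy as np

def momentum_step(param, grad, velocity, lr=0.1, mu=0.9, weight_decay=1e-2):
    """Fused momentum update: decay folded into the gradient, no extra ops."""
    grad = grad + weight_decay * param   # fused L2 weight decay
    velocity = mu * velocity + grad
    param = param - lr * velocity
    return param, velocity

p, v = momentum_step(np.array([1.0]), np.array([0.0]), np.array([0.0]))
print(p)   # decay alone moves the weight: 1 - 0.1 * 0.01 = 0.999
```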
-
Committed by Jacek Czaja
-
Committed by yaoxuefeng
-
- 20 Nov 2020, 2 commits
-
-
Committed by gongweibao
-
Committed by Chen Weihang
-