- 12 Oct 2021, 15 commits
Committed by Qi Li
* [NPU] fix elementwise_mul to support broadcast, test=develop
* remove debug files, test=develop
* add axis support, test=develop
Committed by Zeng Jinle
Committed by HydrogenSulfate
Committed by HydrogenSulfate
Committed by HydrogenSulfate
Committed by HydrogenSulfate
Committed by HydrogenSulfate
Committed by HydrogenSulfate
Committed by HydrogenSulfate
Committed by HydrogenSulfate
Committed by Qi Li
* [NPU] add int64 kernel for scale and slice, test=develop
* remove int64 for scale, test=develop
Committed by JingZhuangzhuang
Committed by Haohongxiang
* fix calling bug of HybridParallelClipGrad
* fix bugs of HybridParallelClipGrad
* add unittest of pp with HybridParallelClipGrad
* fix bugs in mp_layers.py
* update
* fix bugs in pp_layers.py
* update
Committed by LJQ❤️
Committed by Aurelius84
* Fix stop_gradient in RunProgramOp
* fix reference
- 11 Oct 2021, 17 commits
Committed by danleifeng
* heterps:add fuse_allreduce op; test=develop
* add program_mode in minimize for pslib mode; test=develop
Committed by jakpiase
Committed by Zeng Jinle
* add FLAGS_allreduce_record_one_event
* add more comments
* fix ut
* improve coverage
* fix ut, improve coverage
Committed by Liu-xiandong
Add paddle.nn.functional.sparse_attention API. This PR mainly adds a Python-layer wrapper around the sparse_attention functionality; the main OP code is in PR #35676. In addition, unit tests are added for the wrapped Python API.
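A minimal usage sketch of the wrapped API, assuming the signature `sparse_attention(query, key, value, sparse_csr_offset, sparse_csr_columns)` with 4-D query/key/value tensors of shape [batch, heads, seq_len, head_dim] and a CSR-encoded attention mask; the shapes and mask below are illustrative, and the op runs only on supported CUDA GPUs.

```python
import paddle
import paddle.nn.functional as F

# 1 batch, 1 head, sequence length 4, head dim 2 (illustrative sizes only)
q = paddle.rand([1, 1, 4, 2], dtype='float32')
k = paddle.rand([1, 1, 4, 2], dtype='float32')
v = paddle.rand([1, 1, 4, 2], dtype='float32')

# CSR description of the sparse attention pattern: row i attends to the columns
# listed between offset[i] and offset[i + 1].
offset = paddle.to_tensor([[[0, 2, 4, 6, 8]]], dtype='int32')
columns = paddle.to_tensor([[[0, 1, 0, 1, 2, 3, 2, 3]]], dtype='int32')

out = F.sparse_attention(q, k, v, offset, columns)  # shape [1, 1, 4, 2]
```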
Committed by Yuang Liu
Committed by zlsh80826
Sparse Tensor Cores for convolution require the input channel dimension to be 2:4 structured sparse, so we have to mask the input channel dimension in order to use Sparse Tensor Cores.
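For illustration only (this is not the plugin's implementation), the sketch below shows what a 2:4 structured-sparsity mask along the input-channel axis looks like: in every group of 4 consecutive input-channel weights, the 2 smallest-magnitude values are zeroed.

```python
import numpy as np

def mask_2_to_4(weights, axis=1):
    """Zero the 2 smallest-magnitude values in every group of 4 along `axis`.

    `weights` is assumed to be a conv kernel of shape [out_c, in_c, kh, kw]
    with in_c divisible by 4; `axis=1` is the input-channel dimension.
    """
    w = np.moveaxis(weights, axis, -1)                # input channels last
    shape = w.shape
    groups = w.reshape(-1, 4)                         # groups of 4 input channels
    keep = np.argsort(np.abs(groups), axis=1)[:, 2:]  # indices of the 2 largest
    mask = np.zeros_like(groups)
    np.put_along_axis(mask, keep, 1.0, axis=1)
    return np.moveaxis((groups * mask).reshape(shape), -1, axis)

w = np.random.randn(8, 16, 3, 3).astype(np.float32)
w_24 = mask_2_to_4(w)  # every 4 consecutive input-channel weights now contain >= 2 zeros
```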
Committed by caozhou
* add reshard module
* fix conflict
* update reshard module
* update and add unitest
* update reshard module and unitest
* add more unitests
Committed by yaoxuefeng
Committed by wangxinxin08
* enhance yolobox plugin
Committed by Qi Li
* [NPU] fix matmul_v2 and utils.run_check, test=develop
* remove debug files, test=develop
* fix install_check, test=develop
* fix doc, test=develop
* fix review comments, test=develop
Committed by Qi Li
* [NPU] fix set_value, test=develop
* fix typo, test=develop
* fix typo, test=develop
Committed by Feiyu Chan
fix: `-1` is used when fft's axis is `0`
Committed by 李季
Committed by wangxinxin08
* add mish trt plugin, compile & install success, run error. test=develop
* modify code according to review
* add TRT_NOEXCEPT for mish trt plugin
* add unittest for mish trt plugin
* remove unnecessary check of mish in op_teller.cc
* fix some problem of trt8
* add check and modify unittest while converting mish to trt plugin
Co-authored-by: dengkaipeng <dengkaipeng@baidu.com>
Committed by baoachun
* add skip case in trt converter ut
* disable group_norm trt plugin
Committed by Huihuang Zheng
Add use_cinn flag and use it to control whether we run PaddlePaddle using CINN. Also add:
* Replace PaddlePaddle graph with a CINN graph in a pass
* PE Method to feed data and run the graph by CINN
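A hedged sketch of how the new switch might be turned on; treating `FLAGS_use_cinn` as the flag name and assuming it can be set through the usual GFlags environment variable are inferences from this commit message, not a documented interface.

```python
import os

# Assumption: the flag is exposed as a GFlag and read at startup.
os.environ["FLAGS_use_cinn"] = "1"

import paddle

paddle.enable_static()
# Build and run the program as usual; per this commit, a graph pass replaces the
# PaddlePaddle graph with a CINN graph and the ParallelExecutor feeds data to it.
```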
Committed by JingZhuangzhuang
- 09 Oct 2021, 6 commits
Committed by Yiqun Liu
Committed by From00
* Add new API tensordot
* Set timeout value 400 for UT; Fix format for EN docs
* Set timeout value 1000 for UT; Fix format for EN docs
* Remove some input check
* Coding style improve: don't compare boolean values to True or False using ==
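A minimal usage sketch of the new API, assuming `paddle.tensordot(x, y, axes=2)` follows the usual tensordot convention of contracting the last `axes` dimensions of `x` with the first `axes` dimensions of `y`.

```python
import paddle

x = paddle.arange(24, dtype='float64').reshape([2, 3, 4])
y = paddle.arange(24, dtype='float64').reshape([3, 4, 2])

# Contract x's trailing (3, 4) dims with y's leading (3, 4) dims.
z = paddle.tensordot(x, y, axes=2)
print(z.shape)  # [2, 2]
```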
Committed by zhiboniu
Committed by zhiboniu
* update fft api path
* add sample code for ihfft2
Co-authored-by: chenfeiyu <chenfeiyu@baidu.com>
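A small sketch of `paddle.fft.ihfft2`, under the assumption that, like the 1-D `ihfft`, it maps a real 2-D signal to a complex half-spectrum whose last transformed axis has length n//2 + 1.

```python
import paddle

x = paddle.rand([4, 4], dtype='float64')  # real 2-D signal
spec = paddle.fft.ihfft2(x)

print(spec.shape)  # expected [4, 3]: last axis shrinks to n//2 + 1
print(spec.dtype)  # complex output (complex128 for float64 input)
```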
Committed by zhaoyingli
* support ClipGradByGlobalNorm in sharding
* support ClipGradByGlobalNorm in sharding
* test=allcase
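For context, a minimal non-distributed sketch of `paddle.nn.ClipGradByGlobalNorm` used as an optimizer's `grad_clip`; the sharding integration this commit adds is configured through the fleet distributed strategy and is not shown here.

```python
import paddle

model = paddle.nn.Linear(16, 4)
clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)
opt = paddle.optimizer.Momentum(learning_rate=0.01,
                                parameters=model.parameters(),
                                grad_clip=clip)

x = paddle.rand([8, 16])
loss = model(x).mean()
loss.backward()
opt.step()        # gradients are rescaled so their global norm is at most 1.0
opt.clear_grad()
```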
Committed by wuhuanzhou
For attributes that do not meet the expected conditions after `__getattr__` is overridden, raise AttributeError in every case, so the behavior is consistent with the non-overridden version.
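A generic illustration of the convention this change follows (the class and attributes below are hypothetical, not Paddle's actual code): an overridden `__getattr__` should raise `AttributeError` for names it cannot handle, so callers and `hasattr` behave exactly as with the default lookup.

```python
class Config:
    _known = {"learning_rate": 0.01}

    def __getattr__(self, name):
        # __getattr__ is only called after normal attribute lookup fails.
        try:
            return self._known[name]
        except KeyError:
            # Raise the same error the un-overridden lookup would raise.
            raise AttributeError(
                f"'{type(self).__name__}' object has no attribute '{name}'")

cfg = Config()
print(cfg.learning_rate)         # 0.01
print(hasattr(cfg, "momentum"))  # False, because AttributeError propagates
```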
- 08 Oct 2021, 2 commits
Committed by Zeng Jinle
* support CUDA Graph on PE
* add ut, fix CI compile
* reduce memory consumption
* fix CUDA 10 CI
* improve coverage
* improve python coverage
Committed by yaoxuefeng