- 01 Dec 2022, 8 commits

Submitted by minghaoBD:
* fuse-mt passes compatible with structured pruning

Submitted by 201716010711

Submitted by HongyuJia:
* fix typo error
* pass CI-coverage

Submitted by Matsumoto Ruko

Submitted by risemeup1

Submitted by zhangyikun02

Submitted by Nyakku Shigure

Submitted by Jianghai:
* c_embedding
* add annotations
* revision
* revise attrs
- 30 Nov 2022, 26 commits

Submitted by HongyuJia

Submitted by wanghuancoder

Submitted by Qi Li

Submitted by cyber-pioneer

Submitted by zyfncg:
* fix error log for yaml check
* remove grad_op of increment

Submitted by Netpunk:
* migrate transpose_op.cu.h and gpu_utils.h
* format code style
* fix some problems
* format code
* reset transpose_op.cc
* test commit
* recover transpose_op.h
* delete transpose_op.h
* adjust header file order in transpose_op.cc

Submitted by ccrrong

Submitted by wasupandceacar

Submitted by ShenLiang:
* fix bug of pylayer
* fix bug

Submitted by Vvsmile:
* replace log with paddle.log
* replace log with paddle.nn.functional.log
* fix the code style of remove_log
* fix the ImportError of log
* fix the modification error in dist_transformer.py
* fix Static-Check error

Submitted by yunyaoXYY:
[Clean fluid] Clean ones, reverse, save, save_combine, load_combine, has_inf, zeros_like and ones_like (#48424)
* clean fluid ones
* clean ones_like
* clean zeros_like
* clean save, save_combine, load_combine
* clean reverse
* clean has_inf
* clean reverse tests

Submitted by wanghuancoder

Submitted by Aurelius84:
* [Perf] fix interpolate OutSize data transform problem
* fix code style
* fix grad
* fix phi kernel

Submitted by MarDino:
* add activation support
* fix cublasLt bug
* remove useless code and fix test random range

Submitted by feng_shuai

Submitted by Yuanle Liu

Submitted by zhangbo9674:
* add fuse act add grad pass
* polish code
* refine code
* add test
* refine code

Submitted by zyfncg:
* rename some kernel names
* fix compile problem

Submitted by zyfncg:
* fix bug of eigen_dependency
* fix xpu compile

Submitted by Roc:
* fix recompute for stop_gradient and inplace
* fix ut
* update

Submitted by RichardWooSJTU:
* delete unnecessary shape and slice op
Co-authored-by: Your Name <you@example.com>

Submitted by yuehuayingxueluo:
* clear fluid api: sigmoid_cross_entropy_with_logits
* fix loss.py
* change paddle.nn.functional.sigmoid_cross_entropy_with_logits
* delete sigmoid_cross_entropy_with_logits
* fix binary_cross_entropy_with_logits
* fix CI bug

Submitted by Aurelius84:
* [Fluid clean] migrate base/call/print et al. transformers into paddle.jit
* fix phi kernel
* Revert "fix phi kernel" (reverts commit eff8891c7efda6e49799edbcaef2ca50379d50ef)

Submitted by james:
Some legacy code still uses xpu_wait() for stream sync, but it only syncs the default stream. This PR replaces those calls with dev_ctx.Wait() to ensure the correct stream is always used.
Submitted by zhangyikun02

Submitted by wangzhen38:
* [remove fluid] under unittests
- 29 Nov 2022, 6 commits

Submitted by lzy:
* fix mma_tensorcore (__CUDA_ARCH__)
* disable tensorcore by default, because the __CUDA_ARCH__ check causes undefined behavior in some environments; it can be enabled manually on a machine that supports tensorcore
Submitted by HongyuJia

Submitted by HongyuJia

Submitted by HongyuJia

Submitted by Nyakku Shigure

Submitted by Paulina Gacek:
* transpose2 kernel migrated
* got rid of mutable_data
* x modification added
* ops added in extra info file
* formatting fix
* 2 fuse passes with transpose2 commented
* number of outs changed in 2 passes, passes uncommented
* changes in passes reverted
* transpose changed in operator.cc
* MKLDNN check in operator.cc
* transpose fixes
* fix deleted from operator
* template corrected
Co-authored-by: Paulina Gacek <paulinagacek@intel.com>