- 16 March 2023 (8 commits)
-
-
Committed by JZ-LIANG
* update env setting
* update pass logic
* dist op support bf16
* backward cast update
* update setting
* update backward
* revert amp pass
* update fp16 backward logic
* register c_embedding bf16
* revert engine
* add unitest
* add unitest
* update unitest
* update cmake
* update math
* update math.py
* update unitest
* update unitest
* revise unitest
* revise unitest
* update unitest
* update unitest
* update unitest
-
Committed by wenbin
* split pass
* fix compile
* fix ut
* more time
* modify ut
* reduce dim
* fix compile
* reshape weight
* tensor
* remove enforce
* static shape ut
* batchsize
* reorder pass
* minus test cases
* windows timeout
* windows time out
* remove test for windows
* correct
-
Committed by kangguangli
-
Committed by kangguangli
* rm Executor._run_parallel
* remove compiledProgram related tests of standaloneExecutor
-
Committed by kangguangli
-
Committed by wanghuancoder
* delete old dygraph op test
-
Committed by Nyakku Shigure
* [CodeStyle] initial ruff config
* update F401 config
* [CodeStyle][NPY001] replace numpy deprecated type alias (see the sketch below)
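Ruff's NPY001 rule flags NumPy's removed scalar type aliases (`np.float`, `np.int`, `np.object`, ...), which were deprecated in NumPy 1.20 and removed in 1.24. A minimal sketch of the kind of replacement this rule drives; the array `x` is purely illustrative and not taken from the commit:

```python
import numpy as np

# Before (NPY001): code such as `dtype=np.float` or `np.int(x.sum())`
# now raises AttributeError on NumPy >= 1.24.
# After: use the builtin type or an explicit sized alias instead.
x = np.asarray([1.0, 2.0], dtype=np.float64)  # instead of dtype=np.float
n = int(x.sum())                              # instead of np.int(x.sum())
print(x.dtype, n)
```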
-
Committed by xjmxyt
* add index select op (see the sketch below)
* add to op teller
* add trt version control
* delete useless code
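This entry adds a TensorRT converter for `index_select`. For reference, a small sketch of what the op itself computes in plain Paddle (not the TensorRT path added here; the tensors are illustrative):

```python
import paddle

# index_select gathers slices of x along `axis` at the given integer indices.
x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
index = paddle.to_tensor([0, 2])
out = paddle.index_select(x, index, axis=0)  # rows 0 and 2 -> shape [2, 2]
print(out)
```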
-
- 15 March 2023 (15 commits)
-
-
Committed by SylarTiaNII
* add assign composite backward op
* fix log msg
* code style
* fix comp rule
* replace assign with by_pass
-
Committed by Jiabin Yang
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN]Enhance CacheKey hash logic by considering input dtypes
* add unittest
* fix typo
* fix typo
* fix map.at
* fix find
* fix test
* fix cinn cache key structure realize
* using ordered map for attributes
* add test by review advice
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* Pr 50885 (#7)
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN]Enhance CacheKey hash logic by considering input dtypes
* add unittest
* fix typo
* fix typo
* fix map.at
* fix find
* fix test
* fix cinn cache key structure realize
* using ordered map for attributes
* add test by review advice
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix code in a dy2static-friendly way.
* [dystatic] add hooker for prim
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix cast prim and vjp dtype mapping error bug
* Cxx prim custom vjp (#8)
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* Pr 50885 (#7)
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN]Enhance CacheKey hash logic by considering input dtypes
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix code in a dy2static-friendly way.
* [dystatic] add hooker for prim
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix cast prim and vjp dtype mapping error bug
* [dy2static-ci] fix dy2static ci errors.
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [Prim] enable whitelist and blacklist for custom_vjp
* support softmax grad
* remove additional code
* add test back
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
Co-authored-by: xiongkun <807377414@qq.com>
-
Committed by Leo Chen
-
Committed by Leo Chen
* support set_default_dtype bf16 (see the sketch below)
* support float
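`paddle.set_default_dtype` controls the dtype used when tensor-creation APIs are called without an explicit `dtype`. A minimal sketch of what accepting bfloat16 (and, per the second item, the Python built-in `float`) looks like in practice, assuming a Paddle build whose kernels support bf16:

```python
import paddle

# After this change "bfloat16" is accepted as a default dtype, so tensor
# creation APIs called without an explicit dtype produce bf16 tensors.
paddle.set_default_dtype("bfloat16")
x = paddle.ones([2, 3])
print(x.dtype)  # expected: paddle.bfloat16

# Restore the usual default afterwards.
paddle.set_default_dtype("float32")
```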
-
Committed by Kang Zhao
* feat: add relu composite rule (see the sketch below)
* feat: add relu composite rule, maximum op
* feat: add relu composite rule, maximum op
* feat: add relu composite rule, polish comments
* feat: add relu composite rule, polish comments
* feat: add relu composite rule, add python api of relu
* feat: add relu composite rule, commit hook
* fix: maximum type error & ban cinn test
* fix: maximum input sequence bugs
* resolve conflicts
* fix: code style bugs
* add: relu fp16 test
* feat: add rsqrt composite rule
* feat: add rsqrt composite rule
* resolve conflicts of composite rule
* fix: delete check eager
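A composite rule expresses an op through primitive ops so it can be decomposed for autodiff and compiler backends. This is not Paddle's actual implementation, just a sketch of the idea behind the relu rule above, using the `maximum` primitive the entry mentions:

```python
import paddle

def relu_composite(x):
    # relu(x) = maximum(x, 0): express relu via the primitive `maximum`
    # op instead of relying on a dedicated relu kernel.
    return paddle.maximum(x, paddle.full_like(x, 0.0))

x = paddle.to_tensor([-1.5, 0.0, 2.0])
print(relu_composite(x))  # [0., 0., 2.]
```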
-
Committed by kangguangli
* remove unit tests about GraphExecutionOptimizer
* remove test file
-
Committed by kangguangli
* remove parallel_executor related unit tests
* fix CI
-
Committed by wangxiaoning
-
Committed by Jiabin Yang
* support amp logic for layer_norm and softmax
* fix layer_norm amp
* fix layernorm api and dropout fp16
* fix layernorm api and dropout fp16
* fix bn, ln dtype in float16
* fix dropout fp16
* fix comment
-
Committed by Weilong Wu
* support gather test on prim and cinn
* reset timeout for gather
-
Committed by chenjian
* add pow composite rule
* fix test
* fix unit test
* update test
* fix test
* update
-
Committed by Yuang Liu
-
Committed by Siming Dai
* add fp16 test for divide, matmul, pnorm
* add cumsum fp16 unittest
* fix threshold
* revert cumsum
* fix code-style
* fix according to review
* fix kernel not found
-
Committed by Guoxia Wang
-
Committed by zhangyuqin1998
* Delete hardswish_raw op
* fix ut
-
- 14 March 2023 (17 commits)
-
-
Committed by Vvsmile
-
Committed by zhouweiwei2014
-
Committed by ccrrong
* add split_with_num composite rule
* add split_with_num composite rule
* add split composite rule
* update
* update test
* update test
* delete split_with_num_grad
-
Committed by qizhaoaoe
-
Committed by limingshu
* first commit
* fix code bugs in for_loop
* fix bugs in cuLoadAddStridedInputs.
* optimization for LayerNormBackwardComputeGradInput
* add unitest for validating the optimization
* fix windows ci error
-
Committed by gouzil
-
Committed by pangyoki
* cuda graph support multi-stream for new executor
* fix windows compile error
* delete create_cuda_graph_stream
-
Committed by zhaoyingli
-
Committed by YuhangLi
* wisemax fp16 support
* add bf16 support 4 elementwise_max
* append broadcast 4 op 4 fp16 / bf16
* fix elewise_max ut bf16 numeric delta
* append fp/bf16 uts
* add fp/bf16 uts
* change bf16 uts delta
* fix some issue
* add prim 4 fp16
-
Committed by wangxiaoning
-
Committed by wenbin
-
Committed by Wang Bojun
* fix conv2d filter
-
Committed by zhiboniu
* add fp16 and bf16 test
* update
-
Committed by cxxly
-
Committed by xiongkun
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* Pr 50885 (#7)
* [CINN]Enhance CacheKey hash logic by considering input dtypes (#50557)
* [CINN]Enhance CacheKey hash logic by considering input dtypes
---------
Co-authored-by: jiangcheng <thisjiang@qq.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix code in a dy2static-friendly way.
* [dystatic] add hooker for prim
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
* [prim] enable dygraph_to_static to support custom_vjp
* fix cast prim and vjp dtype mapping error bug
* [dy2static-ci] fix dy2static ci errors.
---------
Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
Co-authored-by: jiangcheng <thisjiang@qq.com>
Co-authored-by: cxxly <chenxx_id@163.com>
-
Committed by cxxly
-
Committed by cxxly
-