- 03 March 2023, 2 commits

Submitted by zxcd
* add sigmoid composite rule
* add python api
* fix code style.
* add check_prim=True
* add sigmoid fp16 unit test.
* fix code style.
* rm bf16 check_prim
* fix code style.
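
For context, a composite (prim) rule rewrites an operator in terms of primitive ops. Below is a minimal numpy sketch of the mathematics behind the sigmoid rule added above; it is illustrative only, not the actual Paddle implementation.

```python
import numpy as np

def sigmoid_composite(x):
    # sigmoid(x) = 1 / (1 + exp(-x)), expressed with only elementwise
    # primitives (negate, exp, add, divide).
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0], dtype=np.float32)
print(sigmoid_composite(x))  # ~[0.1192, 0.5, 0.8808]
```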

Submitted by Haohongxiang

- 02 March 2023, 2 commits

Submitted by wangzhen38

Submitted by Roc
* add composite op hard swish
* add test grad
* update apis calling
* update date range
* add ut
* turn off cinn for 0-d shape
* skip cinn
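
A sketch of the hard swish decomposition referenced above, assuming the usual offset=3, threshold=6, scale=6 constants (illustrative numpy, not the Paddle code):

```python
import numpy as np

def hard_swish_composite(x, offset=3.0, threshold=6.0, scale=6.0):
    # hard_swish(x) = x * min(max(x + offset, 0), threshold) / scale,
    # i.e. x * relu6(x + 3) / 6 with the default constants.
    return x * np.minimum(np.maximum(x + offset, 0.0), threshold) / scale

x = np.linspace(-5, 5, 5).astype(np.float32)
print(hard_swish_composite(x))
```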

- 01 March 2023, 4 commits

Submitted by Jiabin Yang

Submitted by wangxiaoning
* remove transpiler
* Revert "remove transpiler"
  This reverts commit 46044ccd52011d45d7026786d331f264a6a8f645.
* Revert "Revert "remove transpiler""
  This reverts commit 80ad0945401b5b5efebac4baee0ec50a793d4405.
* codestyle
* fix setup
* fix
* fix

Submitted by TaoTao Li

Submitted by Yichen Zhang
* implement composite full_like and simple unit test
* implement op tests for composite full_like op
* some modification as reviewers suggested
  add cinn op test to CMakeLists.txt
  fix code style
* fix code style
* modify input args of prim fill_any_like op
* resolve conflicts
* resolve conflicts
* modify python api and unit tests as suggested
* resolve conflicts
* resolve conflicts
* use framework.dtype to convert dtype in Op test
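
The full_like rule essentially lowers fill_any_like to a plain fill over the input's shape and dtype; a hypothetical numpy sketch under that assumption:

```python
import numpy as np

def full_like_composite(x, fill_value, dtype=None):
    # full_like reduces to a single "full" primitive that uses the
    # shape of x and either the requested or the inherited dtype.
    dtype = dtype or x.dtype
    return np.full(x.shape, fill_value, dtype=dtype)

x = np.zeros((2, 3), dtype=np.float32)
print(full_like_composite(x, 7.0))
```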

- 28 February 2023, 3 commits

Submitted by iLeGend

Submitted by zxcd
* add silu composite rule
* fix code style.
* add silu fp16 unit test.
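
A minimal numpy sketch of the silu rule added above, using the identity silu(x) = x * sigmoid(x):

```python
import numpy as np

def silu_composite(x):
    # silu(x) = x * sigmoid(x) = x / (1 + exp(-x))
    return x / (1.0 + np.exp(-x))

x = np.array([-1.0, 0.0, 1.0], dtype=np.float32)
print(silu_composite(x))  # ~[-0.2689, 0.0, 0.7311]
```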

Submitted by xysheng-baidu
* Add flatten composite rule
* get the right xshape and pass func test
* add cinn unit test
* Remove cinn test, wait for it to be added after repair
* add comp test to test_flatten_contiguous_range_op.py
* remove func test on composite_ops
* Add comments to maybe_wrap_dim func
* remove commented code
* fix the problem with 0D tensor case
* add flatten split rule comment
* fix syntax issues
* block flatten on resnet_prim_cinn
* remove maybe_wrap_dim func
* Use none instead of xshape
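
The flatten rule reduces to a single reshape that merges the dimensions in [start_axis, stop_axis]; a simplified numpy sketch (it ignores the xshape output and the 0D corner case handled in the commit):

```python
import numpy as np

def flatten_composite(x, start_axis, stop_axis):
    # Collapse the dimensions in [start_axis, stop_axis] into one and
    # lower flatten to a single reshape (negative axes wrap around).
    start = start_axis % x.ndim
    stop = stop_axis % x.ndim
    shape = list(x.shape)
    merged = int(np.prod(shape[start:stop + 1]))
    return x.reshape(shape[:start] + [merged] + shape[stop + 1:])

x = np.arange(24).reshape(2, 3, 4)
print(flatten_composite(x, 1, -1).shape)  # (2, 12)
```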

- 27 February 2023, 2 commits

Submitted by wangzhen38
* [RM FLUID] rm parameter_server
* [rm ps] for ci
* [rm ps] for ci
* [rm ps] for ci
* [rm ps] for ci

Submitted by wangzhen38
* [mv fleet] mv fleet to distributed
* [mv fleet] for ci
* [mv fleet] for ci
* [mv fleet] solve ci of version

- 24 February 2023, 4 commits

Submitted by Weilong Wu
* Revert "fixoptminizer _set_auxiliary_var bug (#50335)" This reverts commit c44005f0. * Revert "refine optimizer create accumulators (#50188)" This reverts commit 244e7546. * Revert "fix found_inf bug for custom optimizer (#50158)" This reverts commit 64573f9f. * Revert "refine amp scaler found_inf (#49864)" This reverts commit 382e9a06. * fix code format * fix conflict

Submitted by Jiabin Yang
* change amp with to_prim
* fix prim amp
* fix rules
* fix linear
* add amp test
* add test
* disable this test on cpu
* disable this test on cpu

Co-authored-by: cyber-pioneer <chenzhuo@tju.edu.cn>

Submitted by cyber-pioneer
* fix attrs loss in creating op
* add comment
* add case
* add case
* remove unused case setting

Submitted by Hui Zhang

- 22 February 2023, 4 commits

Submitted by meteor135

Submitted by Xiaoxu Chen
* map output from composite rule to origin op
  add mean, layer_norm, dropout op map
  add input map check
  composite softmax support input shape []
* polish log
* [prim] add dropout composite rule

Co-authored-by: cyber-pioneer <chenzhuo@tju.edu.cn>
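
A numpy sketch of the dropout decomposition mentioned above, assuming the common "upscale in train" formulation (the log does not show which modes the actual rule covers):

```python
import numpy as np

def dropout_composite(x, p=0.5, training=True, rng=None):
    # "Upscale in train" dropout: zero elements with probability p and
    # rescale the survivors by 1 / (1 - p); identity at inference time.
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)

x = np.ones((2, 4), dtype=np.float32)
print(dropout_composite(x, p=0.5))
```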

Submitted by Shuangchi He
* Fix some typos.
  Signed-off-by: Yulv-git <yulvchi@qq.com>
* pre-commit
  Signed-off-by: Yulv-git <yulvchi@qq.com>

Signed-off-by: Yulv-git <yulvchi@qq.com>

Submitted by wangzhen38

- 21 February 2023, 3 commits

Submitted by cyber-pioneer
* fix flatten op map
* remove prim op all list
* add op map info of full_like
* polish code

Submitted by wangzhen38

Submitted by xiaoguoguo626807
* fix composite mean op map
* fix composite check output
* init layer_norm
* init layer_norm
* map output from composite rule to origin op
* add dropout op map
* add input map check
* polish log
* modify rules
* success test_forward
* modify test without cinn
* modify cinn test
* modify cinn test
* except fp64
* except fp64
* delete flatten
* delete unused change
* review
* pass cpu test
* code style
* delete flatten fp16 error
* modify flatten test

Co-authored-by: cyber-pioneer <chenzhuo@tju.edu.cn>
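
A numpy sketch of the layer_norm mathematics this commit wires up, normalizing over the trailing axes from begin_norm_axis onward (illustrative only):

```python
import numpy as np

def layer_norm_composite(x, scale, bias, epsilon=1e-5, begin_norm_axis=1):
    # y = (x - mean) / sqrt(var + eps) * scale + bias, where mean and
    # var are taken over the axes being normalized.
    axes = tuple(range(begin_norm_axis, x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    normalized = (x - mean) / np.sqrt(var + epsilon)
    return normalized * scale + bias

x = np.random.rand(2, 4).astype(np.float32)
scale = np.ones(4, dtype=np.float32)
bias = np.zeros(4, dtype=np.float32)
print(layer_norm_composite(x, scale, bias).shape)  # (2, 4)
```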

- 20 February 2023, 3 commits

Submitted by wangzhen38
* [RM FLUID] trainer_pass&heter_trainer_pass
* [RM FLUID] rm distributed_strategy

Submitted by cyber-pioneer
* check win
* fix random error
* fix manage

Submitted by wangxiaoning
* move fluid.transpiler.details
* fix setup
* fix
* fix setup
* add setup

- 17 February 2023, 1 commit

Submitted by wangzhen38
* [RM FLUID] rm fluid_pslib_init
* [RM FLUID] for ci
* [RM FLUID] for ci

- 16 February 2023, 2 commits

Submitted by shentanyue
* support xpu multi-card infer
* add ut
* clean code
* clean code
* fix
* fix
* fix
* fix

Submitted by zqw_1997
* beta
* small commit
* add batch_norm composite rule
  move composite test case
  remove unuseful var
  add composite op blacklist
* small change v2
* finish the test_composite_mean and test_composite_mean_grad
* add ops assertion to the tests
* add cinn test
* fix the error and inappropriate usage in func: mean_composite
* remove the ref of outer lib in primitives.py
* modify sample code of reduce_sum
* fix composite mean op map
* modify testcases to test more float type
* remove cpu float16 test
* cinn test fix
* remove reduce_max
* change the name sum to sum_x
* change the use of reduce_sum to sum

Co-authored-by: cyber-pioneer <chenzhuo@tju.edu.cn>
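
The mean_composite helper touched above lowers mean to a sum divided by the number of reduced elements (the commit switches the reduction from reduce_sum to sum); a numpy sketch of that idea, with the batch_norm rule built from the same primitives:

```python
import numpy as np

def mean_composite(x, axis=None, keepdim=False):
    # mean(x) lowered to primitives: sum over the reduced axes divided
    # by the number of reduced elements.
    total = np.sum(x, axis=axis, keepdims=keepdim)
    if axis is None:
        count = x.size
    else:
        axes = (axis,) if isinstance(axis, int) else tuple(axis)
        count = int(np.prod([x.shape[a] for a in axes]))
    return total / count

x = np.arange(12, dtype=np.float32).reshape(3, 4)
print(mean_composite(x, axis=1))  # [1.5, 5.5, 9.5]
```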

- 15 February 2023, 4 commits

Submitted by wangzhen38

Submitted by cyber-pioneer
* map output from composite rule to origin op
  add mean layer_norm dropout op map
  add input map check
  composite softmax support input shape []
* composite softmax support shape []
* polish log
* solve conflict
* polish code
* polish op map output
* add check dtype
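
A numpy sketch of a numerically stable softmax decomposition, including the shape [] (0-D) case the commit mentions; returning 1 for a scalar is the natural convention, though the log does not show the rule's exact handling:

```python
import numpy as np

def softmax_composite(x, axis=-1):
    # Subtract the max for numerical stability, exponentiate, and
    # normalize by the sum along `axis`.
    x = np.asarray(x, dtype=np.float64)
    if x.ndim == 0:
        # 0-d (shape []) input: softmax over a single element is 1.
        return np.ones_like(x)
    shifted = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=axis, keepdims=True)

print(softmax_composite(np.array([1.0, 2.0, 3.0])))
print(softmax_composite(np.array(5.0)))  # shape [] case -> 1.0
```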

Submitted by lzy
* make FusedMultiTransformer support variable lengths.
* modify ffn2 when cuda_version >= 11.6 because of #49392.
* code style
* delete remove_padding

Submitted by wangzhen38

- 14 February 2023, 2 commits

Submitted by mhy-666

Submitted by GGBond8488
* add gelu composite rule
* use full to replace fill_constant
* change the form of calculation
* remove float16 test for composite gelu
* reformat code
* remove float16 test case
* add forward with prim and backward without prim test
* add float16 test for composite gelu and add high dims test
* add float16 test case and high dims test
* shield float16 and cpu test case
* increase train step to 10 in test cinn prim gelu
* replace pow with multiply
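
A numpy sketch of the tanh-approximated GELU, with x^3 written as x * x * x in line with the "replace pow with multiply" item above (the rule may instead use the exact erf form; the log does not say):

```python
import numpy as np

def gelu_composite_tanh(x):
    # Tanh approximation of GELU:
    #   0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    # where x^3 is computed as x * x * x instead of a pow primitive.
    k = np.sqrt(2.0 / np.pi)
    return 0.5 * x * (1.0 + np.tanh(k * (x + 0.044715 * x * x * x)))

x = np.array([-1.0, 0.0, 1.0], dtype=np.float32)
print(gelu_composite_tanh(x))  # ~[-0.1588, 0.0, 0.8412]
```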

- 09 February 2023, 2 commits

Submitted by Jiabin Yang

Submitted by wanghuancoder

- 08 February 2023, 1 commit

Submitted by zmxdream
* hidden unzip
* fix
* fix

- 07 February 2023, 1 commit

Submitted by cyber-pioneer
move composite test case
remove unuseful var
add composite op blacklist