- 15 April 2022, 1 commit
Committed by Roc
* moe ref
* ref commit; test=document_fix
* update; test=document_fix
* update test=document_fix

- 30 March 2022, 1 commit
Committed by Roc
* add random routing op; add _random_routing api in utils; add random routing ut
* squashed combination of 10 commits:
  * add expert count op; add ut for expert_count
  * update UT only for cuda
  * fix for rocm
  * update ut
  * add moe module
  * add expert count op; add ut for expert_count
  * update UT only for cuda
  * update ut
  * add moe module
  * make expert count private
* add assign pos op
* fix upper num name
* add api _assign_pos
* add ut for assign pos op
* update date
* add op about moe gate; update utils; add limit by capacity op; add ut for limit_by_capacity; add ut for prune_gate_by_capacity; add ut for limit_by_capacity; add ut for prune_gate_by_capacity
* fix for win
* fix bugs in test_limit_by_capacity_op
* update ut
* update for test (timeout)
* fix ut
* update
* update (fix) ut for win
* moe apis in incubate
* squashed combination of 10 commits (the same ten messages listed above)
* add assign pos op
* fix upper num name
* add api _assign_pos
* add ut for assign pos op
* update date
* fix for win
* update for test (timeout)
* fix ut
* update
* fix ut for number count
* add apis and utils
* add gate apis
* add moe and grad clip apis
* update moe apis
* add ops for moe gate
* fix
* update for base moe layer api
* add random routing op; add _random_routing api in utils; add random routing ut
* fix for dygraph
* update with random routing
* update
* fix ut for limit by capacity
* update
* update limit by capacity so it is easy to switch to single-thread mode
* update api docs

Co-authored-by: hlygit66666 <2570058140@qq.com>

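The expert-count and capacity-limiting bookkeeping named in this commit can be pictured with a small, self-contained sketch built only from basic Paddle operations. This is a conceptual illustration of the technique, not the private operators added here.

```python
import paddle

# Conceptual sketch of MoE gating bookkeeping (expert count / limit by capacity),
# written with basic Paddle ops only -- not the private ops added by this commit.
num_experts, capacity = 4, 2
# expert index chosen by the gate for each of 7 tokens
gate_idx = paddle.to_tensor([0, 1, 1, 1, 3, 0, 1], dtype='int64')

# "expert count": how many tokens the gate routed to each expert
expert_count = paddle.concat(
    [(gate_idx == e).astype('int64').sum().reshape([1]) for e in range(num_experts)]
)
print(expert_count.numpy())  # [2 4 0 1]

# "limit by capacity": keep at most `capacity` tokens per expert, drop the rest
seen = [0] * num_experts
keep = []
for e in gate_idx.numpy().tolist():
    keep.append(seen[e] < capacity)
    seen[e] += 1
print(keep)  # [True, True, True, False, True, True, False]
```
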
- 29 March 2022, 1 commit
Committed by Roc
* add random routing op; add _random_routing api in utils; add random routing ut
* squashed combination of 10 commits:
  * add expert count op; add ut for expert_count
  * update UT only for cuda
  * fix for rocm
  * update ut
  * add moe module
  * add expert count op; add ut for expert_count
  * update UT only for cuda
  * update ut
  * add moe module
  * make expert count private
* add assign pos op
* fix upper num name
* add api _assign_pos
* add ut for assign pos op
* update date
* add op about moe gate; update utils; add limit by capacity op; add ut for limit_by_capacity; add ut for prune_gate_by_capacity; add ut for limit_by_capacity; add ut for prune_gate_by_capacity
* fix for win
* fix bugs in test_limit_by_capacity_op
* update ut
* update for test (timeout)
* fix ut
* update
* update (fix) ut for win
* moe apis in incubate
* squashed combination of 10 commits (the same ten messages listed above)
* add assign pos op
* fix upper num name
* add api _assign_pos
* add ut for assign pos op
* update date
* fix for win
* update for test (timeout)
* fix ut
* update
* fix ut for number count
* add apis and utils
* add gate apis
* add moe and grad clip apis
* update moe apis
* add ops for moe gate
* fix
* update for base moe layer api
* add random routing op; add _random_routing api in utils; add random routing ut
* fix for dygraph
* update with random routing
* update
* fix ut for limit by capacity
* update

Co-authored-by: hlygit66666 <2570058140@qq.com>

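For the random routing op, here is a hedged sketch of one common variant of the technique (an assumption about the general idea, not necessarily the exact semantics of _random_routing): when a token's top-1 gate probability loses to a uniform random draw, the token is re-routed to a random expert.

```python
import paddle

# Hedged sketch of a common "random routing" variant (assumed behaviour, not
# necessarily the exact semantics of the _random_routing op above).
num_experts, num_tokens = 4, 5
topk_idx = paddle.to_tensor([0, 1, 3, 2, 1], dtype='int64')   # expert picked by the gate
topk_prob = paddle.to_tensor([0.9, 0.2, 0.6, 0.1, 0.8])       # its gate probability
rand_draw = paddle.rand([num_tokens])                         # one uniform sample per token
rand_idx = paddle.randint(0, num_experts, shape=[num_tokens], dtype='int64')

# keep the gate's choice where its probability beats the draw, otherwise re-route
new_idx = paddle.where(topk_prob > rand_draw, topk_idx, rand_idx)
print(new_idx.numpy())
```
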
- 25 April 2021, 1 commit
Committed by Wenyu
* add Hub Module for easy use of pre-trained models
* support list, load, help functions
* support loading models from github, gitee, or local sources

Co-authored-by: LielinJiang <jianglielin@baidu.com>

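A hedged sketch of the hub workflow this commit introduces; the repository name below is a placeholder for any repo that exposes model entrypoints through a hubconf.py.

```python
import paddle

# Hedged sketch of the paddle.hub workflow described above. 'SomeOrg/SomeRepo:main'
# is a placeholder -- any repository exposing entrypoints in a hubconf.py works.
repo = 'SomeOrg/SomeRepo:main'

entrypoints = paddle.hub.list(repo, source='github')            # models the repo exposes
print(paddle.hub.help(repo, entrypoints[0], source='github'))   # docstring of one entrypoint
model = paddle.hub.load(repo, entrypoints[0], source='github')  # build/load that model
# source='gitee' and source='local' (with a local directory path) are also accepted
```
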
- 15 April 2021, 1 commit
Committed by WeiXin
* custom python backward
* polish up the code
* polish up the code
* polish up the code
* Fix code format and comments
* Delete redundant files
* add unittest
* edit unittest
* edit unittest
* Remove redundant header files
* Improve coverage and remove redundant code
* support saving for backward
* polish code according to comments
* Add support type for PyLayer
* Modify the DOC
* polish Doc
* polish Doc
* polish Doc
* polish Doc
* polish Doc
* polish Doc
* polish code and make the code robust
* Modify the code format

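The custom Python backward described here is used through paddle.autograd.PyLayer; a minimal sketch follows (the tanh example is illustrative only, not taken from the commit).

```python
import paddle
from paddle.autograd import PyLayer

# Minimal sketch of a custom Python backward via PyLayer; the tanh example is
# illustrative only, not code from this commit.
class CustomTanh(PyLayer):
    @staticmethod
    def forward(ctx, x):
        y = paddle.tanh(x)
        ctx.save_for_backward(y)      # "support saving for backward"
        return y

    @staticmethod
    def backward(ctx, dy):
        y, = ctx.saved_tensor()
        return dy * (1.0 - y * y)     # d tanh(x)/dx = 1 - tanh(x)^2

x = paddle.randn([3], dtype='float32')
x.stop_gradient = False
CustomTanh.apply(x).sum().backward()
print(x.grad)
```
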
- 01 April 2021, 1 commit
Committed by chentianyu03
* add custom init grad for backward function
* add custom init grad for backward function
* handle when the grad_tensor is none
* handle when the grad_tensor is none
* fix the args type error on windows platform
* modify the args order and doc
* format code
* add grad_tensor to xpu
* modify the grad_tensor type check
* add paddle.backward api to support multi tensors gradient compute
* add paddle.backward api to support multi tensors gradient compute
* add paddle.autograd module and backward api
* change tensor.backward func args
* modify tensor backward api
* remove create_graph inputs args
* add doc and example code for backward api
* throw an error when the same tensor is passed more than once
* modify test Init func args
* modify the execute.Init func args in test files
* add paddle.autograd package in setup.py.in
* modify error msg, remove _run_backward method in class Tensor
* add test cases for backward api

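A minimal sketch of the multi-tensor gradient computation this commit describes, assuming the paddle.autograd.backward(tensors, grad_tensors=...) form.

```python
import paddle

# Minimal sketch: compute gradients for several output tensors at once, with a
# custom initial gradient for one of them.
x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
y = x * 2.0
z = x * x

grad_y = paddle.to_tensor([1.0, 1.0, 1.0])                     # custom init grad for y
paddle.autograd.backward([y, z], grad_tensors=[grad_y, None])  # None -> ones_like(z)
print(x.grad)  # dy/dx * grad_y + dz/dx = 2 + 2*x -> [4., 6., 8.]
```
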
- 26 November 2020, 1 commit
Committed by gongweibao
- 06 November 2020, 1 commit
Committed by iducn
- 29 October 2020, 1 commit
Committed by iducn
* 01: Modify the shell script according to the specification
* 01: Modify the shell script according to the specification

- 13 October 2020, 1 commit
Committed by gongweibao
- 27 September 2019, 1 commit
Committed by mapingshuo
* add train demo to paddle_build.sh test=develop test=document_preview
* code cleaning
* test=develop test=develop
* remove program desc files test=develop
* add test_train to CI test=develop test=document_preview
* make run.sh executable test=develop test=document_preview
* switch with_mkl to off test=develop test=document_preview
* switch with_mkl to off test=develop
* switch with_mkl to on test=develop
* move test_fluid_lib_train to ci_py35 test=develop
* fix bugs of MKLDNN building
* rm build_train_lib(), test=develop
* restore tensor.h, test=develop

- 26 July 2018, 1 commit
Committed by Luo Tao