- 09 Jun 2022, 6 commits

- Committed by zhaoyingli
- Committed by Aurelius84
[Dy2Stat]Add apply_optimization for @to_static and remove BreakTransformOptimizer by default (#43320) * [Dy2Stat]Add apply_optimization for @to_static and remove BreakTransformOptimizer by default * fix unittest
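Editor's note: the entry point these Dy2Stat commits extend is the `@to_static` decorator. Below is a minimal, hypothetical sketch of decorating a dygraph `Layer`; the new `apply_optimization` hook itself is not illustrated, and `SimpleNet` is an example class invented for this note.

```python
import paddle
from paddle.jit import to_static


class SimpleNet(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.linear = paddle.nn.Linear(10, 3)

    @to_static  # converts this dygraph forward into a static-graph program
    def forward(self, x):
        return self.linear(x)


net = SimpleNet()
out = net(paddle.randn([4, 10]))  # the first call triggers program translation
```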
- Committed by cambriconhsq
- Committed by crystal
Co-authored-by: Liu Yiqun <liuyiqun01@baidu.com>
- Committed by Aurelius84
* [Dy2stat]Add RollBack into original dygraph function for @to_static * fix unittest * [Dy2Stat]Support deepcopy Net instance after @to_static
- Committed by Weilong Wu

- 08 Jun 2022, 10 commits

- Committed by zhouweiwei2014
- Committed by zhaoyingli
* add fetch_list * fix evaluate log * tiny fix
- Committed by Asthestarsfalll
* add paddle.optimizer.lr.CyclicLR * add unittest of CyclicLR * fix code format * fix bug * try * fix CI-Coverage * fix ValueError * fix arguments assgin * fix code format and retry pulling develop to pass ci * fix typo * Refactor * fix function-redefined in test_lr_scheduler.py * update * fix conflict * update * gamma->exp_gamma * polish docs * fix code-style * adjust code format again * change format of __all__ in lr.py
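Editor's note: a hedged usage sketch of the new scheduler, assuming the constructor takes `base_learning_rate`, `max_learning_rate`, and `step_size_up` (names inferred from the commit messages, not verified against the released API).

```python
import paddle

# Assumed parameter names; check the released CyclicLR docs before relying on them.
scheduler = paddle.optimizer.lr.CyclicLR(
    base_learning_rate=0.001,  # lower bound of each cycle
    max_learning_rate=0.01,    # upper bound of each cycle
    step_size_up=500)          # iterations spent climbing from base to max

linear = paddle.nn.Linear(10, 10)
sgd = paddle.optimizer.SGD(learning_rate=scheduler,
                           parameters=linear.parameters())

for step in range(5):
    loss = linear(paddle.randn([2, 10])).mean()
    loss.backward()
    sgd.step()
    sgd.clear_grad()
    scheduler.step()  # advance the cyclic schedule once per iteration
```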
- Committed by Netpunk
[Hackathon No.17] Add paddle.nn.CosineEmbeddingLoss and paddle.nn.functional.cosine_embedding_loss APIs to Paddle (#41680) * add cosine embedding loss API * new version * new version * new version * set label to int32 * new version * new version-test * new version * new version * new version * new version * new version * new version * new version * new version * new version * new version * new version * new version * new version * new version * new version * new version * aligning to Chinese document * add name parameter * activate CI * fix format error * unit test code format * format code
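Editor's note: a hedged sketch of the new functional API; the `(input1, input2, label)` call order and the `margin`/`reduction` defaults are assumptions, while the int32 label dtype follows the commit messages above.

```python
import paddle
import paddle.nn.functional as F

input1 = paddle.randn([4, 16])
input2 = paddle.randn([4, 16])
label = paddle.to_tensor([1, -1, 1, 1], dtype='int32')  # 1 = similar, -1 = dissimilar

loss = F.cosine_embedding_loss(input1, input2, label, margin=0.5)
print(loss)  # scalar loss (mean reduction assumed by default)
```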
- Committed by Aurelius84
- Committed by YuanRisheng
* move_group_norm * move group norm backward * fix code format * modify code according comment
- Committed by fwenguang
- Committed by Aurelius84
* [Dy2stat]Add RollBack into original dygraph function for @to_static * fix unittest
- Committed by Yiqun Liu
* Polish codes and memory usage for fused_gate_attention. * Fix wrong reduce_dims in fused_gate_attention when computing gradient of nonbatched_bias.
- Committed by Qi Li

- 07 Jun 2022, 12 commits

- Committed by Weilong Wu
- Committed by Guanghua Yu
- Committed by joanna.wozna.intel
- Committed by sneaxiy
* add use_master_acc_grad * add ut
- Committed by Yuang Liu
- Committed by Haohongxiang
* fix bugs of reducer * update * update
- Committed by qipengh
* [MLU]support cast double type * [MLU]fix cast test
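Editor's note: at the Python level, the cast that the new MLU kernel serves looks like the snippet below; device placement is implicit, so the same call runs on any backend.

```python
import paddle

x = paddle.to_tensor([1.0, 2.0, 3.0], dtype='float32')
y = paddle.cast(x, 'float64')  # double-precision copy; on an MLU build this is what the new kernel handles
print(y.dtype)                 # paddle.float64
```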
- Committed by BrilliantYuKaimin
* Update creation.py * Update search.py * Update search.py * Update xavier.py * Update xavier.py * Update pooling.py * Update pooling.py * Update pooling.py * Update search.py
- Committed by Weilong Wu
- Committed by Guoxia Wang
fix the unittest bug of none grad of margin_cross_entropy when FLAGS_retain_grad_for_all_tensor change default setting (#43241)
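Editor's note: a hedged sketch of the `margin_cross_entropy` call that the fixed unittest exercises; the margin/scale keyword arguments and their defaults are assumptions and are left untouched here.

```python
import paddle
import paddle.nn.functional as F

logits = paddle.randn([8, 100])  # [batch, num_classes]
logits.stop_gradient = False
label = paddle.randint(0, 100, shape=[8], dtype='int64')

loss = F.margin_cross_entropy(logits, label)
loss.mean().backward()              # gradients w.r.t. logits should now be populated
print(logits.grad is not None)
```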
- Committed by wangguanzhong
- Committed by niuliling123

- 06 Jun 2022, 3 commits

- Committed by zhaoyingli
* fix gradient merge * bug fix * update annotation
- Committed by SmirnovKol
- Committed by Chen Weihang

- 05 Jun 2022, 1 commit

- Committed by Sing_chan
* use yapf to format all python file * yapf exclude two unittests file for they rely on writing and reading file, and format will break them * disable diff_py_file because too many diff files cause command following failed
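Editor's note: the same formatting can be reproduced programmatically through yapf's Python API; this is a sketch assuming the `FormatCode` entry point and its `(formatted, changed)` return convention, which may differ across yapf versions.

```python
from yapf.yapflib.yapf_api import FormatCode

source = "def add(a,b):\n        return a+b\n"
formatted, changed = FormatCode(source, style_config='pep8')
print(changed)    # True if yapf rewrote anything (indentation and comma spacing here)
print(formatted)  # the reformatted source text
```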
- 04 Jun 2022, 1 commit

- Committed by Sing_chan

- 02 Jun 2022, 7 commits

- Committed by Leo Guo
* Add generate_proposals_v2 op and unittest for kunlun. *test=kunlun * Add the assign op to xpu2_op_list and expand the function of gather op. Add the unit-test of generate_proposals_v2. *test=kunlun
- Committed by Weilong Wu
* [Eager] FLAGS_retain_grad set false * Add FLAGS_retain_grad_ for some tests * Add FLAGS_retain_grad_ to some tests * modified set_flags * modified set_flags * fix windows-ci and windows-openblas-ci * import paddle.fluid
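Editor's note: a sketch of toggling the flag these tests depend on via `paddle.set_flags`; the retained-gradient behavior described in the comments is my reading of the related commits and may vary between versions.

```python
import paddle

# With the flag enabled, intermediate (non-leaf) tensors keep their .grad after backward().
paddle.set_flags({'FLAGS_retain_grad_for_all_tensor': True})

x = paddle.randn([3, 3])
x.stop_gradient = False
y = x * 2            # intermediate tensor
y.sum().backward()
print(y.grad)        # retained because the flag is on; None when it is off
```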
- Committed by Haohongxiang
- Committed by Fan Zhang
* Adapt XPUPS - 1st version - 3.24 * Adapt XPUPS - update XPU PushSparse - 2nd version - 3.24 * Adapt XPUPS - add XPU PullSparseOp - 3nd version - 3.25 * refactor heter comm kernel * update. test=develop * Adapt XPUPS - modify by compilation - 4th version - 3.27 * update calc_shard_offset. test=develop * update xpu kernel. test=develop * update args of calc_shard_offset * update. test=develop * remove customGradMerger * update. test=develop * heter_comm update * heter_comm update * update calc_shard_offset. test=develop * heter_comm update * update args of calc_shard_offset * update. test=develop * remove customGradMerger * update. test=develop * fix. test=develop * update. test=develop * update. test=develop * update optimizer kernel * Adapt XPUPS - use WITH_XPU_KP and modify wrapper kernel function - 5th version - 3.30 * update. test=develop * update pslib.cmake * update. test=develop * update. test=develop * update. test=develop * update. test=develop * update. test=develop * Adapt XPUPS - modify by kp compilation - 6th version - 3.30 * update. test=develop * update. test=develop * update. test=develop * update optimizer kernel * update. test=develop * update. test=develop * update. test=develop * update. test=develop * update. test=develop * update. test=develop * update. test=develop * update. test=develop * fix. test=develop * fix. test=develop * used by minxu * update heter_comm_inl * fix. test=develop * Adapt XPUPS - modify by kp compilation - 7th version - 3.30 * fix. test=develop * add optimizer kernel. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * 3.31 update * Adapt XPUPS - update kp compilation path - 8th version - 3.31 * add optimizer kernel. test=develop * fix kunlun not support size_t. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix kunlun not support size_t. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * update heter_comm_kernel.kps 3.31 * fix. test=develop * fix. test=develop * update heter_comm_kernel.kps 3.31 * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * update heter_comm.h 3.31 * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * update hashtable. test=develop * update. test=develop * Adapt XPUPS - update by kp compilation - 9th version - 4.1 * update hashtable. test=develop * fix. test=develop * update hashtable 4.1 * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * Adapt XPUPS - update by kp compilation - 10th version - 4.1 * fix. test=develop * fix. test=develop * fix. test=develop * update. test=develop * modify by compilation 4.1 * update. test=develop * update. test=develop * fix. test=develop * modify by compilation 4.1 * update. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * modify by compilation 4.1 * fix. test=develop * fix. test=develop * fix. test=develop * modify by compilation 4.1 19:30 * fix. test=develop * update ps_gpu_wrapper.kps 4.1 * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * Adapt XPUPS - update by kp compilation - 11th version - 4.1 * fix. test=develop * Adapt XPUPS - update by kp compilation - 12nd version - 4.2 * fix. test=develop * fix. 
test=develop * modify by compilation 4.2 * 4.2 update * fix. test=develop * template init. test=develop * update 4.6 * fix. test=develop * template init. test=develop * 4.6 modify by compilation * hashtable template init. test=develop * hashtable template init. test=develop * fix. test=develop * fix. test=develop * fix. test=devlop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=devlop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * Adapt XPUPS - update by kp compilation - 13nd version - 4.7 * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * 4.11 update * fix. test=develop * fix. test=develop * 4.11 update * update by pre-commit * fix. test=develop * fix. test=develop * fix. test=develop * fix. test=develop * 4.12 update * fix. test=develop * Adapt XPUPS - update by kp compilation - 14th version - 4.13 * 4.13 update * 4.14 update * 4.14 update * 4.14 update * 4.14 modify by merged latest compilation * retry CI 4.14 * 4.15 pass static check * 4.15 modify by gpups CI * 3.16 update by gpups CI - modify ps_gpu_wrapper.h * 4.16 update * 4.16 pass xpu compile * 4.16 retry CI * 4.16 update * Adapt XPUPS - adapt BKCL comm for XPUPS - 4.24 * update by compilation * Adapt XPUPS - register PSGPUTrainer for XPUPS - 4.25 * update device_worker_factory * Adapt XPUPS - split heter_ps into .cu and .cc - 4.27 * Adapt XPUPS - register pull_box_sparse op under XPU_KP - 4.28 * update * 5.7 modify ps_gpu_wrapper pull_sparse * 5.11 update ps_gpu_wrapper CopyKeysKernel * 5.13 modify calc_shard_offset_kernel & fill_shard_key_kernel * modify fill_dvals_kernel & PullCopy & c_sync_calc_stream - 5.18 * modify PushCopy & fill_shard_grads_kernel & register push_box_sparse - 5.19 * Adapt XPUPS - modify BKCL comm op register - 5.26 * Adapt XPUPS - modify BKCL comm op register - 5.27 * Adapt XPUPS - modify BKCL comm op register - 5.27v2 * Adapt XPUPS - modify BKCL comm op register - 5.27v3 * Adapt XPUPS - modify c_comm_init_all_op to adapt BKCL init - 5.30 * Adapt XPUPS - modify c_comm_init_all_op to adapt BKCL init v2 - 5.30 * Adapt XPUPS - modify c_comm_init_all_op to adapt BKCL init v3 - 5.30 * Adapt XPUPS - modify c_comm_init_all_op to adapt BKCL init v4 - 5.31 Co-authored-by: Nzmxdream <zhangminxu01@baidu.com>
- Committed by 光明和真理
Co-authored-by: liupeiyu <liupeiyu@cambricon.com>
- Committed by ziyoujiyi
* back fl * delete ssl cert * . * make warning * . * unittest paral degree * solve unittest * heter & multi cloud commm ready * . * . * fl-ps v1.0 * . * support N + N mode * . * . * . * . * delete print * . * . * . * .
- Committed by Wangzheee
* new general transformer inference support