- 26 May 2022, 1 commit
Submitted by Sing_chan
cherry-pick PR #42777
-
- 07 May 2022, 1 commit
Submitted by Ruibiao Chen
* Reduce time variation for cuda_managed_memory_test (#42458) * Disable standalone executor for test_tensordot (#42476)
-
- 29 April 2022, 1 commit
Submitted by WangXi
[cherry-pick 2.3] Add fused_multi_transformer op to optimize transformer generation performance (#42311) * Add fused_multi_transformer op to optimize transformer generation performance (#41814) * fix fused_multi_transformer compile failed in cuda arch < sm53 (#42315) * fix ci timeout
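For reference, a minimal usage sketch of the fused decoder block this adds. The layer name and constructor arguments (`embed_dim`, `num_heads`, `dim_feedforward`, `num_layers`) are assumptions based on `paddle.incubate.nn.FusedMultiTransformer`; the fused kernel only runs on a CUDA device.

```python
import paddle
from paddle.incubate.nn import FusedMultiTransformer

# Hypothetical sizes, for illustration only; requires a GPU build of Paddle.
layer = FusedMultiTransformer(embed_dim=512, num_heads=8,
                              dim_feedforward=2048, num_layers=2)
x = paddle.randn([2, 16, 512])      # [batch, seq_len, embed_dim]
out = layer(x)                      # same shape as the input
print(out.shape)
```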
-
- 28 April 2022, 1 commit
Submitted by xiongkun
* full api fix * when out is None, go old dygraph mode * by static check * first version: support 2-inputs forwards. TODO: 1. backward 2. BroadCast 3. MultiVariable * time out -> 120
-
- 22 April 2022, 1 commit
Submitted by Baibaifan
* sharding_for_eager_tensor (#41415) * fix_sharding_copy_right (#41849)
-
- 21 April 2022, 1 commit
Submitted by ShenLiang
* fix utest * fix time
-
- 14 April 2022, 1 commit
Submitted by Ruibiao Chen
* Add yaml for matrix rank op (#41466)
* modify matrix_rank
* add matrix_rank shape
* add matrix_rank shape
* Add yaml for matrix_rank OP
* Add UT
  Co-authored-by: zhoujianqian <15205085056@163.com>
* Add yaml for eye OP (#41476)
* [cherry-pick] Add yaml config for matrix_rank, eye, deformable_conv and deformable_conv_v1 OPs
* Add yaml for deformable_conv and deformable_conv_v1 OPs
* Add UT
* Add to skipped_phi_api list for infrt
  Co-authored-by: zhoujianqian <15205085056@163.com>
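These YAML entries wire the final-state (phi) kernels to Python APIs that already exist; a quick sanity check of the user-facing calls they back:

```python
import paddle

x = paddle.eye(4)                                    # full-rank 4x4 identity
print(paddle.linalg.matrix_rank(x))                  # 4
print(paddle.linalg.matrix_rank(x, hermitian=True))  # identity is Hermitian: still 4
```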
-
- 08 April 2022, 1 commit
Submitted by 0x45f
* Use `self` as a parameter of the _hash_with_id function to avoid errors caused by hash_id reuse (#41200)
* Add fill_constant_batch_size YAML and UT (#41474)
* Switch some dy2st UT to eager mode (#41382)
* Switch some dy2st UT to eager mode
* Fix test_lstm and remove test_transformer
* Run test_resnet_v2 in old dy mode
-
- 06 April 2022, 1 commit
Submitted by Weilong Wu
* [Eager] Support test_layers's test cases switch to eager mode
* Update batch_norm _C_ops action to fix CI
* Use None instead of new EmptyTensor
* Updated var name
* Make sure to switch eager mode, fix Coverage_CI
* Remove _non_static_mode statement
* Remove batch_norm dispensable input statement
* Polish batch_norm code
* Fix CI issue
-
- 05 April 2022, 4 commits
Submitted by Haohongxiang
* support process group in dp with fleet api * update * fix uts * update
-
Submitted by RichardWooSJTU
* add nms op and batched_nms api
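A sketch of how the new API is typically called. The exposure point (`paddle.vision.ops.nms`) and keyword names are assumptions; batched/categorical NMS is reached by additionally passing per-box category indices.

```python
import paddle
from paddle.vision.ops import nms   # assumed location of the new API

boxes = paddle.to_tensor([[0., 0., 10., 10.],
                          [1., 1., 11., 11.],
                          [50., 50., 60., 60.]])
scores = paddle.to_tensor([0.9, 0.8, 0.7])

# Suppress boxes whose IoU with a higher-scoring kept box exceeds the threshold.
keep = nms(boxes, iou_threshold=0.5, scores=scores)
print(keep)   # indices of the kept boxes
```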
-
Submitted by Leo Chen
* enable new executor by default
* enable stream safe allocator
* test=document_fix;test=coverage
* do not use scope in op kernel
* fit empty program for new executor
* fix communication depend
* fix test_sync_batch_norm
* skip unsupported place
* refine datatransfer
* fit for distributed program
* fix dependency
* fix some ut
-
Submitted by Chen Weihang
-
- 04 April 2022, 3 commits
Submitted by Haohongxiang
* [Dygraph] Support sparse tensor in refactored reducer * add uts * refactor * update * fix bugs
-
Submitted by Chen Weihang
* add infershape and forward yaml * add final_state call * add base unittests * add backward yaml and test * fix without softmax test error * add cross_entropy test
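The "without softmax" path exercised by these tests corresponds to the existing `use_softmax` switch on the Python API; a small sketch of both paths:

```python
import paddle
import paddle.nn.functional as F

logits = paddle.randn([4, 10])
labels = paddle.randint(0, 10, shape=[4])

loss = F.cross_entropy(logits, labels)                          # fused softmax + NLL
probs = F.softmax(logits)
loss_no_sm = F.cross_entropy(probs, labels, use_softmax=False)  # inputs already normalized
print(float(loss), float(loss_no_sm))
```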
-
Submitted by From00
* Add yaml for reduce_sum OP * Fix CI errors * Fix CI errors * Fix CI errors * Fix CI errors
-
- 02 April 2022, 1 commit
Submitted by lilong12
-
- 30 March 2022, 2 commits
Submitted by From00
Add new APIs for GPU memory monitoring (max_memory_allocated, max_memory_reserved, memory_allocated, memory_reserved) (#38657)
* Add new API memory_reserved
* Add memory_allocated, max_memory_reserved and max_memory_allocated
* Fix CI error
* Fix CI error
* Enhance UT
* Add FLAGS_memory_stats_opt
* Add STATS macro functions
* Add StatAllocator
* Fix CI errors
* Add UT
* Fix CI errors
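The four statistics are exposed under `paddle.device.cuda`; a minimal usage sketch (device 0 is assumed, and a CUDA build is required):

```python
import paddle

if paddle.is_compiled_with_cuda():
    x = paddle.randn([1024, 1024])                         # lives on GPU 0 by default
    print(paddle.device.cuda.memory_allocated(0))          # bytes currently allocated by tensors
    print(paddle.device.cuda.memory_reserved(0))           # bytes held in the allocator pool
    print(paddle.device.cuda.max_memory_allocated(0))      # peak of memory_allocated so far
    print(paddle.device.cuda.max_memory_reserved(0))       # peak of memory_reserved so far
```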
-
Submitted by pangyoki
* support inplace in tensor_method_setitem * delete bump_inplace_version * optimize inplace unittest * fix * fix setitem bug * update eager_generator * optimize inplace unittest * little change
-
- 28 March 2022, 1 commit
Submitted by Haohongxiang
* add uts for EagerReducer * add more uts * fix bugs * fix bugs * modify * modify uts * fix bugs * update * update * update * solve conflicts and merge * add some other uts * modify time of uts * update * update * update * remove uts of resnet
-
- 25 March 2022, 1 commit
Submitted by Jiabin Yang
* refactor eager flags * fix flags error when we switch from eager to dygraph * fix ci problem * fix ci * fix ci * merge develop and fix code style * merge develop and fix code style * fix op test error * fix op test error * fix op test error * fix op test error * fix op test error * merge develop
-
- 24 March 2022, 1 commit
Submitted by lilong12
-
- 21 March 2022, 1 commit
Submitted by kuizhiqing
-
- 19 March 2022, 1 commit
Submitted by pangyoki
* [Eager] Support eager grad interface, draft version
* Support eager grad interface with allow_unused and multi startup_op
* Fix code format
* Fix allow_unused case, return PyNone if tensor not initialize
* Support output's stop_gradient related to create_graph
* Support grad exception case in eager mode, fix coverage CI
* Update ToPyObject, return PyNone if not initialize
* AccumulationNode add FLAGS_retain_grad_for_all_tensor
* Fix ci issue
* Fix CI issue
* fix, use core.eager.Tensor
* Add func SetBufferSlotRankZeros for GradTensorHolder
* Support retain_graph by using ClearTensorWrappers
* Support retain_graph by using ClearTensorWrappers
* Update retain_graph and no_grad_vars related test case
* Update code gen logic for ClearTensorWrappers
* Fix by override statement
* fix override func args
* Support retain_graph, update unit tests
* Updated ClearTensorWrappers logic
* fix grad python interface
* Use deep copy and update unit tests
* Polish code
* Polish code
* Fix CI issue, deep copy only used when user sets grad_tensors
* Fix CI, use Backward instead of RunBackward
* Fix CI, declare kernel explicitly in test file
* Polish, remove vector of TensorWrapper
* Refactor the logic of grad/backward, polish codes
* Update code after merge upstream develop
* Polish after merge upstream develop
* Update to adapt new GradNodeBase superclass
* Fix error introduced during conflict resolution
* support inplace strategy in eager_fluid state
* solve conflict
* nothing
* Update purify potential_startup_nodes logic
* Fix errors
* Polish code
* Remove useless args for ToPyObject
* Remove useless TensorWrappersSet
* fix record conflict
* Fix code-format, re-install pre-commit
* fix tensor_wrapper bug
* Fix pre-process logic for potential_startup_ops
* Update unit tests, use eager mode
* Fix conflicts
* fix unittest timeout
* little change
  Co-authored-by: Weilong Wu <veyron_wu@163.com>
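The eager grad interface mirrors the existing `paddle.grad` API, including the `allow_unused`, `retain_graph`, `create_graph` and `no_grad_vars` arguments exercised above; a small example of the behavior the tests check:

```python
import paddle

x = paddle.to_tensor(2.0, stop_gradient=False)
y = paddle.to_tensor(3.0, stop_gradient=False)
z = x * x                    # y is deliberately unused

# allow_unused=True returns None for inputs that do not influence the outputs.
gx, gy = paddle.grad(outputs=[z], inputs=[x, y],
                     retain_graph=True, allow_unused=True)
print(gx)    # 4.0
print(gy)    # None

# The graph was retained, so a second call over the same output still works.
gx2, = paddle.grad(outputs=[z], inputs=[x])
print(gx2)   # 4.0
```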
-
- 17 March 2022, 1 commit
Submitted by Haohongxiang
-
- 15 March 2022, 1 commit
Submitted by kuizhiqing
-
- 14 March 2022, 1 commit
Submitted by Zhong Hui
[multiprocessing] Add paddle.incubate.multiprocessing for sharing tensors between python processes. (#37302) * Add support for paddle.multiprocessing * move multiprocessing to incubate.
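A minimal sketch of the intended usage, under the assumption that `paddle.incubate.multiprocessing` mirrors the standard `multiprocessing` interface while registering picklers that let CPU tensors be passed between processes via shared memory:

```python
import paddle
import paddle.incubate.multiprocessing as mp   # assumed drop-in for the stdlib module

def producer(queue):
    queue.put(paddle.ones([2, 2]))             # storage is shared with the parent, not copied

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=producer, args=(q,))
    p.start()
    print(q.get())
    p.join()
```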
-
- 11 March 2022, 1 commit
Submitted by Yuang Liu
-
- 09 March 2022, 3 commits
- 07 March 2022, 1 commit
Submitted by Ming-Xu Huang
* Added cuBlasLtHandle_t to device context.
* Added fused_gemm_epilogue op. 1. Added fused_gemm_epilogue op to leverage the cuBlasLt epilogue. 2. Support fusing Act(X*Y + bias), where X's dims >= 2 and Y's dims should be 2. 3. Act currently only supports ReLU (GeLU will be added in the future).
* Added UT for fused_gemm_epilogue op.
* Added LinearAct pattern. 1. Added LinearAct into graph_pattern_detector.* to define the above pattern. 2. LinearAct is used to detect act(elementwise_add(matmul_v2(x, w), bias)). 3. act currently only supports ReLU (GeLU will be supported in the future).
* Added FuseGemmEpiloguePass. 1. Added FuseGemmEpiloguePass to handle nn.Linear + Act{ReLU} fusion (GeLU will be supported in the future). 2. Only supports matmul_v2 from nn.Linear.
* Added pybind for BuildStrategy.fuse_gemm_epilogue_.
* Added UT for fuse_gemm_epilogue_pass.
* GeLU support and EpilogueSingleton. 1. Added GeLU support to fused_gemm_epilogue op. 2. Added EpilogueSingleton to cache the auxiliary pointer. 3. Added related UTs.
* Renamed cublaslt_epilogue_op.* to gemm_epilogue_op.*.
* Added both train and infer patterns to LinearAct. 1. Added support for fwd graphs with grad_ops linking to LinearAct. 2. Added related changes to fuse_gemm_epilogue_pass for the above modification.
* Changed CUDA requirement from 11.4 to 11.6 for fuse_gemm_epilogue_pass.
* Added identity activation support to gemm_epilogue_op.
* Added Linear fusion (matmul_v2 + ele_add). 1. Added matmul_v2 + ele_add pattern to LinearActPattern. 2. Added matmul_v2 + ele_add support to fuse_gemm_epilogue_pass.
* Renamed gemm_epilogue_op.* to fused_gemm_epilogue_op.*.
* Added fused_gemm_epilogue_grad op to support backward epilogue fusion.
* Added UTs for fused_gemm_epilogue_grad_op.
* Changed attribute name in fused_gemm_epilogue_grad_op for clarity.
* Allowed DX and DBias to be dispensable in fused_gemm_epilogue_grad op.
* Added ElementwiseAdd+Matmul+Act graph pattern detection.
* Fused the backward of Linear(Act(x)). 1. Added backward fusion pass for Linear(Act(x)). 2. Added backward fusion pass for Linear(x).
* Added UTs for the backward fusion of Linear(Act(x)).
* Completed documentation of the arguments to fused_gemm_epilogue_op.
* Made arguments of some functions pass by reference.
* Modified code per review comments. 1. Made arguments of some functions pass by reference. 2. Removed redundant code. 3. Followed Google code style.
* Made 'const' code style consistent.
* Fixed random seed of Python UTs.
* Set compiling constraints for cuBlasLt. 1. Require CUDA 11.6+. 2. Remove fuse_gemm_epilogue related tests when CUDA < 11.6.
* Code review from Paddle. 1. Renamed argument is_first_gemm to without_x_gradient for clarity. 2. Applied PADDLE_THROW in fused_gemm_epilogue_op.
* Removed EpilogueSingleton; applied ReserveSpace instead for passing auxiliary pointers between FWD and BWD.
* Fixed a logical error and enhanced UTs. 1. Added act op count checking in UTs. 2. Fixed an issue when fusing the backward of ReLU(Linear(X)). 3. TODO: solve GELU fusion issues.
* Fixed Linear and GeLU fusion issues. 1. Modified graph_pattern_detector to fit both linear with gelu and linear with relu. 2. Modified the data range in UTs to allow negative values.
* Removed fused_gemm_epilogue_op.h.
* Renamed namespace pten to phi.
* Renamed arguments in fused_gemm_epilogue_op: bias -> Bias, out -> Out, reserve_space -> ReserveSpace.
* Changed EpiloguePassActivationCache to a local variable. 1. Removed the singleton in EpiloguePassActivationCache. 2. Made EpiloguePassActivationCache an argument to each pass function.
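In user terms, the pass fuses Act(x @ W + bias), i.e. nn.Linear followed by ReLU/GeLU, into a single cuBlasLt call. A sketch of turning it on through the build strategy, assuming the pybind switch lands as `BuildStrategy.fuse_gemm_epilogue` (static graph, CUDA 11.6+ per the commit message):

```python
import paddle

paddle.enable_static()

build_strategy = paddle.static.BuildStrategy()
build_strategy.fuse_gemm_epilogue = True   # assumed name of the new switch

main_prog = paddle.static.default_main_program()
compiled = paddle.static.CompiledProgram(main_prog, build_strategy=build_strategy)
# `compiled` is then passed to Executor.run() in place of the raw program.
```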
-
- 02 March 2022, 2 commits
Submitted by ziyoujiyi
* delete gloo connect retry * the_one_ps dirs reconstruct * . * . * create the_one_ps dirs * create the_one_ps dirs * create the_one_ps dirs * create the_one_ps dirs * create the_one_ps dirs * create the_one_ps dirs * the one ps dirs modify * the one ps dirs modify * the one ps dirs modify * the one ps dirs modify * refactor ps optimize * refactor ps optimize * refactor ps optimize * . * . * . * . * . * . * refactor theoneps * the_one_ps * add ps pass unittest * add ps pass unittest * ps unitest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * ps unittest frame * add cpu_async_ps_mode test * add cpu_async_ps_mode test * add cpu_async_ps_mode test * ps unittest ready * ps unittest ready * solve dist_pass init conflict * solve import CommContext error * unittest ok * implement AllocateFrom * solve setup.py.in conflict * solve conflict * solve conflict * solve conflict * . * . * cpu-async-ps minimize test ok & gpu minimize test ok * add heter 2stage unittest * add heter 2stage unittest * add heter 2stage unittest * sync/geo test ok & fix heter_worker program ok * . * new fleet desc generator * new fleet_desc builder * new fleet_desc builder * . * . * correct ps.proto compile * . Co-authored-by: zkh2016 <zhangkaihuo@baidu.com>
-
Submitted by wanghuancoder
* open eager when WITH_PYTHON, test=develop * refine, test=develop * refine, test=develop * add DWITH_PYTHON for gen_fluid_lib, test=develop
-
- 01 March 2022, 1 commit
Submitted by zhangchunle
-
- 28 February 2022, 1 commit
Submitted by zhangchunle
* update;test=cpu-py3
-
- 23 February 2022, 1 commit
Submitted by ShenLiang
* add processgroup_nccl
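ProcessGroupNCCL is the C++ backend behind eager-mode collectives and is not called directly from Python. For reference, a sketch of the collective path it serves, using existing `paddle.distributed` APIs (must be launched across multiple GPUs):

```python
# Launch with: python -m paddle.distributed.launch --gpus 0,1 demo.py
import paddle
import paddle.distributed as dist

dist.init_parallel_env()                          # sets up the NCCL process group
x = paddle.to_tensor([float(dist.get_rank())])
dist.all_reduce(x)                                # summed across ranks over NCCL
print(dist.get_rank(), x.numpy())
```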
-
- 22 February 2022, 1 commit
Submitted by YUNSHEN XIE
-
- 21 February 2022, 1 commit
Submitted by wanghuancoder
* disable some distribute test case when in CPU test env, test=develop * refine, test=develop * refine, test=develop * refine, test=develop
-
- 19 February 2022, 1 commit
Submitted by sneaxiy
* add DistributedFusedLamb op
* polish code
* fix compile error
* compatible with pten changes
* fix rocm compile error
* improve coverage
* update upstream/develop
* fix cast_with_ptr.h
* add FLAGS_distributed_lamb_divide_nranks_when_allreduce=1
* fix clip before allreduce
* add use_master_param_norm
* code polish
* fix bug
* fix ROCM ci
-