- 17 Dec 2021 (1 commit)

  Committed by sneaxiy:
  * support multi precision update for LAMB
  * hide some api
  * fix ci uts
  * fix lamb output of dygraph
  * remove some changes to some PR
  * try to fix Py3 CI compile error
  * fix test_imperative_optimizer, add lars ut, add layer_norm ut
  * fix ut, fix format
  * fix ut
  * fix windows ci
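The "multi precision update" above refers to keeping an FP32 master copy of each FP16 parameter, so that updates much smaller than FP16's rounding step are not lost. A minimal plain-Python sketch of the idea (illustrative only, not Paddle's LAMB kernel; `mp_step` is a hypothetical name):

```python
import struct

def to_fp16(x):
    # round-trip a Python float through IEEE-754 half precision
    return struct.unpack('e', struct.pack('e', x))[0]

def mp_step(master_w, grad, lr):
    # the update happens in FP32; the FP16 copy only feeds the forward pass
    master_w -= lr * grad
    return master_w, to_fp16(master_w)

# naive FP16 update: 1.0 - 1e-4 rounds straight back to 1.0
w16 = to_fp16(to_fp16(1.0) - 1.0 * 1e-4)

# multi-precision update accumulates the small step correctly
w32, _ = mp_step(1.0, 1e-4, 1.0)
```

With the FP16 spacing near 1.0 being about 4.9e-4, the naive update makes no progress at all, while the master copy moves as expected.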
- 03 Dec 2021 (1 commit)

  Committed by ronnywang:
  * refine structure for cuda and rocm
  * update
  * update
  * update
  * update
- 30 Nov 2021 (1 commit)

  Committed by zhangbo9674:
  * add regularization and Nesterov for merged_momentum
  * refine unittest for use_nesterov attr
  * refine op check
  * refine code
  * fix bug
  * refine code of regularization_flag
  * delete useless code
- 29 Nov 2021 (1 commit)

  Committed by piotrekobiIntel
- 27 Nov 2021 (1 commit)

  Committed by Aganlengzi:
  * [NPU] reorganization for device API abstraction
  * [NPU] delete old files
  * [NPU] fix npu_collective_helper
  * [NPU] fix collective_helper
  * [NPU] fix ut
  * [NPU] mod memory allocation and hccl_helper
  * [NPU] fix place_type
  * [NPU] split enforce.h
  * move acl* call into npu_info
  * merge conflict
  * fix merge
  * merge conflict
  * merge conflict
- 17 Nov 2021 (1 commit)

  Committed by Leo Chen:
  * copy beta pow to same place when skip_update=1
  * fix xpu
- 20 Oct 2021 (1 commit)

  Committed by Zeng Jinle
- 19 Oct 2021 (1 commit)

  Committed by Zeng Jinle:
  * add pow2_warmup op
  * remove contrib __all__
  * add AttrT
  * rename
  * follow comments
  * fix duplicate PADDLE_RESTRICT
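The pow2_warmup op above computes a learning-rate schedule on device. The exact formula is not given in this log; a common shape for such schedules (assumed here, with a hypothetical helper name) is a linear warmup followed by quadratic decay toward an end value:

```python
def pow2_warmup_lr(step, warmup_steps, total_steps, base_lr, end_lr):
    # hypothetical sketch: ramp linearly to base_lr, then decay as (1 - t)^2
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    frac = 1.0 - (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return end_lr + (base_lr - end_lr) * frac * frac

start = pow2_warmup_lr(0, 10, 100, 0.1, 0.0)
peak = pow2_warmup_lr(10, 10, 100, 0.1, 0.0)
end = pow2_warmup_lr(100, 10, 100, 0.1, 0.0)
```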
- 17 Oct 2021 (1 commit)

  Committed by Zeng Jinle
- 15 Oct 2021 (2 commits)

  Committed by Zeng Jinle:
  * remove wrong restrict
  * remove master_param_out __restrict__
  * update

  Committed by Zeng Jinle
- 14 Oct 2021 (3 commits)

  Committed by Zeng Jinle

  Committed by Zeng Jinle

  Committed by Zeng Jinle:
  * merge momentum ops
  * update
  * add ut to improve coverage
  * remove optimizer change
  * fix error msg
  * update ut
  * add __restrict__ for CUDA
  * update ut
  * move merged_momentum_op to optimizer dir
  * fix coverage
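The "merge momentum ops" commit fuses many small per-parameter momentum updates into a single kernel, avoiding one launch per parameter. A plain-Python sketch of the update the fused kernel performs (not the CUDA implementation itself):

```python
def merged_momentum_step(params, grads, velocities, mu, lr):
    # one fused pass over all (param, grad, velocity) triples
    # instead of a separate momentum-op launch for each parameter
    for i in range(len(params)):
        velocities[i] = mu * velocities[i] + grads[i]
        params[i] -= lr * velocities[i]

params = [1.0, 2.0]
vels = [0.0, 0.0]
merged_momentum_step(params, [0.5, 0.5], vels, mu=0.9, lr=0.1)
```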
- 13 Oct 2021 (1 commit)

  Committed by limingshu:
  * A leap of try for cudaLaunchCooperativeKernel
  * fix bugs
  * Totally replace the lars cuda kernel
  * Fix bugs
  * a test for lars merge
  * Adding lars_op_momentum infer_shape
  * Fix codes
  * use avg_numel instead of max_numel to acquire grid num
  * modify unittest files about lars op
  * Finally converge when merged-lars works
  * fix ctest files
  * add merged_operation kernel when cuda version is older than 11
  * Fix code style
  * fix ctest failure
  * fix error
  * fix all ctest error and change lars compute code of cpu
  * fix bugs on v100
  * revert python modification about lars
  * revert python modification codes
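For context, LARS (Layer-wise Adaptive Rate Scaling) gives each layer its own effective learning rate via a trust ratio built from the norms of the weights and gradients. A minimal sketch of that ratio (illustrative only; the helper name and epsilon handling are assumptions, not Paddle's kernel):

```python
def lars_local_lr(lr, lars_coeff, w_norm, g_norm, weight_decay, eps=1e-9):
    # LARS trust ratio: per-layer lr proportional to
    # ||w|| / (||g|| + weight_decay * ||w||)
    return lr * lars_coeff * w_norm / (g_norm + weight_decay * w_norm + eps)

local = lars_local_lr(1.0, 0.001, 10.0, 1.0, 0.0)
```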
- 27 Sep 2021 (1 commit)

  Committed by limingshu:
  * A leap of try for cudaLaunchCooperativeKernel
  * fix bugs
  * Totally replace the lars cuda kernel
  * Fix bugs
  * fix code according to comments
  * fix codes according to review comments
  * adding some function overload
  * relocate the power operation
- 21 Sep 2021 (1 commit)

  Committed by Adam Osewski:
  * Create stateful OneDNNAXPYHandler object. This makes it possible to call it multiple times without recreating the oneDNN primitives every time.
  * Prepare SGDOpKernel to reuse its implementation from OneDNN kernel.
  * OneDNN SGD kernel.
  * Update call to use new OneDNNAXPYHandler object api.
  * Setup seed in proper place.
  * Enable OneDNN kernel only for single case.
  * For dense param and sparse grad.
  * Small refactor.
  * Enable oneDNN by op attr or by cmd line flag.
  * Use int64_t type for number of elements.
  * Support dense param and grad from OneDNN kernel.
  * Enable SGD OneDNN kernel when use MP BF16 optimizer.
  * Force non-copyable/movable OneDNNAXPYHandler.
  * Reuse OneDNNAXPYHandler for sparse tensors in SUM op.
  * Fix SFINAE rules.
  * Remove recording event inside AXPY.
  * Get rid of internal primitive caching.
  * Stop use PP cache mechanism to store mem and primitive obj.
  * Handler obj store and reuse needed desc & prim
  * Do not derive from MKLDNNHandlerT
- 14 Sep 2021 (1 commit)

  Committed by zhaoyingli:
  * add layerwise learning rate for adamw
  * fix format
  * add unittest
  * add NotImplementedError
  * add gpu unittest
  * update gpuplace
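Layerwise learning rates for AdamW typically decay the base rate geometrically from the output layer back toward the input layers. A sketch of that per-layer scaling (the helper name and exact exponent convention are assumptions for illustration, not the committed API):

```python
def layerwise_lr(base_lr, decay_ratio, n_layers, depth):
    # layers nearer the output (larger depth) keep more of the base lr;
    # earlier layers are decayed geometrically by decay_ratio per layer
    return base_lr * decay_ratio ** (n_layers - depth)

top = layerwise_lr(1e-3, 0.5, 4, 4)      # output layer: full base lr
bottom = layerwise_lr(1e-3, 0.5, 4, 2)   # two layers down: quartered
```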
- 13 Sep 2021 (1 commit)

  Committed by taixiurong
- 03 Sep 2021 (1 commit)

  Committed by TTerror
- 27 Aug 2021 (1 commit)

  Committed by Guoxia Wang:
  * sparse_momentum_op is used to save w@GRAD memory for gather_op when gather from a large parameter
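The point of sparse_momentum_op is that when only a few rows of a large embedding-style parameter were gathered, only those rows need a momentum update, so no dense w@GRAD tensor has to be materialized. A plain-Python sketch of the row-wise update (illustrative, not the op's kernel):

```python
def sparse_momentum_step(param, velocity, grad_rows, index, mu, lr):
    # only the gathered rows carry a gradient, so only those rows of the
    # (possibly huge) parameter and its velocity are read and written
    for k, row in enumerate(index):
        velocity[row] = mu * velocity[row] + grad_rows[k]
        param[row] -= lr * velocity[row]

param = [1.0, 1.0, 1.0]
vel = [0.0, 0.0, 0.0]
sparse_momentum_step(param, vel, grad_rows=[0.5], index=[1], mu=0.9, lr=0.1)
```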
- 25 Aug 2021 (1 commit)

  Committed by zhaoyingli
- 23 Aug 2021 (1 commit)

  Committed by zhaoyingli:
  * adamw support cuda
  * adamw support cuda
- 18 Aug 2021 (1 commit)

  Committed by lzzyzlbb:
  * [npu] add rmsprop op
- 17 Aug 2021 (1 commit)

  Committed by Roc
- 11 Aug 2021 (1 commit)

  Committed by ronnywang:
  * add momentum_op_npu and test
  * update
  * fix hang
- 22 Jul 2021 (1 commit)

  Committed by Leo Chen:
  * copy found_inf to cpu in advance to improve performance
  * add npu test
  * add npu test
  * refine code
  * refine memcpy op
  * fix adam
- 14 Jul 2021 (1 commit)

  Committed by Leo Chen:
  * adam add input SkipUpdate
  * add unittest
  * add npu unittest
  * fix xpu compile
  * remove param stream
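The SkipUpdate input lets Adam pass its state through untouched for a step, which matters under AMP when the gradients contain inf/nan. A minimal sketch of the control flow (illustrative only; bias correction omitted for brevity, names hypothetical):

```python
def adam_step(p, m, v, g, lr, b1, b2, eps, skip_update=False):
    # when skip_update is set (e.g. loss scaling found inf/nan),
    # parameter and moments pass through unchanged instead of being
    # corrupted by a bad gradient
    if skip_update:
        return p, m, v
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    return p - lr * m / (v ** 0.5 + eps), m, v

same = adam_step(1.0, 0.0, 0.0, float('inf'), 0.1, 0.9, 0.999, 1e-8,
                 skip_update=True)
stepped, _, _ = adam_step(1.0, 0.0, 0.0, 1.0, 0.1, 0.9, 0.999, 1e-8)
```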
- 21 Jun 2021 (1 commit)

  Committed by lidanqing:
  * Add oneDNN AXPY handler.
  * Add fallback for small tensors.
  * Fix ifdefs
  * Remove unnecessary namespace prefixes and add missing headers.
  * Guard handler_axpy with proper ifdefs.
  * Compilation of this function is possible only when Paddle is not built with CUDA nor HIP.
  * Move AXPY handler code to separate files.
  * Use oneDNN AXPY handler in SGD op.
  * Use axpy handler only when Paddle is built with oneDNN.
  * Add test for SUM BF16 with big rows.
  * Fix SFINAE rules for elementwise_add_to.
  * Add test case for SGD with big rows.
  * update
  * update

  Co-authored-by: Adam Osewski <adam.osewski@intel.com>
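AXPY is the BLAS-style primitive the handler wraps: y ← αx + y. Dense SGD then reduces to a single axpy with α = −lr. A plain-Python sketch of the operation (illustrative; the real handler dispatches to a oneDNN primitive):

```python
def axpy(alpha, x, y):
    # BLAS-style y <- alpha * x + y, applied element-wise in place
    for i in range(len(x)):
        y[i] += alpha * x[i]

# SGD's dense update is just axpy(-lr, grad, param)
param = [1.0, 2.0]
axpy(-0.1, [1.0, 1.0], param)
```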
- 03 Jun 2021 (1 commit)

  Committed by Yuang Liu
- 26 May 2021 (1 commit)

  Committed by Leo Chen:
  * refine ~npuOpRunner
  * implement destructor and forbid copy
  * use reference to avoid copy
  * use const reference
  * relax adam precision
  * fix top_k
- 13 May 2021 (1 commit)

  Committed by Leo Chen:
  * add use_global_beta_pow
  * add use_global_beta_pow
  * update npu kernel
  * update python api
  * refine code
  * add ut for use_global_beta_pow
  * fix npu kernel
  * add ut for api
  * add ut for exception
  * add ut for save/load
- 10 May 2021 (1 commit)

  Committed by Leo Chen
- 28 Apr 2021 (1 commit)

  Committed by Leo Chen:
  * add input EpsilonTensor for adam
  * update python api
  * add unit test
  * add npu test
  * add more ut
- 23 Apr 2021 (1 commit)

  Committed by Leo Chen
- 19 Apr 2021 (1 commit)

  Committed by Leo Chen:
  * [NPU] support GarbageCollector for npu (#31874)
  * support GarbageCollector for npu
  * fix typo
  * fix gather_grad
  * disable NPUDefaultStreamGarbageCollector on NPU
  * [NPU] support npu for memcpy op (#31808)
  * support npu for memcpy op
  * add ut
  * fix ut
  * fix typo
  * [NPU] fix bug of using temp vector (#31963)
  * fix bug when beta1_pow on cpu (#31995)
  * [NPU] support npu profiler (#31684)
  * support npu profiler
  * add python api
  * fix bugs
  * add wrapper for incomplete type
  * update profile proto
  * record npu wait
  * add xpu placeholder
  * fix adam (#32016)
  * [NPU] enable async copy and add wait before sync operation (#31956)
  * enable async copy and add wait before sync operation
  * remove unnecessary wait
  * add FillNpuTensorWithConstant
  * refine
  * fix fill_constant
  * make TensorFromVector/TensorToVector sync
  * [NPU] Support dataloader on npu place. (#31867)
  * [NPU] Wait on NPUPlace (#32086)
  * [NPU] fix cast op (#32121)
  * fix npu kernel of cast op to handle casting to same dtype
  * add comments
  * [NPU] support cann 20.3 (#32044)
  * fix compile problem on cann 20.3
  * fix ut
  * fix test_mul
  * fix check_finite_and_scale
  * fix lookup_table_v2_grad
  * fix cmake
  * support print op
  * [NPU] Support npu save load (#31893)
  * support save load for NPU
  * add save load npu unittest
  * support np.array transform in NPU
  * fix errors
  * delete dygraph in unittest
  * add Wait
  * fix unittest
  * fix review comment
  * fix unittest problem
  * fix little problem
  * change aclrtSynchronizeDevice to aclrtSynchronizeStream for better performance (#32196)
  * refine code
  * fix NPUDeviceContext in all c++ unittest (#32198)
  * refine log
  * [NPU] Remove TensorFromVector and avoid sync copy in npu op kernel for better performance (#31994)
  * change TensorFromVector to FillNpuTensorWithConstant
  * fix ignored api
  * delete extra unittest
  * fix little error
  * fix update_loss_scaling_op_npu and check_finite_and_unscale_op_npu
  * change TensorCopySync to TensorCopy
  * delete useless Wait and add StreamWait
  * fix npu_stream error
  * fix check_finite_and_unscale_op_npu TensorCopy
  * only save stream wait
  * fix NPUDeviceContext in all c++ unittest
  * delete wait
  * delete useless unittest file (#32206)
  * Fix op test (#32231)
  * fix conditional block (#32243)
  * fix adam bug again (#32246)
  * fix compile
  * fix ut
  * fix ut

  Co-authored-by: pangyoki <pangyoki@126.com>, zhiqiu <chenqiuliang@baidu.com>, liym27 <33742067+liym27@users.noreply.github.com>
- 15 Apr 2021 (1 commit)

  Committed by zhang wenhui:
  * merge 31065
  * Fix typo of selected_npus (#31230)
  * merge 31249
  * [NPU] Support npu op pow and pow grad (#31247)
  * [NPU] Support npu op: (1) pow (2) pow_grad
  * Support fp16
  * Fix pow npu fp16 test (#31256)
  * support list of list attribute for NPU (#31299)
  * fix compile problem
  * fix reference
  * [NPU] Support npu op: (1) slice (2) slice_grad (#31275)
  * fix reading flags from env (#31329)
  * merge 31347
  * [NPU] Support npu op layer_norm and layer_norm_grad (#31310)
  * init commit, add layer_norm npu kernel
  * fix typo
  * add unittest
  * fix bug
  * refine ut
  * [NPU] add npu kernel for equal op (#31393)
  * refine code
  * add more ut
  * update year
  * [NPU] Support npu kernel for shape op (#31427)
  * add shape npu
  * fix
  * fix endif (#31431)
  * Fix pow, use fillD instead of broadcast (#31433)
  * Fix pow, refine code (#31440)
  * fix cmake of cryptopp to avoid downloading every time (#31451)
  * [NPU] squeeze and unsqueeze op for ascend (#31452) Co-authored-by: root <xiayanming@baidu.com>
  * Support npu kernel for gather op (#31458)
  * add gather npu op
  * code review done
  * update python new line
  * precommit
  * fix review
  * del commit
  * [NPU] add scale op for npu (#31499)
  * add scale npu
  * fix
  * Support TensorFromVector, TensorToVector of bool type (#31518)
  * add ut
  * fix compile problem
  * [NPU] support npu kernel for fill_constant op (#31521)
  * add fill_constant npu
  * fix
  * cherry-pick 31422, solve conflict
  * [NPU] Support npu kernel for matmul op (#31544)
  * add matmulv2_npu
  * add matmul
  * [NPU] Support npu op elementwise_mul and elementwise_mul_grad (#31571)
  * [NPU] Support npu op elementwise_max (#31574)
  * [NPU] add relu op for npu (#31515)
  * add relu npu
  * fixed
  * fix
  * [NPU] Support npu kernel for reshape2 op (#31524)
  * add reshape2 npu
  * [NPU] Support npu kernel for gather op, fix bug (#31541)
  * update gather_grad
  * fix bug
  * [NPU] Support npu kernel for amp_check_finite_and_unscale_npu op (#31457)
  * support EnforceNotMet exception
  * fix exception bug
  * modify python unittest
  * update c++ unittest
  * fix review
  * [NPU] accuracy op (#31492)
  * fix license
  * add test and fix bug
  * [NPU] add Assign OP (#31561) Co-authored-by: oyjxer <1728722986@qq.com>
  * add assign op
  * add test assign npu test
  * dele if def
  * [NPU] fix npu op elementwise_mul_grad (#31592)
  * [NPU] Support npu op gelu and gelu_grad (#31530)
  * [NPU] fix assign cmake (#31595)
  * fix gather_grad bug (#31607)
  * [NPU] add range op (#31560)
  * fix codestyle; call GetSize directly
  * [NPU] Support npu op elementwise_div and elementwise_div_grad (#31573)
  * [NPU] Support npu op log, log_grad, sqrt, sqrt_grad, square, tanh and tanh_grad (#31600)
  * [NPU] Support npu op logicalnot_op (#31534)
  * [NPU] Support npu op elementwise_min (#31575)
  * [NPU] Support npu op elementwise_pow (#31576)
  * [NPU] Support npu op table_lookup_v2 and table_lookup_v2_grad (#31399)
  * [npu] support npu kernel `table_lookup_v2`
  * clean up
  * +python test
  * +cmake
  * remove int8 kernel + python unittest for fp16
  * [NPU] support npu kernel for `less_than` (#31327)
  * remove int* kernel
  * cleanup
  * [NPU] Support npu kernel scatter op (#31624)
  * Add more test
  * [NPU] fix allocator min chunk size (#31632)
  * [NPU] Support NPU kernel cast op (#31635) Co-authored-by: frankwhzhang <frankwhzhang@126.com>
  * [NPU] add npu kernel for sgd (#31639)
  * [NPU] Support NPU kernel for reduce_sum op v2 (#31620)
  * add reduce_sum
  * fix broadcastd
  * fix test
  * add unsqueeze in reduce_sum
  * add template
  * add unittest for keep_dim
  * test reduce_all
  * [NPU] add npu kernel for adam (#31644)
  * refine code
  * disable test
  * modify atol
  * [NPU] Support npu kernel for mul op (#31584)
  * add mul
  * add test mul
  * [NPU] add npu kernel for softmax_with_cross_entropy (#31656)
  * init
  * fix bugs
  * [NPU] add npu kernel for mean Op (#31562)
  * update mean op
  * give a better test activation
  * Revert "[NPU] add npu kernel for mean Op (#31562)" (#31665) This reverts commit 468ac699.
  * [NPU] Add TensorCopy to NPU kernel for reduce_sum op (#31667)
  * update unittest
  * add TensorCopy in npu grad kernel
  * [NPU] Support npu op `expand` (#31405)
  * [NPU] fix shape of dx in mul_grad (#31675)
  * fix shape of dx
  * refine code
  * [NPU] add Increment op (#31563)
  * add increment
  * update test increment op inplace
  * update increment op
  * increment b = 2
  * [NPU] add NPU add topk (#31596)
  * add topk op
  * add cmake
  * update topk npu op
  * refactor func
  * fix test not go npu TopKD bug
  * NPUPlace(4) to NPUPlace(0)
  * update comment
  * [NPU] Support NPU kernel sum op (#31671)
  * [NPU] npu support `transpose` (#31486)
  * cherry-pick 31564, solve conflict
  * [NPU] Fix bug: Fix calculation errors of pow grad npu kernel (#31699)
  * [NPU] Support testing grad of NPU ops in OpTest (#31697)
  * [NPU] Support NPU kernel of stack op (#31711)
  * [NPU] Remove redundant ctest of top_k_op_npu_test (#31718)
  * [NPU] fix reshape npu op kernel (#31726)
  * rename npu op file
  * fix reshape
  * [NPU] change transpose to transpose2 (#31734)
  * fix bug
  * [NPU] Support mean npu kernel (#31729)
  * [NPU] fix some bugs of npu op (#31739)
  * fix softmax
  * fix mean
  * fix lookup_table_v2
  * [NPU] Fix npu kernel elementwise_div_grad (#31753)
  * [NPU] fix the grad kernel diff bug of gather op (#31757)
  * fix gather grad kernel diff
  * fix gather review bug
  * [NPU] Fix reshape test & add grad test (#31776)
  * [NPU] support fp16 for npu accuracy op (#31797)
  * [NPU] support list of tensor input (#31801)
  * support list of tensor as npu input
  * add comment
  * fix typo
  * [NPU] add npu kernel for concat op (#31695)
  * refine concat_grad
  * [NPU] Support npu kernel for op elementwise_floordiv (#31822)
  * [NPU] fix bug of lookup_table_v2_grad (#31834)
  * [NPU] support default stream (#31510)
  * [NPU] support mixed precision input for npu layer norm (#31847)
  * fix layer_norm npu kernel
  * [NPU] Support npu kernel for update_loss_scaling op (#31830)
  * add update_loss_scaling_npu NPU kernel
  * change TensorFromVec to Memset
  * fix compile problem (#31850)
  * [NPU] support npu for conditional_block op (#31854)
  * [NPU] Add int dtype kernel for reshape2 op (#31864)
  * [NPU] fix some op bugs (#31855)
  * fix some bugs
  * follow comments
  * fix log level
  * add ut
  * [NPU] support fp16 of input for api pow (#31871)
  * [NPU] add npu kernel for truncated_gaussian_random op (#31654)
  * add todo
  * add sync
  * fix concat_grad
  * fix compile
  * fix code style
  * fix code
  * Fix op test (#32231)
  * fix conditional block (#32243)
  * fix style code

  Co-authored-by: xiayanming <41795079@qq.com>, Leo Chen <chenqiuliang@baidu.com>, liym27 <33742067+liym27@users.noreply.github.com>, Reventon_L <luyuxiang1994@qq.com>, root <xiayanming@baidu.com>, oyjxer <1728722986@qq.com>, yinhaofeng <66763551+yinhaofeng@users.noreply.github.com>, OleNet <olenet@126.com>, Meiyim <chen_xuyi@outlook.com>, oyxuan-11 <963650125@qq.com>, pangyoki <pangyoki@126.com>
- 14 Apr 2021 (1 commit)

  Committed by Adam Osewski:
  * Initial draft for SGD BF16 kernel.
  * Unit tests for SGD with BF16 data type.
  * Add VLOG message to SGD BF16 op CPU kernel.
  * Enhance error messages and error types.
  * Refactor SGD op kernels to leverage some common code.
  * Make easier to add new kernel invoke code.
  * Fix SGD op kernel for sparse grad.
  * Unify quotes style.
  * Fix error for ROCM compilation.
  * Use specialized PADDLE_ENFORCE_xx functions.
- 31 Mar 2021 (1 commit)

  Committed by taixiurong
- 02 Mar 2021 (1 commit)

  Committed by Gradie:
  * lamb_op_xpu;test=kunlun
  * modify lamb_op_xpu.cc;test=kunlun
  * delete atol lamb_op_xpu; test=kunlun
  * update xpu.cmake;test=kunlun
  * test_error 1e-5,lamb_op_xpu;test=kunlun
  * error1e-5,lamb_op_xpu,test=kunlun
  * delete atol lamb_xpu;test=kunlun
  * modify atol,lamb_op_xpu;test=kunlun
  * lamb_op_xpu;test=kunlun
  * lamb_op_xpu, XPUOptest;test=kunlun
  * lamb_op_xpu;test=kunlun
  * lamb_op_xpu,modify xpu_cmake; test=kunlun
  * lamb_op_xpu;test=kunlun
  * lamb_op_xpu,modify xpucmake;test=kunlun