- 10 January 2022 (3 commits)

Committed by andyjpaddle
* add maxunpool3d op
* update doc for maxunpool3d op
* update doc for maxunpool3d op
* update doc for maxunpool3d op
* update sample code for maxunpool3d
* add maxunpool1d op
* update some code for maxunpool1d
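A minimal usage sketch of the new unpooling API, assuming it is exposed as paddle.nn.functional.max_unpool3d and paired with the indices that max_pool3d returns when return_mask=True:

```python
import paddle
import paddle.nn.functional as F

# Pool with return_mask=True to get the argmax indices that
# max_unpool3d needs in order to scatter values back.
x = paddle.rand([1, 1, 4, 4, 4])
pooled, indices = F.max_pool3d(x, kernel_size=2, stride=2, return_mask=True)

# Unpool: maxima return to their original positions, the rest is zero.
restored = F.max_unpool3d(pooled, indices, kernel_size=2, stride=2)
print(restored.shape)  # [1, 1, 4, 4, 4]
```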

Committed by Guoxia Wang

Committed by Guoxia Wang

- 07 January 2022 (7 commits)

Committed by YuanRisheng
* refactor flatten grad kernel
* fix bugs when running CI unittests
* fix bugs when using the default GetExpectedPtenKernelArgs
* xshape sometimes has a null holder; fix this bug
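For context on why the kernel carries xshape: the backward pass of flatten is only a reshape of the output gradient back to the input's shape. A plain numpy sketch of that identity (the names here are illustrative, not the kernel's):

```python
import numpy as np

x = np.random.rand(2, 3, 4)
grad_out = np.random.rand(2 * 3 * 4)  # gradient w.r.t. flatten(x)

# flatten is a pure layout change, so its backward pass only needs
# the original shape (the role xshape plays in the kernel).
grad_x = grad_out.reshape(x.shape)
assert grad_x.shape == x.shape
```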

Committed by wangxinxin08
* add mish operator and api
* remove redundant code and modify grad_atol of the mish unittest
* modify mish code to be consistent with other activation implementations
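Mish is the smooth activation mish(x) = x * tanh(softplus(x)); a short sketch assuming the Python API is exposed as paddle.nn.functional.mish:

```python
import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([-1.0, 0.0, 2.0])
y = F.mish(x)

# Same value computed from the definition, for comparison.
y_ref = x * paddle.tanh(F.softplus(x))
print(paddle.allclose(y, y_ref))  # True, up to float tolerance
```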

Committed by zhangbo9674
* add multi tensor for adam
* add merged_adam op
* refine code
* refine adam compute logic
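The per-parameter update that the multi-tensor/merged kernel batches across many parameters is the standard Adam rule; a plain numpy sketch (hyperparameter names are illustrative):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    # merged_adam applies this same rule to many parameters in one
    # kernel launch instead of launching once per tensor.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```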

Committed by niuliling123

Committed by LiYuRio

Committed by Leo Chen

Committed by Li Min
* Add fp16 support for scale/bias in the fused_layernorm_residual_dropout_bias op.
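An unfused reference for what this kernel fuses into one pass; the exact composition is an assumption based on the op name, and all names below are illustrative:

```python
import paddle
import paddle.nn.functional as F

def layernorm_residual_dropout_bias(x, residual, bias, scale, shift, p=0.1):
    # Assumed composition: layer_norm(residual + dropout(x + bias));
    # the commit adds fp16 support for the scale/shift parameters.
    h = F.dropout(x + bias, p=p)
    return F.layer_norm(h + residual,
                        normalized_shape=h.shape[-1:],
                        weight=scale, bias=shift)
```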

- 06 January 2022 (12 commits)

Committed by Leo Chen

Committed by YuanRisheng
* move mid api and rename kernel
* use empty kernel

Committed by Thomas Young

Committed by chentianyu03
* move eigen/reduce.h impl into cpu/reduce.h
* ctx to dev_ctx

Committed by wanghuancoder

Committed by Zhanlue Yang
* Handled special sum_grad_op code gen in Eager Dygraph
* Fixed merge issues

Committed by wenbin
* bug fix
* remove blank

Committed by limingshu
* fix the wrong filename
* first commit

Committed by zyfncg
* adjust the full kernel
* remove creation.h
* use Empty to create tensor in full
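Per the last bullet, the refactor builds the full kernel on top of the empty kernel, i.e. allocate-then-fill rather than a dedicated allocation path; numpy shows the same decomposition:

```python
import numpy as np

x = np.empty((2, 3), dtype=np.float32)    # "empty": allocate only
x.fill(1.5)                               # fill step
assert (x == np.full((2, 3), 1.5)).all()  # "full" in one call
```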

Committed by YuanRisheng
* move gpu_impl of elementwise kernel
* change copyright to 2022

Committed by jakpiase
* added exp activation and use_dst_for_bwd kernels
* CI RERUN
* minor change
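use_dst_for_bwd reflects that for exp the gradient can be computed from the saved forward output alone, since d exp(x)/dx = exp(x) = out; a numpy sketch of the identity:

```python
import numpy as np

x = np.random.rand(4)
out = np.exp(x)            # forward result ("dst")
grad_out = np.ones_like(x)

# Backward needs only the saved output, not the input x.
grad_x = grad_out * out    # == grad_out * np.exp(x)
```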

- 05 January 2022 (15 commits)

Committed by Lijunhui
* initial commit: new elem_mul_grad
* add template specialization for complex in multiply
* address review comments
* correct dx and dy computation when T is complex
* address review comments
* update to the new ReduceFunctor
* mul-output broadcast
* call functions
* call functions with comments
* remove comments
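The complex-valued fix above: for out = x * y with complex x and y, the gradients (in the conjugate convention deep-learning frameworks typically use) are dx = dout * conj(y) and dy = dout * conj(x); a numpy sketch:

```python
import numpy as np

x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 1 + 4j])
dout = np.ones_like(x)

# For complex T, each gradient uses the conjugate of the other operand.
dx = dout * np.conj(y)
dy = dout * np.conj(x)
```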

Committed by From00
* Fix a bug in GetAllocatorInterfaceTest
* Replace some shared_ptr with unique_ptr
* Change the Alloc call

Committed by joanna.wozna.intel

Committed by TTerror

Committed by wanghuancoder
* Rearranged Eager AutoCodeGen directory structure
* Removed USE_OP in Eager AutoCodeGen
* Enabled generation for operators without Grad/Inputs/Outputs
* Resolved operators without input
* Fixed merge conflicts
* Enabled Eager AutoCodeGen for 10+ more operators
* Refactored Eager AutoCodeGen with more organized helper objects
* Enabled Eager AutoCodeGen for operators with multiple OpBases
* Adjusted Eager AutoCodeGen to enable passing an output tensor as an input argument
* Handled dispensable inputs/outputs in Eager AutoCodeGen
* Adjusted function generation/calls between the Python-C API & Dygraph API
* Synchronized auto-generated Python-C API with Dygraph forward functions
* support more eager tensor api
* fix merge compile error
* fix compile error and fit develop code
* support pure CPU
* fix some logic errors in eager_mode
* support _varbase_creator in eager mode
* Added safe_initialized interface to EagerTensor for use in processing dispensable inputs
* for eager mode
* refine
* support multiple constructors for eager tensor
* add place related code
* polish code
* specific randint with dtype of int64
* Support pure cpu test
* eager logic
* refine test in pure cpu
* eager logic
* eager logic
* eager logic, test=develop
* skip core.eager when in inference, test=develop
* refine, test=develop
* refine, test=develop
* call RetainGrad after running the forward kernel, test=develop
* refine, test=develop
* support dygraph util, meta, guard tests
* eager test case
* support inference test
* refine tests and fix initializer failures
* modify eagertensor patch method
* add eagertensor.clear_gradient, test=develop
* refine, test=develop
* refine, test=develop
* refine, test=develop
* call monkey_patch_varbase in _test_eager_guard, test=develop
* split clear_gradient into clear_gradient and zero_grads, test=develop
* refine, test=develop
* refine, test=develop
* refine, test=develop

Co-authored-by: jim19930609 <jim19930609@gmail.com>
Co-authored-by: JiabinYang <360788950@qq.com>

Committed by wangxinxin08

Committed by chentianyu03
* change 'math' to 'math_kernel'
* fix compile bugs
* merge develop
* fix compile bugs
* fix compile bugs
* move reduce files by new rule
* add set header
* format code style
* merge develop and fix conflict
* merge develop and fix conflict

Co-authored-by: YuanRisheng <yuanrisheng@baidu.com>

Committed by Chen Weihang
* polish infermeta filename
* polish infermeta filename

Committed by jakpiase
* fix for matmul_v2 broadcasting
* fix for output shape not being broadcasted
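matmul_v2 follows numpy-style batch broadcasting: the trailing two dims multiply as matrices while the leading batch dims broadcast, which is the output-shape behavior the fix restores; a numpy sketch:

```python
import numpy as np

a = np.random.rand(1, 5, 2, 3)  # batch dims (1, 5)
b = np.random.rand(4, 1, 3, 6)  # batch dims (4, 1)

# Batch dims broadcast to (4, 5); the matrix dims contract over 3.
out = np.matmul(a, b)
print(out.shape)  # (4, 5, 2, 6)
```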

Committed by Wilber
* c_api supports std::string
* update
* update
* add NOTE
* fix delete error

Committed by joanna.wozna.intel
* Quantize nearest_interp and nearest_interp_v2
* Check if avx_core is supported
* Add depthwise_conv2d to the supported quantization list

Committed by TTerror
* add huber_loss for kunlun
* update xpu.cmake
* update unittests
* update unittests
* update elementwise_add
* update elementwise_add
* update elementwise_add
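Huber loss is quadratic below a delta threshold and linear above it, which keeps it differentiable yet robust to outliers; a numpy sketch (the delta name is illustrative):

```python
import numpy as np

def huber_loss(pred, label, delta=1.0):
    r = np.abs(pred - label)
    return np.where(r <= delta,
                    0.5 * r ** 2,               # quadratic region
                    delta * (r - 0.5 * delta))  # linear region
```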

Committed by Weilong Wu
* Support EagerTensor init with kwargs
* Updated comments
* Updated unit test cases
* Refactored InitTensor-related code to reduce duplicate code
* Updated the error reporting msg
* Updated VLOG msg
* Merge develop and update the EagerTensor init func
* Polish switch cases, reduce some code
* Add a SyntaxError unit test case
* Refactor the related initialization funcs of EagerTensor
* Remove ParseStopGradient, ParseZeroCopy and ParsePersistable; construct ParseBooleanArgs instead
* Updated error msg to pass CI
* Updated the PADDLE_ENFORCE error type

Committed by crystal
* add elementwise div
* move the mul and div grad functors
* Combine multiple CUDA kernels
* Update the reduce interface call
* add multi-output
* add multi-output div
* add branch judgment
* Package branch
* Combine the x and y functions into one
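The div gradients that the moved functors compute: for out = x / y, dx = dout / y and dy = -dout * x / y**2 (equivalently -dout * out / y); a numpy sketch:

```python
import numpy as np

x = np.array([1.0, 4.0, 9.0])
y = np.array([2.0, 2.0, 3.0])
dout = np.ones_like(x)

out = x / y
dx = dout / y            # d(x/y)/dx = 1/y
dy = -dout * x / y**2    # d(x/y)/dy = -x/y^2 == -out/y
```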

Committed by 王明冬

- 04 January 2022 (3 commits)

Committed by niuliling123
Add OpFunctor and replace cast, scale, clip, bce_loss and abs_grad with elementwise_no_broadcast (#38500)
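For context, the listed ops are all purely elementwise (no broadcasting between operands), which is what lets them take the no-broadcast fast path; bce_loss, for instance, is elementwise over predictions and labels. A numpy sketch of its formula:

```python
import numpy as np

def bce_loss(p, y, eps=1e-12):
    # Elementwise binary cross-entropy; p are probabilities in (0, 1).
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))
```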

Committed by Qi Li

Committed by Aurelius84
* Fix memcpyD2H sync behavior with other streams
* add wait
* add wait
* add wait