- 10 Jan 2022 (5 commits)

  - Committed by LiYuRio
  - Committed by wangxinxin08
  - Committed by andyjpaddle
    * add maxunpool3d op
    * update doc for maxunpool3d op
    * update sample code for maxunpool3d
    * add maxunpool1d op
    * update some code for maxunpool1d
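The maxunpool ops invert max pooling by scattering each pooled value back to the position recorded by the pooling indices, filling everything else with zeros. A minimal NumPy sketch of the 1-D case (function names and signatures here are illustrative, not the Paddle API):

```python
import numpy as np

def max_pool1d_with_indices(x, kernel_size):
    """Non-overlapping 1-D max pooling that also records argmax indices."""
    n = len(x) // kernel_size
    out = np.empty(n, dtype=x.dtype)
    indices = np.empty(n, dtype=np.int64)
    for i in range(n):
        window = x[i * kernel_size:(i + 1) * kernel_size]
        j = int(np.argmax(window))
        out[i] = window[j]
        indices[i] = i * kernel_size + j  # position in the original input
    return out, indices

def max_unpool1d(pooled, indices, output_size):
    """Scatter pooled values back to their recorded positions; rest stay zero."""
    out = np.zeros(output_size, dtype=pooled.dtype)
    out[indices] = pooled
    return out

x = np.array([2.0, 9.0, 1.0, 4.0, 6.0, 3.0])
pooled, idx = max_pool1d_with_indices(x, kernel_size=2)
restored = max_unpool1d(pooled, idx, output_size=len(x))
```

The 3-D variant follows the same scatter-by-index idea, just with flattened spatial indices per pooling window.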
  - Committed by Guoxia Wang
  - Committed by Guoxia Wang
- 07 Jan 2022 (10 commits)

  - Committed by YuanRisheng
    * refactor flatten grad kernel
    * fix bugs when running CI unittests
    * fix bugs when using the default GetExpectedPtenKernelArgs
    * xshape sometimes has a null holder; fix this bug
  - Committed by wangxinxin08
    * add mish operator and api
    * remove redundant code and modify grad_atol of mish unittest
    * modify mish code to be consistent with other activation implementations
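Mish is defined as mish(x) = x * tanh(softplus(x)). A minimal NumPy sketch of the formula (illustrative only, not the operator implementation added in this commit):

```python
import numpy as np

def softplus(x, threshold=20.0):
    # Numerically stable softplus: for large x, softplus(x) ~= x,
    # so skip the exp to avoid overflow.
    return np.where(x > threshold, x, np.log1p(np.exp(np.minimum(x, threshold))))

def mish(x):
    """Mish activation: x * tanh(softplus(x))."""
    x = np.asarray(x, dtype=np.float64)
    return x * np.tanh(softplus(x))
```

The `threshold` cutoff mirrors the common trick activation kernels use to keep the softplus branch from overflowing in low precision.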
  - Committed by zhangbo9674
    * add multi tensor for adam
    * add merged_adam op
    * refine code
    * refine adam compute logic
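A multi-tensor (merged) Adam kernel applies the standard Adam update to a whole list of parameters in a single call, amortizing per-tensor dispatch overhead. A NumPy sketch of the idea (the names and signature are illustrative, not the merged_adam op's interface):

```python
import numpy as np

def merged_adam_step(params, grads, moms1, moms2, lr,
                     beta1=0.9, beta2=0.999, eps=1e-8, t=1):
    """One bias-corrected Adam update applied in-place across a list of
    parameter tensors, updating the first/second moment buffers as it goes."""
    bc1 = 1.0 - beta1 ** t  # bias correction for the first moment
    bc2 = 1.0 - beta2 ** t  # bias correction for the second moment
    for p, g, m, v in zip(params, grads, moms1, moms2):
        m[...] = beta1 * m + (1.0 - beta1) * g
        v[...] = beta2 * v + (1.0 - beta2) * g * g
        p[...] -= lr * (m / bc1) / (np.sqrt(v / bc2) + eps)

# usage: one call updates every tensor in the parameter list
p = np.array([1.0]); g = np.array([0.5])
m = np.zeros(1); v = np.zeros(1)
merged_adam_step([p], [g], [m], [v], lr=0.1)
```

A fused GPU kernel would walk all tensors inside one launch; the per-element math is the same.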
  - Committed by Aurelius84
  - Committed by niuliling123
  - Committed by guguguzi
    * delete the modification of dygraph
    * CI
    * check CI
    * modify the return value of get_lr
  - Committed by wanghuancoder
  - Committed by LiYuRio
  - Committed by Leo Chen
  - Committed by Li Min
    * add fp16 support for scale/bias in the fused_layernorm_residual_dropout_bias op
- 06 Jan 2022 (18 commits)

  - Committed by Leo Chen
  - Committed by YuanRisheng
    * move mid api and rename kernel
    * use empty kernel
  - Committed by jakpiase
    * reuploaded files
    * changed year from 2021 to 2022
    * minor change
    * fixed requirements.txt file
  - Committed by JZ-LIANG
  - Committed by Thomas Young
  - Committed by chentianyu03
    * move eigen/reduce.h impl into cpu/reduce.h
    * ctx to dev_ctx
  - Committed by wanghuancoder
  - Committed by Zhanlue Yang
    * handled special sum_grad_op code gen in Eager Dygraph
    * fixed merge issues
  - Committed by baoachun
  - Committed by tianshuo78520a
  - Committed by minghaoBD
  - Committed by wenbin
    * bug fix
    * remove blank
  - Committed by Roc
  - Committed by limingshu
    * fix the wrong filename
    * first commit
  - Committed by zyfncg
    * adjust the full kernel
    * remove creation.h
    * use Empty to create tensor in full
  - Committed by YuanRisheng
    * move gpu_impl of elementwise kernel
    * change copyright to 2022
  - Committed by jakpiase
    * added exp activation and use_dst_for_bwd kernels
    * CI rerun
    * minor change
- 05 Jan 2022 (7 commits)

  - Committed by Lijunhui
    * init commit: new elem_mul_grad
    * add template specialization for complex in multiply
    * reply to review comments
    * correct dx and dy computation when T is complex
    * update to new ReduceFunctor
    * mul-output broadcast
    * call functions
    * call functions with comments
    * remove comments
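For z = x * y on complex tensors, the gradients are dx = grad_out * conj(y) and dy = grad_out * conj(x); when an input was broadcast in the forward pass, its gradient must be summed back down to that input's shape. A NumPy sketch of this logic (illustrative only, not the kernel added in this commit):

```python
import numpy as np

def elem_mul_grad(x, y, grad_out):
    """Gradients of z = x * y for real or complex inputs, with the gradient
    of a broadcast input reduced (summed) back to its original shape."""
    def reduce_to(g, shape):
        # Sum over leading axes that broadcasting prepended...
        while g.ndim > len(shape):
            g = g.sum(axis=0)
        # ...and over axes the input held at size 1.
        for axis, size in enumerate(shape):
            if size == 1 and g.shape[axis] != 1:
                g = g.sum(axis=axis, keepdims=True)
        return g
    dx = reduce_to(grad_out * np.conj(y), x.shape)
    dy = reduce_to(grad_out * np.conj(x), y.shape)
    return dx, dy
```

On real inputs `np.conj` is a no-op, so the same code path covers both cases.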
  - Committed by From00
    * fix bug of GetAllocatorInterfaceTest
    * replace some shared_ptr with unique_ptr
    * change Alloc call
  - Committed by Jiaqi Liu
    * make post training quant API support dataloader
  - Committed by joanna.wozna.intel
  - Committed by Qi Li
  - Committed by TTerror
  - Committed by wanghuancoder
    * rearranged Eager AutoCodeGen directory structure
    * removed USE_OP in Eager AutoCodeGen
    * enabled generation for operators without Grad/Inputs/Outputs
    * resolved operators without input
    * fixed merge conflicts
    * enabled Eager AutoCodeGen for 10+ more operators
    * refactored Eager AutoCodeGen with more organized helper objects
    * enabled Eager AutoCodeGen for operators with multiple OpBases
    * adjusted Eager AutoCodeGen to enable passing output Tensor as input argument
    * handled dispensable Inputs/Outputs in Eager AutoCodeGen
    * adjusted function generation/call between Python-C API & Dygraph API
    * synchronized auto-generated Python-C API with Dygraph forward functions
    * support more eager tensor api
    * fix merge compile error; fix compile error and fit develop code
    * support pure CPU and pure cpu test
    * fix some logic errors in eager_mode
    * support _varbase_creator in eager mode
    * added safe_initialized interface to EagerTensor for use in processing dispensable inputs
    * support multiple constructors for eager tensor
    * add place related code; polish code
    * specific randint with dtype of int64
    * skip core.eager when in inference, test=develop
    * call RetainGrad after running the forward kernel, test=develop
    * support dygraph util, meta, guard test
    * eager test case; support inference test
    * refine test and fix initializer failure
    * modify EagerTensor patch method
    * add EagerTensor.clear_gradient, test=develop
    * call monkey_patch_varbase in _test_eager_guard, test=develop
    * split clear_gradient into clear_gradient and zero_grads, test=develop

    Co-authored-by: jim19930609 <jim19930609@gmail.com>
    Co-authored-by: JiabinYang <360788950@qq.com>