- 10 Jan 2022, 22 commits
-
-
Committed by Yulong Ao
* Add the backward support for QR
* Remove unnecessary comments
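The commit above adds the backward (gradient) pass for QR decomposition. A minimal sketch of exercising it through paddle.linalg.qr in dygraph mode; the shapes and the scalar loss are illustrative, not taken from the PR:

```python
import paddle

# Differentiate through a reduced QR decomposition.
x = paddle.randn([4, 3])
x.stop_gradient = False        # track gradients w.r.t. x

q, r = paddle.linalg.qr(x)     # mode='reduced' by default
loss = q.sum() + r.sum()       # an arbitrary scalar objective
loss.backward()                # relies on the newly added QR backward

print(x.grad)                  # gradient has the same shape as x
```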
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by HydrogenSulfate
-
Committed by Aganlengzi
This reverts commit ee813e34.
-
Committed by Leo Chen
-
Committed by Chen Weihang
* unify infer_shape func calling
* support set grad infer shape fn for custom op
* unify infershape in new executor and eager
* remove todo comment
* revert infershape in operator
-
Committed by wangxinxin08
-
Committed by andyjpaddle
* add maxunpool3d op
* update doc for maxunpool3d op
* update doc for maxunpool3d op
* update doc for maxunpool3d op
* update sample code for maxunpool3d
* add maxunpool1d op
* update some code for maxunpool1d
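The max_unpool ops added above invert a max-pooling step using the recorded pooling indices. A hedged sketch, assuming the functional forms paddle.nn.functional.max_pool3d / max_unpool3d with the argument names as I recall them; nothing here is quoted from the PR:

```python
import paddle
import paddle.nn.functional as F

x = paddle.randn([1, 1, 4, 4, 4])                    # NCDHW input
# return_mask=True also returns the argmax indices needed for unpooling
pooled, indices = F.max_pool3d(x, kernel_size=2, stride=2, return_mask=True)
restored = F.max_unpool3d(pooled, indices, kernel_size=2, stride=2)
print(restored.shape)                                # [1, 1, 4, 4, 4]
```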
-
Committed by Guoxia Wang
-
- 07 Jan 2022, 5 commits
-
-
Committed by wangxinxin08
* add mish operator and api
* remove redundant code and modify grad_atol of mish unittest
* modify mish code to be consistent with other activation implementation
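Mish is the smooth activation mish(x) = x * tanh(softplus(x)). A small sketch, assuming the API is exposed as paddle.nn.functional.mish and paddle.nn.Mish:

```python
import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([-1.0, 0.0, 2.5])
y = F.mish(x)                       # functional form
layer = paddle.nn.Mish()            # layer form, e.g. inside a Sequential
print(y.numpy(), layer(x).numpy())  # both compute x * tanh(softplus(x))
```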
-
Committed by zhangbo9674
* add multi tensor for adam
* add merged_adam op
* refine code
* refine adam compute logic
-
Committed by Aurelius84
-
Committed by guguguzi
* delete the modification of dygraph
* CI
* check CI
* modify the return value of get_lr
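For context on the get_lr change above, a hedged sketch of reading the current learning rate back from an optimizer driven by an LR scheduler; the StepDecay schedule and the SGD optimizer are illustrative choices, not taken from this PR:

```python
import paddle

linear = paddle.nn.Linear(4, 4)
sched = paddle.optimizer.lr.StepDecay(learning_rate=0.1, step_size=1, gamma=0.5)
opt = paddle.optimizer.SGD(learning_rate=sched, parameters=linear.parameters())

print(opt.get_lr())    # current learning rate as a Python float
sched.step()           # advance the schedule by one epoch
print(opt.get_lr())    # reflects the decayed rate
```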
-
Committed by wanghuancoder
-
- 06 Jan 2022, 8 commits
-
-
Committed by jakpiase
* reuploaded files
* Changed year from 2021 to 2022
* minor change
* fixed requirements.txt file
-
Committed by JZ-LIANG
-
Committed by Thomas Young
-
Committed by Zhanlue Yang
* Handled special sum_grad_op code gen in Eager Dygraph
* Fixed merge issues
-
Committed by baoachun
-
Committed by minghaoBD
-
Committed by Roc
-
Committed by jakpiase
* added exp activation and use_dst_for_bwd kernels
* CI RERUN
* minor change
-
- 05 Jan 2022, 5 commits
-
-
Committed by Jiaqi Liu
* make post training quant API support dataloader
-
Committed by Qi Li
-
Committed by wanghuancoder
* Rearranged Eager AutoCodeGen directory structure
* Removed USE_OP in Eager AutoCodeGen
* Enabled generation for Operators without Grad/Inputs/Outputs
* Resolved operators without input
* Fixed merge conflicts
* Enabled Eager AutoCodeGen for 10+ more operators
* Refactored Eager AutoCodeGen with more organized helper objects
* Enabled Eager AutoCodeGen for operators with multiple OpBases
* Adjusted Eager AutoCodeGen to Enable Passing Output Tensor as Input Argument
* Handled Dispensable Inputs/Outputs in Eager AutoCodeGen
* Adjusted function generation/call between Python-C API & Dygraph API
* Synchronized auto-generated Python-C API with Dygraph Forward Functions
* support more eager tensor api
* fix merge compile error
* fix compile error and fit develop code
* support pure CPU
* fix some logic error in eager_mode
* support _varbase_creator in eager mode
* Added safe_initialized interface to EagerTensor for use in processing dispensable inputs
* for eager mode
* refine
* support multiple constructor for eager tensor
* add place related code
* polish code
* specific randint with dtype of int64
* Support pure cpu test
* eager logic
* refine test in pure cpu
* eager logic
* eager logic
* eager logic, test=develop
* skip core.eager when in inference, test=develop
* refine, test=develop
* refine, test=develop
* call RetainGrad after run forward kernel, test=develop
* refine, test=develop
* support dygraph util, meta, guard test
* eager test case
* support inference test
* refine test and fix initializer failed
* modify eagertensor patch method
* add eagertensor.clear_grandint, test=develop
* refine, test=develop
* refine, test=develop
* refine, test=develop
* call monkey_patch_varbase in _test_eager_guard, test=develop
* split clear_gradient to clear_gradient and zero_grads, test=develop
* refine, test=develop
* refine, test=develop
* refine, test=develop

Co-authored-by: jim19930609 <jim19930609@gmail.com>
Co-authored-by: JiabinYang <360788950@qq.com>
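The commit message above repeatedly mentions _test_eager_guard. A heavily hedged sketch of how that internal, test-only helper was typically used at the time; the import path and behaviour are assumptions based on the names in the message, not on this PR's diff:

```python
import paddle
from paddle.fluid.framework import _test_eager_guard  # internal test-only helper

# Temporarily switch into the experimental eager dygraph mode for a test body.
with _test_eager_guard():
    t = paddle.to_tensor([1.0, 2.0])
    print(type(t))   # the eager tensor type while the guard is active
```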
-
Committed by wawltor
* add the examples for the mm
* fix the document of paddle.mm
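Since the commit above adds documentation examples for paddle.mm, here is an independent minimal sketch of the same call; the shapes are chosen purely for illustration:

```python
import paddle

a = paddle.rand([2, 3])
b = paddle.rand([3, 4])
out = paddle.mm(a, b)      # plain matrix multiply, no batch broadcasting
print(out.shape)           # [2, 4]
```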
-
Committed by jakpiase
* fix for matmul_v2 broadcasting
* fix for output shape not broadcasted
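The fix above concerns broadcasting in matmul_v2, the kernel behind paddle.matmul. A sketch of the batch-dimension broadcasting involved, shown at the Python API level rather than in the oneDNN kernel the PR actually touches:

```python
import paddle

x = paddle.rand([2, 1, 3, 4])       # batch dims (2, 1)
y = paddle.rand([5, 4, 6])          # batch dim  (5,)
out = paddle.matmul(x, y)           # batch dims broadcast to (2, 5)
print(out.shape)                    # [2, 5, 3, 6]
```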
-