- 25 November 2021, 5 commits

Submitted by WangXi

Submitted by Zhanlue Yang
* Added GradTensorHolder to Eager Dygraph
* Added accumulation codes to Eager Dygraph
* Fix windows-ci issue
* Fix NPU-CI issue
* Fixed CI-Coverage issue

Submitted by LiYuRio

Submitted by xiongkun
* clear LoDTensorArray
* fix bugs
* fix
* fix gpu

Submitted by Wangzheee
- 24 November 2021, 15 commits

Submitted by piotrekobiIntel
* Add second batch of deprecated mkldnn namespace and macro changes
* Unlock CI
* Fix temporary namespace alias placing

Submitted by Yuang Liu

Submitted by Zhanlue Yang
* Added EagerUtils to Eager Dygraph
* Purified include dependencies for global_utils
* Fixed merge conflicts

Submitted by Aurelius84

Submitted by Leo Chen

Submitted by YuanRisheng
* elementwise_mul refactor
* perfect code in test
* delete redundant code
* fix bugs when run test_multiply
* adjust the location of macro
* fix bugs when run ci

Submitted by zyfncg
* add scalar and scalar_array
* remove DenseTensor include from Scalar and ScalarArray
* remove inner header from scalar_array
* refactor the method of fill_constant and add some comment

Submitted by Wangzheee
* matmul_convert_int8
* matmul_convert_int8
* matmulconvert_int8
* Matmul_int8_convert: tensor*tensor
* Matmul_int8_convert: tensor*tensor
* Matmul_int8_convert: tensor*tensor

Submitted by zhaoyingli
* adapt auto search
* adapt auto search
* fix matmulv2 compatibility
* del debug

Submitted by Aurelius84

Submitted by 0x45f
* run dy2stat pure fp16 in Linear model
* no use self._pure_fp16_inputs
* add test and fix Adam error in dy2stat pure fp16 training
* use paddle.optimizer.Adam
* run test in gpu
* change test time for CI
* enlarge atol for test_resnet_pure_fp16
* refine code and enlarge atol
* make custom_white_list and custom_black_list take effect for AMP and pure fp16
* check tracer is not None
* use default atol
* change filter_size
* change atol and add some NOTE

Submitted by zhupengyang

Submitted by feng_shuai

Submitted by WangXi

Submitted by Jiabin Yang
* Add EagerTensor and tests
* remove useless enforce
* remove comment in cmake
* support autograd meta
* support grad node info test
* support grad_node_info
* add more edge test
* remove Python.h
* add tensor wrapper with tests
* support compute require grad and stop gradient
* support sync methods and global utils
* support pure cpu test
* refine error msg
* refine error msg
* refine error info
* fix npu error
- 23 November 2021, 19 commits

Submitted by pangyoki
* fix inplace bug
* fix custom grad input error
* add unittest
* fix inplace bug

Submitted by Qi Li
* [XPU] Reorganize xpu device codes in platform, test=develop
* fix xpu_header.h, test=develop

Submitted by Li Min
Add support for bias being None in the fused_attention op.

Submitted by wanghuancoder

Submitted by Yuang Liu

Submitted by Feiyu Chan

Submitted by wangxinxin08
* modify code about fp16 of dcnv2 trt

Submitted by Zhanlue Yang

Submitted by Leo Chen
* sync scope and variable_scope when init executor
* set var_desc for new var

Submitted by Chen Weihang

Submitted by Wangzheee
* fix_nearest
* fix_nearest
* fix_nearest
* fix_nearest

Submitted by zmx

Submitted by sneaxiy
* enhance scatter err msg check
* fix ci error

Submitted by YuanRisheng
* elementwise_div refactor
* fix compile bugs in windows ci

Submitted by Jiabin Yang
* Add EagerTensor and tests
* remove useless enforce
* remove comment in cmake
* support autograd meta
* support grad node info test
* support grad_node_info
* add more edge test
* remove Python.h
* refine error code
* add error type in error msg
* given default null name for tensor

Submitted by ronnywang
* Added HCCL backend support in dynamic graph mode
* fix segmentation fault
* add ut

Submitted by Zhanlue Yang
* Bug fix for snapshotting VariableWrapper with initialized tensor but empty allocation
* Added unittest for inplace & clear_gradient

Submitted by Chen Weihang
* adapt to inference api dir for pten
* fix conflict with develop
* fix test_egr_ds_eager_tensor compile failed

Submitted by Aurelius84
* Add transfer_layout/dtype op
* clean useless codes
* fix unused var
* add optest in white.txt
* split into data_transfer.cc
* fix cmake
* modify according to reviewer comment
* replace cast_op with transfer_dtype_op
- 22 November 2021, 1 commit

Submitted by Feiyu Chan
* disable copying of datatype when sharing buffer between two tensors.
* fix for mkldnn operator kernels (elementwise_add, sum, softplus, softmax, scale, activation): manually set the data type when reusing memory by ShareBufferWith.
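The last entry above describes a behavior change where sharing a tensor's buffer no longer carries the data type along, so a kernel that reuses the memory must set the dtype explicitly afterward. The following is a minimal C++ sketch of that general pattern only; MiniTensor, ShareBufferWith, set of member names, and the DType enum are hypothetical stand-ins for illustration, not Paddle's actual classes or signatures.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical dtype tag, not Paddle's real type system.
enum class DType { kUndefined, kFloat32, kBFloat16 };

// Hypothetical tensor stand-in: a shared raw allocation plus a dtype tag.
struct MiniTensor {
  std::shared_ptr<std::vector<char>> buffer;  // underlying allocation
  DType dtype = DType::kUndefined;

  // Share only the allocation; deliberately do NOT copy the dtype,
  // mirroring the "disable copying of datatype" behavior described above.
  void ShareBufferWith(const MiniTensor& other) { buffer = other.buffer; }
};

int main() {
  MiniTensor src;
  src.buffer = std::make_shared<std::vector<char>>(64);
  src.dtype = DType::kFloat32;

  MiniTensor dst;
  dst.ShareBufferWith(src);                 // reuse src's memory
  assert(dst.dtype == DType::kUndefined);   // dtype was not carried over
  dst.dtype = src.dtype;                    // caller sets the data type manually
  assert(dst.buffer == src.buffer);         // same allocation is now shared
  return 0;
}
```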