- 22 Dec 2021, 2 commits
-
-
Committed by Guoxia Wang
-
Committed by joanna.wozna.intel
-
- 21 Dec 2021, 10 commits
-
-
Committed by zyfncg
* add inplace_map for trace_op in pybind
* fix inplace problem of setitem
* refactor the param format of trace_op
Co-authored-by: pangyoki <pangyoki@126.com>
-
Committed by baoachun
* update seqconv_eltadd_relu_fuse_pass ut
* update ut
* update ut
* update ut
-
Committed by baoachun
* update squared_mat_sub_fuse_pass ut
* update ut
* update ut
-
Committed by Yuang Liu
-
Committed by baoachun
* add seqpool_cvm_concat_fuse_pass ut
* rename ut name
-
Committed by sneaxiy
* mean first version
* fix scalar mean
* add fp16 dtype for api
-
Committed by yeliang2258
* fix timeout bug
* update
-
Committed by baoachun
* update repeated_fc_relu_fuse_pass ut
* update ut
-
Committed by Haohongxiang
* update
* fix bugs
* modify code style
* fix bugs of _get_global_group
-
Committed by heliqi
* add timeout
* add timeout
* PassAutoScan base_line use same config
* try run base_line
* fix dropout Mask of output attr error
* fix dropout Mask of output attr error
-
- 20 Dec 2021, 8 commits
-
-
Committed by sneaxiy
-
Committed by baoachun
* add mkldnn conv_transpose_bias fuse pass ut
* update conv_transpose_bias_mkldnn_fuse_pass ut
* update conv_transpose_bias_mkldnn_fuse_pass ut
* update conv_transpose_bias_mkldnn_fuse_pass ut
* restrict conv2d data_format in conv_transpose_bias_mkldnn_fuse_pass
* update ut timeout setting
* update ut
-
Committed by sneaxiy
* support FP16 for more ops
* add amp list tests
* refine reduce_mean_grad
* fix OP benchmark ci
* fix fp16 reduce_mean
* update ut, but still have some problems
* remove mean/reduce_mean fp16 kernel
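A minimal sketch of how the newly FP16-enabled kernels would typically be exercised through Paddle's AMP entry point. The specific op coverage lives in the amp lists mentioned above; the code below only assumes the standard `paddle.amp.auto_cast` API, not anything introduced by this change.

```python
import paddle

linear = paddle.nn.Linear(16, 16)
x = paddle.rand([4, 16])

# Under auto_cast, ops on the FP16 white list run in float16; reduce_mean
# (touched by this change) is one of the candidates.
with paddle.amp.auto_cast():
    out = linear(x)
    loss = out.mean()
```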
-
Committed by heliqi
* add matmul_scale matmul_v2_scale fuse pass
* add scaletensor judge
* modify var name
* add timeout notest;test=coverage
* fix error commit
* fix use_mkldnn attr
* fix use_mkldnn attr
-
Committed by 0x45f
-
Committed by zhangbo9674
* add multi_tensor for momentum and clear_grads for optimizer
* fix bug for dygraph
* add unittest
* refine comment
* add param_group
* refine regularization logic
* del clear_grads
* add clear_grads
* add dispensable check of None
* refine clear_grad
* fix build bug
* refine code by comment
* refine code
* add multi tensor check
* refine param_group update
* add multi tensor for static mode
* refine comments
* delete useless comma for momentum
* refine comment for momentum
* refine code by comment
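A rough dygraph usage sketch of the features named in this commit (parameter groups, `clear_grad`, multi-tensor momentum). The `use_multi_tensor` keyword and the parameter-group dict layout are assumptions inferred from the commit notes, not code taken from the PR.

```python
import paddle

linear = paddle.nn.Linear(10, 10)
x = paddle.rand([4, 10])

opt = paddle.optimizer.Momentum(
    learning_rate=0.1,
    momentum=0.9,
    # one param_group with its own weight_decay (assumed dict layout)
    parameters=[{"params": linear.parameters(), "weight_decay": 0.001}],
    use_multi_tensor=True,  # assumed flag: fuse per-parameter updates into one kernel
)

loss = linear(x).mean()
loss.backward()
opt.step()
opt.clear_grad()  # the clear_grads path mentioned in the commit
```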
-
Committed by Yuang Liu
-
Committed by Feiyu Chan
-
- 19 Dec 2021, 1 commit
-
-
Committed by Baibaifan
-
- 18 Dec 2021, 2 commits
-
-
Committed by yeliang2258
* add test_conv_act_mkldnn_fuse_pass
* update cmakelist
* fix cmakelist
* fix timeout
* fix timeout
* fix timeout
* fix
-
Committed by Feiyu Chan
* add complex op and `paddle.complex`.
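An illustrative call to the `paddle.complex` API added here: it builds a complex tensor from separate real and imaginary parts (shapes chosen arbitrarily for the example).

```python
import paddle

real = paddle.rand([2, 3], dtype="float32")
imag = paddle.rand([2, 3], dtype="float32")

# float32 inputs produce a complex64 tensor
z = paddle.complex(real, imag)
print(z.dtype)
```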
-
- 17 Dec 2021, 9 commits
-
-
Committed by caozhou
* add planner
* add planner
* add cost model update
* add relaunch update
* update process_group
* fix error
* add unittest
* update unittest
* update cost model
* avoid api problem
-
Committed by Jiabin Yang
* support more eager tensor api
* support multiple constructor for eager tensor
* add place related code
* polish code
* specific randint with dtype of int64
* Support pure cpu test
* refine test in pure cpu
* refine test in pure cpu
-
Committed by sneaxiy
* support multi precision update for LAMB
* hide some api
* fix ci uts
* fix lamb output of dygraph
* remove some changes to some PR
* try to fix Py3 CI compile error
* fix test_imperative_optimizer, add lars ut, add layer_norm ut
* fix ut, fix format
* fix ut
* fix windows ci
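A hedged sketch of what "multi precision update for LAMB" usually means in practice: the optimizer keeps an FP32 master copy of FP16 parameters so the update itself runs in full precision. The `multi_precision` keyword below is an assumption inferred from the commit message, not verified against the PR.

```python
import paddle

model = paddle.nn.Linear(8, 8)

opt = paddle.optimizer.Lamb(
    learning_rate=1e-3,
    lamb_weight_decay=0.01,
    parameters=model.parameters(),
    multi_precision=True,  # assumed flag: keep FP32 master weights for FP16 params
)
```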
-
Committed by feng_shuai
-
Committed by kuizhiqing
-
Committed by heliqi
* add timeout
* add timeout
-
Committed by zhaoyingli
* add gpt modeling
* update file name
-
Committed by Yuang Liu
-
Committed by WangXi
-
- 16 Dec 2021, 8 commits
-
-
Committed by Sing_chan
-
Committed by xiaoting
* add activation
* update activation_op
* add unittest for activation
* fix acosh for init, test=develop
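A minimal check of the inverse hyperbolic activation mentioned in this commit; it assumes the Python-level wrapper `paddle.acosh` exposed alongside the operator.

```python
import paddle

x = paddle.to_tensor([1.0, 2.0, 10.0])
# acosh(1) == 0, so the first element of the result is 0.
print(paddle.acosh(x))
```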
-
Committed by feng_shuai
* conv_transpose_eltwiseadd_bn_fuse_pass
* change timeout
* add TIMEOUT
* add random num for group and dilation
* change PassCompat
-
Committed by yeliang2258
* add test for conv_elementwise_add2_act_fuse_pass and conv_elementwise_add_act_fuse_pass
* Add conv_eltwiseadd_bn_fuse_pass test and fix test_conv_elementwise_addX_act_fuse_pass
* add tests for conv_act_mkldnn_fuse_pass
* add test for conv_bias_mkldnn_fuse_pass
* update code
* add conv_act_mkldnn_fuse_pass for relu, relu6, swish, leaky_relu
* update test
* update
* update bug
* update
* update pattern_detector
* fix test_conv_eltwiseadd_bn_fuse_pass
* add diff display notest;test=windows_ci_inference
* fix
* remove test_conv_act_mkldnn_fuse_pass.py
* fix
-
Committed by wuhuanzhou
-
Committed by Jiabin Yang
* support eager switch system
* polish code
-
Committed by LJQ❤️
Add elementwise_fmax and elementwise_fmin operators
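A small illustration of the fmax/fmin semantics these operators follow (NumPy-style: a NaN operand is ignored when the other side is a number). The Python wrapper names `paddle.fmax` / `paddle.fmin` are assumed to accompany the new operators.

```python
import paddle

x = paddle.to_tensor([1.0, float("nan"), 3.0])
y = paddle.to_tensor([2.0, 5.0, float("nan")])

print(paddle.fmax(x, y))  # [2., 5., 3.] -- NaN is skipped in favor of the number
print(paddle.fmin(x, y))  # [1., 5., 3.]
```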
-
Committed by Liu-xiandong
Add key_padding_mask and attn_mask to the sparse_attention API.
1. key_padding_mask is a tensor of shape [batch_size, seq_len], and attn_mask is a tensor of shape [seq_len, seq_len]. Both masks share the dtype of Q, K, and V (float32 or float64). A value of 0 in a mask means that position must be masked out.
2. The changed files are mainly paddle/fluid/operators/sparse_attention_op.cu and python/paddle/fluid/tests/unittests/test_sparse_attention_op.py. sparse_attention has three stages: sddmm, softmax, and dsd; adding the mask operation only requires modifying the softmax stage and has no effect on the other two. Related tests have been added to cover the mask behavior.
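An illustrative construction of the two masks with the shapes and convention described above (0 marks a position to be masked). Whether these are passed as `key_padding_mask` / `attn_mask` keyword arguments of `paddle.nn.functional.sparse_attention` is an assumption; the shapes and the zero-means-masked rule come from the commit message.

```python
import paddle

batch_size, seq_len = 2, 8

# [batch_size, seq_len]: mask the last two (padding) tokens of every sequence
key_padding_mask = paddle.concat(
    [paddle.ones([batch_size, seq_len - 2]), paddle.zeros([batch_size, 2])], axis=1
)

# [seq_len, seq_len]: e.g. a causal mask, zeros above the diagonal are masked
attn_mask = paddle.tril(paddle.ones([seq_len, seq_len]))
```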
-