- 17 Dec 2021, 6 commits
- Committed by feng_shuai
- Committed by kuizhiqing
- Committed by heliqi
* add timeout
* add timeout
- Committed by zhaoyingli
* add gpt modeling
* update file name
- Committed by Yuang Liu
- Committed by WangXi
- 16 Dec 2021, 9 commits
- Committed by Sing_chan
- Committed by xiaoting
* add activation
* update activation_op
* add unit test for activation
* fix acosh for init, test=develop
- Committed by feng_shuai
* conv_transpose_eltwiseadd_bn_fuse_pass
* change timeout
* add TIMEOUT
* add random num for group and dilation
* change PassCompat
- Committed by yeliang2258
* add test for conv_elementwise_add2_act_fuse_pass and conv_elementwise_add_act_fuse_pass
* Add conv_eltwiseadd_bn_fuse_pass test and fix test_conv_elementwise_addX_act_fuse_pass
* add tests for conv_act_mkldnn_fuse_pass
* add test for conv_bias_mkldnn_fuse_pass
* update code
* add conv_act_mkldnn_fuse_pass for relu, relu6, swish, leaky_relu
* update test
* update
* update bug
* update
* update pattern_detector
* fix test_conv_eltwiseadd_bn_fuse_pass
* add diff display notest;test=windows_ci_inference
* fix
* remove test_conv_act_mkldnn_fuse_pass.py
* fix
- Committed by wuhuanzhou
- Committed by Jiabin Yang
* support eager switch system
* polish code
- Committed by LJQ❤️
Add elementwise_fmax and elementwise_fmin operators
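The fmax/fmin naming suggests the NaN-ignoring semantics of C's fmax/fmin (which NumPy's np.fmax/np.fmin also follow): where exactly one operand is NaN, the non-NaN operand is returned. A minimal NumPy sketch of that assumed behavior, not Paddle code:

```python
import numpy as np

# Assumed fmax/fmin semantics: NaN in one operand is ignored,
# the other operand wins (unlike plain max/min, which propagate NaN).
x = np.array([1.0, np.nan, 3.0])
y = np.array([2.0, 5.0, np.nan])

fmax_result = np.fmax(x, y)  # [2. 5. 3.]
fmin_result = np.fmin(x, y)  # [1. 5. 3.]
print(fmax_result, fmin_result)
```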
- Committed by Liu-xiandong
Add key_padding_mask and attn_mask to the sparse_attention API.
1. key_padding_mask is a tensor of shape [batch_size, seq_len], and attn_mask is a tensor of shape [seq_len, seq_len]. Both masks share the dtype of Q, K, and V (float32 or float64). A value of 0 in a mask means that position is masked out.
2. The changed files are mainly paddle/fluid/operators/sparse_attention_op.cu and python/paddle/fluid/tests/unittests/test_sparse_attention_op.py. sparse_attention has three parts: sddmm, softmax, and dsd; adding the mask only requires modifying the softmax and has no effect on the other two parts. Related tests have been added to cover the mask behavior.
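The mask convention described above (0 = masked) can be illustrated with a plain NumPy masked softmax. This is a sketch for a single batch item with a 1-D key mask, not the actual sddmm/softmax/dsd CUDA kernels:

```python
import numpy as np

def masked_softmax(scores, key_padding_mask):
    # scores: [seq_len, seq_len] attention logits.
    # key_padding_mask: [seq_len], 1 = keep, 0 = masked (per the convention above).
    # Masked key positions get -inf logits, hence zero probability after softmax.
    masked = np.where(key_padding_mask[None, :] == 0, -np.inf, scores)
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.zeros((3, 3))
mask = np.array([1, 1, 0])  # last key position is padding
probs = masked_softmax(scores, mask)
print(probs[0])  # [0.5 0.5 0. ]
```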
- Committed by Li Min
* Add float16 type for scatter op.
* Add fp16 test for scatter op.
* Add int and int64 support for scatter_grad on gpu.
* Add int and int64 for check_variable_and_dtype routine.
* Minors.
* Code format.
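As a rough NumPy illustration of the overwrite-style scatter semantics whose dtype support this commit extends to float16 (out[index[i]] = updates[i]); this is not the Paddle kernel:

```python
import numpy as np

# Scatter in overwrite mode: each updates row replaces the row of x
# selected by index. Here the tensors are float16, the dtype the commit adds.
x = np.zeros(5, dtype=np.float16)
index = np.array([1, 3])
updates = np.array([2.0, 4.0], dtype=np.float16)

out = x.copy()
out[index] = updates
print(out)  # [0. 2. 0. 4. 0.]
```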
- 15 Dec 2021, 9 commits
- Committed by baoachun
* update mkldnn scale_matmul fuse pass ut
* update mkldnn scale_matmul_fuse_pass ut
- Committed by baoachun
* add mkldnn conv3d_bias_mkldnn_fuse_pass ut
* update conv3d_bias_mkldnn_fuse_pass ut
* disable conv3d_bias_mkldnn_fuse_pass
- Committed by YuanRisheng
* fix bugs in TranslatedLayer when switching between train/eval
* fix python coverage
- Committed by 0x45f
* fix error in tensor_shape_transformer: previously, in a statement like `if len(paddle.shape(x)[0]) > 0`, `paddle` would be treated as a variable
* handle other calls such as `fluid.layers.mean` and `fluid.layers.shape`
* add unit test
- Committed by Skr.B
* add hinge_embedding_loss
* fix test_API
* test_API succeeds
* add English doc
* fix use of deprecated fluid api
* fix doc
* fix doc and rm python/paddle/fluid/layers/loss.py
* get raw python/paddle/fluid/layers/loss.py back
* fix Examples bug in English doc
* unique -> flatten
* fix api code
* fix English doc
* fix functional loss English doc
* fix Example doc
* .numpy() -> paddle.unique()
* fix unique
* fix label_item_set
* modified judgment equation
* got a beautiful loss equation
* use paddle.to_tensor
* fix loss and add static check
* fix loss and add static check
* delta -> margin
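Assuming the commonly used hinge embedding loss definition (labels y in {1, -1}, with a margin defaulting to 1.0, matching the "delta -> margin" rename above), the loss equation can be sketched in NumPy as:

```python
import numpy as np

def hinge_embedding_loss(x, y, margin=1.0):
    # Assumed definition (labels in {1, -1}):
    #   l_n = x_n                  if y_n ==  1
    #   l_n = max(0, margin - x_n) if y_n == -1
    # reduced here by taking the mean over all elements.
    losses = np.where(y == 1, x, np.maximum(0.0, margin - x))
    return losses.mean()

x = np.array([0.5, 2.0])
y = np.array([1, -1])
loss = hinge_embedding_loss(x, y)  # mean(0.5, max(0, 1 - 2)) = 0.25
print(loss)
```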
- Committed by baoachun
* update mkldnn conv_concat_relu_mkldnn_fuse_pass ut
* update conv_concat_relu_mkldnn_fuse_pass ut
* restrict conv2d data_format in conv_concat_relu_mkldnn_fuse_pass
- Committed by feng_shuai
- Committed by zhouweiwei2014
* add new API: paddle.movedim/moveaxis
* add new API: paddle.movedim/moveaxis
* add new API: paddle.movedim/moveaxis
* fix comment
* fix comment
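NumPy's np.moveaxis has the move-axes-to-new-positions semantics that paddle.moveaxis/movedim appear to mirror; a quick sketch of those semantics (NumPy, not Paddle):

```python
import numpy as np

# moveaxis: the source axis is moved to the destination position;
# the remaining axes keep their relative order.
x = np.zeros((3, 4, 5))
y = np.moveaxis(x, source=0, destination=-1)
print(y.shape)  # (4, 5, 3)
```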
- Committed by Chen Weihang
- 14 Dec 2021, 16 commits
- Committed by Aurelius84
* Enhance error msg in paddle.assign in static mode
* fix unittest
- Committed by caozhou
* update Planner
* update unit test
* update PlanSpace
* update PlanSpace
* modify set_grad_var_shape
* update code style
- Committed by baoachun
* add conv_gelu_mkldnn_fuse_pass
* add post ops
- Committed by jianghaicheng
- Committed by jianghaicheng
- Committed by jianghaicheng
- Committed by jianghaicheng
- Committed by jianghaicheng
- Committed by jianghaicheng
- Committed by jianghaicheng
- Committed by jianghaicheng
- Committed by chentianyu03
* layer.to api supports numpy.dtype and paddle.dtype
* skip the layer.to example code execution for envs that do not support it
- Committed by feng_shuai
* test_mkldnn_depthwise_conv_pass
* test: add TimeOut
* set TIMEOUT
* fix: add random num for dilation and group
- Committed by heliqi
* add layer_norm_fuse_pass test case
* restore cmakelist code
* Merge branch 'develop' into layer_norm_fuse_pass
* Merge branch 'develop' into layer_norm_fuse_pass
* add bad case test
- Committed by sneaxiy
* add white list for dist passes
* update comment
* follow zhiqiu's comment
* fix PassContext attrs type
- Committed by Sylwester Fraczek
* reshape+transpose+matmul_v2
* in_name -> input_name
* fix pr-ci-static-check