1. 20 December 2021, 3 commits
  2. 19 December 2021, 1 commit
  3. 18 December 2021, 2 commits
  4. 17 December 2021, 9 commits
  5. 16 December 2021, 9 commits
    • modify according to zhouwei's comment (#38166) · a37be82f
      Committed by Sing_chan
    • Add arc hyperbolic function op (#37076) · 36b7368d
      Committed by xiaoting

      * add activation

      * update activation_op

      * add unit test for activation

      * fix acosh for init, test=develop

      See the usage sketch below.
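      The commit above adds inverse hyperbolic function ops. A minimal usage sketch, assuming the new ops surface as paddle.asinh, paddle.acosh, and paddle.atanh (names inferred from the PR title and the acosh fix note):

          import paddle

          x = paddle.to_tensor([1.0, 2.0, 3.0])
          print(paddle.acosh(x))  # inverse hyperbolic cosine, defined for x >= 1
          print(paddle.asinh(x))  # inverse hyperbolic sine, defined on all reals
          # inverse hyperbolic tangent, defined for |x| < 1
          print(paddle.atanh(paddle.to_tensor([-0.5, 0.0, 0.5])))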
    • Conv transpose eltwiseadd bn fuse pass (#37800) · e64f0997
      Committed by feng_shuai

      * conv_transpose_eltwiseadd_bn_fuse_pass

      * change timeout

      * add TIMEOUT

      * add random num for group and dilation

      * change PassCompat

      See the sketch after this entry.
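      By its name, this pass fuses a conv2d_transpose -> elementwise_add -> batch_norm chain into a single operator at inference time. A hedged sketch of how such a pass is controlled from the Paddle Inference Python API (the model file paths are placeholders; deleting the pass is shown only to illustrate the knob):

          import paddle.inference as paddle_infer

          # Placeholder model files; replace with a real exported model.
          config = paddle_infer.Config("model.pdmodel", "model.pdiparams")
          # Fuse passes in the default pass list run automatically when the
          # predictor is built; a pass can be removed by name if needed.
          config.delete_pass("conv_transpose_eltwiseadd_bn_fuse_pass")
          predictor = paddle_infer.create_predictor(config)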
    • Add tests for PaddleInference Pass (#37676) · 96597a85
      Committed by yeliang2258

      * add test for conv_elementwise_add2_act_fuse_pass and conv_elementwise_add_act_fuse_pass

      * Add conv_eltwiseadd_bn_fuse_pass test and fix test_conv_elementwise_addX_act_fuse_pass

      * add tests for conv_act_mkldnn_fuse_pass

      * add test for conv_bias_mkldnn_fuse_pass

      * update code

      * add conv_act_mkldnn_fuse_pass for relu, relu6, swish, leaky_relu

      * update test

      * update

      * fix bug

      * update

      * update pattern_detector

      * fix test_conv_eltwiseadd_bn_fuse_pass

      * add diff display notest;test=windows_ci_inference

      * fix

      * remove test_conv_act_mkldnn_fuse_pass.py

      * fix

      See the sketch after this entry.
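      The MKLDNN (oneDNN) fuse passes exercised by these tests only run on a CPU config with MKLDNN enabled. A hedged sketch (model paths are placeholders):

          import paddle.inference as paddle_infer

          config = paddle_infer.Config("model.pdmodel", "model.pdiparams")
          config.disable_gpu()
          # Enables the oneDNN execution path, whose pass list includes fuse
          # passes such as conv_act_mkldnn_fuse_pass and
          # conv_bias_mkldnn_fuse_pass.
          config.enable_mkldnn()
          predictor = paddle_infer.create_predictor(config)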
    • support eager switch system (#38170) · 8305c2be
      Committed by Jiabin Yang

      * support eager switch system

      * polish code
    • Add fmax and fmin operators (#37826) · dd3afc9d
      Committed by LJQ❤️

      Add elementwise_fmax and elementwise_fmin operators.

      See the usage sketch below.
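      A minimal sketch, assuming the new kernels surface as paddle.fmax and paddle.fmin and follow the usual C99/NumPy fmax semantics, in which a NaN operand is ignored in favor of the other value (the NaN behavior is an assumption, not stated in the commit):

          import paddle

          x = paddle.to_tensor([1.0, float("nan"), 3.0])
          y = paddle.to_tensor([2.0, 2.0, float("nan")])
          print(paddle.fmax(x, y))  # expected: [2., 2., 3.]
          print(paddle.fmin(x, y))  # expected: [1., 2., 3.]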
    • Add sparse_attention mask, test=develop (#37973) · fa463b90
      Committed by Liu-xiandong

      Add key_padding_mask and attn_mask to the sparse_attention API.

      1. The key padding mask is a tensor with dimensions [batch_size, seq_len], and the attention mask is a tensor with dimensions [seq_len, seq_len]. The data types of the two masks must match those of Q, K, and V, i.e. float32 or float64. A value of 0 in a mask means that the position is masked out.

      2. The changed files are mainly paddle/fluid/operators/sparse_attention_op.cu and python/paddle/fluid/tests/unittests/test_sparse_attention_op.py. sparse_attention has three parts: sddmm, softmax, and dsd. Adding the mask operation only requires modifying the softmax part; the other two parts are unaffected. Related tests have been added to cover the mask behavior.

      See the usage sketch below.
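      A hedged usage sketch based on the shapes described above. The exact signature (paddle.nn.functional.sparse_attention taking CSR offset/columns tensors plus the two new keyword arguments) is an assumption inferred from the commit message, and the op is a GPU kernel:

          import paddle
          import paddle.nn.functional as F

          bs, heads, seq_len, head_dim = 1, 1, 4, 2
          q = paddle.rand([bs, heads, seq_len, head_dim], dtype="float32")
          k = paddle.rand([bs, heads, seq_len, head_dim], dtype="float32")
          v = paddle.rand([bs, heads, seq_len, head_dim], dtype="float32")

          # A dense layout written in CSR form: each of the 4 queries attends
          # to all 4 key positions (16 nonzeros in total).
          offset = paddle.to_tensor([[[0, 4, 8, 12, 16]]], dtype="int32")
          columns = paddle.to_tensor([[[0, 1, 2, 3] * 4]], dtype="int32")

          # 0 masks a position; mask out the last key for every batch item.
          key_padding_mask = paddle.to_tensor([[1.0, 1.0, 1.0, 0.0]])
          attn_mask = paddle.ones([seq_len, seq_len], dtype="float32")

          out = F.sparse_attention(q, k, v, offset, columns,
                                   key_padding_mask=key_padding_mask,
                                   attn_mask=attn_mask)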
    • Add float16 type for scatter op. (#38136) · 9bac4a76
      Committed by Li Min

      * Add float16 type for scatter op.

      * Add fp16 test for scatter op.

      * Add int and int64 support for scatter_grad on gpu.

      * Add int and int64 for check_variable_and_dtype routine.

      * Minor fixes.

      * Code format.

      See the usage sketch below.
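      A minimal sketch of the new float16 path for paddle.scatter; float16 kernels generally require a CUDA device, so a GPU build is assumed:

          import paddle

          paddle.set_device("gpu")  # assumed: the fp16 scatter kernel is GPU-only
          x = paddle.zeros([3, 2], dtype="float16")
          index = paddle.to_tensor([1, 2], dtype="int64")
          updates = paddle.ones([2, 2], dtype="float16")
          out = paddle.scatter(x, index, updates)  # rows 1 and 2 become ones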
  6. 15 December 2021, 9 commits
  7. 14 December 2021, 7 commits