1. 02 Mar, 2020 1 commit
  2. 28 Feb, 2020 1 commit
  3. 07 Jan, 2020 1 commit
  4. 04 Jan, 2020 1 commit
  5. 23 Dec, 2019 1 commit
  6. 11 Dec, 2019 1 commit
  7. 02 Dec, 2019 1 commit
  8. 28 Nov, 2019 1 commit
  9. 27 Nov, 2019 1 commit
  10. 26 Nov, 2019 1 commit
    • Add fc padding to improve mkl GEMM's performance when N and K are multiples of 128. (#20972) · 234060f8
      Committed by GaoWei8
      * Add fc padding to improve mkl GEMM performance
      test=develop
      
      * fix gpu pass and error information
      test=develop
      
      * fix fc_fuse_pass_test
      test=develop
      
      * fix error information
      test=develop
      
      * fix error information
      test=develop
      
      * fix name and add fc op padding test
      test=develop
      
      * fix attributes
      test=develop
      
      * optimize fc padding
      test=develop
      
      * fix test
      test=develop
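      A minimal numpy sketch of the padding idea in the commit above, assuming the padded path is taken when the weight's K and N dimensions are exact multiples of 128; the helper name and the pad width of 4 are illustrative assumptions, not the actual Paddle implementation:

      import numpy as np

      def fc_with_padding(x, w, bias, pad=4):
          """Pad the GEMM operands when K and N are multiples of 128 so the
          MKL GEMM avoids a slow path, then slice the result back."""
          m, k = x.shape
          _, n = w.shape
          if k % 128 == 0 and n % 128 == 0:
              x_pad = np.pad(x, ((0, 0), (0, pad)))          # zero-pad K
              w_pad = np.pad(w, ((0, pad), (0, pad)))        # zero-pad K and N
              out = (x_pad @ w_pad)[:, :n]                   # padded GEMM, then slice
          else:
              out = x @ w
          return out + bias

      # the padded path must produce the same result as the plain GEMM
      x = np.random.rand(16, 256).astype(np.float32)
      w = np.random.rand(256, 128).astype(np.float32)
      b = np.zeros(128, dtype=np.float32)
      assert np.allclose(fc_with_padding(x, w, b), x @ w + b, atol=1e-4)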
  11. 22 Nov, 2019 1 commit
    • add dequantize_abs_max op and modify lookup_table op (#20899) · f0b15184
      Committed by Liufang Sang
      * add int8 kernel to lookup_table op and add dequantize op test=develop
      
      * change paddle_enforce to paddle_enforce_eq test=develop
      
      * change copyright and clean up some unsuitable code test=develop
      
      * remove debug log test=develop
      
      * replace GetInputType with IndicateVarDataType test=develop
      
      * fix EmptyGradMaker test=develop
      
      * fix diff between cpu and gpu test=develop
      
      * use memcpy when the dtype is int8_t test=develop
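      A small numpy sketch of what abs_max (de)quantization computes, as suggested by the op name above; the function names and the 8-bit range are assumptions for illustration, not the op's exact attributes:

      import numpy as np

      def quantize_abs_max(x, num_bits=8):
          """Scale by the max absolute value and map to [-127, 127] for int8."""
          max_range = (1 << (num_bits - 1)) - 1      # 127 for 8 bits
          scale = np.abs(x).max()
          q = np.round(x / scale * max_range).astype(np.int8)
          return q, scale

      def dequantize_abs_max(q, scale, num_bits=8):
          """Inverse mapping: recover float values from the int8 codes."""
          max_range = (1 << (num_bits - 1)) - 1
          return q.astype(np.float32) * scale / max_range

      w = np.random.uniform(-1, 1, size=(8, 16)).astype(np.float32)
      q, scale = quantize_abs_max(w)
      w_restored = dequantize_abs_max(q, scale)
      assert np.abs(w - w_restored).max() <= scale / 127   # bounded rounding error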
  12. 14 Nov, 2019 1 commit
  13. 12 Nov, 2019 1 commit
    • fix the computation for dx (grad for x) for prelu operation. (#20949) · e249d9a3
      Committed by lilong12
      * set the default value of alpha for prelu to 0.25, test=develop
      
      * add the call to __syncthreads(), test=develop
      
      * fix the implementation of cpu prelu, test=develop
      
      * repair the implementation of element mode prelu, test=develop
      
      * modify test_prelu_op.py, test=develop
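      The fix above concerns the backward pass of PReLU (forward: out = x for x > 0, out = alpha * x otherwise, with alpha defaulting to 0.25 per the commit). A hedged numpy sketch of the gradients for a broadcastable alpha; the scalar reduction stands in for the "all"/"channel"/"element" modes:

      import numpy as np

      def prelu_grad(x, alpha, dout):
          """dx = dout where x > 0, dout * alpha otherwise;
          dalpha accumulates dout * x over the elements where x <= 0."""
          dx = np.where(x > 0, dout, dout * alpha)
          dalpha = np.where(x > 0, 0.0, dout * x).sum()   # full sum for a scalar alpha
          return dx, dalpha

      x = np.random.randn(2, 3, 4).astype(np.float32)
      dx, dalpha = prelu_grad(x, 0.25, np.ones_like(x))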
  14. 08 Nov, 2019 1 commit
    • Add dependency for error_codes.proto (#21084) · 2f27b103
      Committed by Chen Weihang
      * fix activation_functions deps, test=develop, test=document_fix
      
      * add error_codes_proto deps, test=develop, test=document_fix
      
      * try delete enforce.h, test=develop, test=document_fix
  15. 05 Nov, 2019 2 commits
  16. 01 Nov, 2019 1 commit
  17. 31 Oct, 2019 2 commits
  18. 30 Oct, 2019 1 commit
  19. 28 Oct, 2019 1 commit
  20. 23 Oct, 2019 1 commit
  21. 16 Oct, 2019 1 commit
  22. 13 Oct, 2019 1 commit
  23. 09 Oct, 2019 1 commit
  24. 07 Oct, 2019 1 commit
  25. 30 Sep, 2019 1 commit
  26. 29 Sep, 2019 1 commit
    • fix conv2d and conv3d: (#20042) · 3aa331d9
      Committed by liym27
      1. support asymmetric padding;
      2. support padding algorithm: "SAME" and "VALID";
      3. support channel_last: data_format NHWC and NDHWC;
      4. change doc of python API and c++;

      test=develop, test=document_preview
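      A short sketch of how the "SAME" and "VALID" padding algorithms resolve to concrete, possibly asymmetric, pad amounts for one spatial dimension; this is a simplified reading of the commit, not the Paddle source:

      import math

      def conv_padding(in_size, kernel, stride=1, dilation=1, algorithm="SAME"):
          dk = dilation * (kernel - 1) + 1                    # effective kernel size
          if algorithm == "SAME":
              out = math.ceil(in_size / stride)
              pad_total = max((out - 1) * stride + dk - in_size, 0)
              pad_before = pad_total // 2
              pad_after = pad_total - pad_before              # asymmetric when pad_total is odd
          else:                                               # "VALID": no padding
              pad_before = pad_after = 0
              out = (in_size - dk) // stride + 1
          return out, pad_before, pad_after

      print(conv_padding(6, 3, stride=2, algorithm="SAME"))   # (3, 0, 1) -- asymmetric
      print(conv_padding(7, 3, stride=2, algorithm="VALID"))  # (3, 0, 0)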
  27. 28 Sep, 2019 1 commit
    • fix pool2d pool3d, support asymmetric padding and channel_last (#19739) · 24010472
      Committed by liym27
      * fix pool2d pool3d:
      1. support asymmetric padding;
      2. support padding algorithm: "SAME" and "VALID";
      3. support channel_last: data_format NHWC and NDHWC;
      4. support inferring shape when input with negative dims in compile time;
      5. change doc of python API and c++;
      6. fix bug in cuda kernel when Attr(adaptive) is true.
      
      test=develop,test=document_preview
      
      * fix 'tensors' to 'Tensors'. test=develop,test=document_preview
      
      * add test for coverage of ValueError. test=develop,test=document_preview
      
      * resolve conflict in test_pool2d. test=develop
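      A companion sketch for pool2d shape inference with explicit asymmetric paddings and channel_last (NHWC) support; the padding list layout [pad_h_before, pad_h_after, pad_w_before, pad_w_after] is an assumption for illustration, and the negative-dim (compile-time unknown) case from the commit is not modelled:

      def pool2d_output_shape(input_shape, ksize, strides, paddings, data_format="NCHW"):
          n, h, w, c = (input_shape if data_format == "NHWC"
                        else (input_shape[0], input_shape[2], input_shape[3], input_shape[1]))
          out_h = (h + paddings[0] + paddings[1] - ksize[0]) // strides[0] + 1
          out_w = (w + paddings[2] + paddings[3] - ksize[1]) // strides[1] + 1
          return (n, out_h, out_w, c) if data_format == "NHWC" else (n, c, out_h, out_w)

      # the two layouts describe the same pooling, only the dim order differs
      print(pool2d_output_shape((8, 224, 224, 3), (3, 3), (2, 2), (0, 1, 0, 1), "NHWC"))  # (8, 112, 112, 3)
      print(pool2d_output_shape((8, 3, 224, 224), (3, 3), (2, 2), (0, 1, 0, 1), "NCHW"))  # (8, 3, 112, 112)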
  28. 27 Sep, 2019 1 commit
  29. 25 Sep, 2019 1 commit
    • add support of matmul with multiple head even different width and height (#19708) · c670058a
      Committed by Bob Zhu
      * add support of matmul with multiple head even different width and height

      The original matmul with multiple head supports only mat_a.width == mat_b.height;
      in that case, mat_b is split horizontally. This patch extends the support to
      mat_a.width != mat_b.height as long as mat_a.width / head_number == mat_b.height,
      in which case mat_b is split vertically.

      For example, if A is [3, 8], B is [2, 16] and head_number is 4, A will be
      split into 4 blocks of [3, 2] and B will be (vertically) split into 4 blocks
      of [2, 4]. The final result is the concatenation of 4 matrices of [3, 4], i.e. [3, 16].

      test=develop
      
      * refactor the code of matmul with multiple head even different width and height
      
      test=develop
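      A numpy sketch of the extended multi-head matmul using the exact example from the commit message (A = [3, 8], B = [2, 16], head_number = 4); the helper is illustrative, not the Paddle kernel:

      import numpy as np

      def multihead_matmul(a, b, head_number):
          """When a.shape[1] != b.shape[0] but a.shape[1] / head_number == b.shape[0],
          split A and B column-wise into head_number blocks, multiply block by block
          and concatenate the results."""
          assert a.shape[1] // head_number == b.shape[0]
          a_blocks = np.split(a, head_number, axis=1)   # [3, 8]  -> 4 x [3, 2]
          b_blocks = np.split(b, head_number, axis=1)   # [2, 16] -> 4 x [2, 4]
          outs = [a_i @ b_i for a_i, b_i in zip(a_blocks, b_blocks)]
          return np.concatenate(outs, axis=1)           # 4 x [3, 4] -> [3, 16]

      a = np.random.rand(3, 8)
      b = np.random.rand(2, 16)
      print(multihead_matmul(a, b, head_number=4).shape)   # (3, 16)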
  30. 23 Sep, 2019 1 commit
  31. 20 Sep, 2019 1 commit
  32. 16 Sep, 2019 1 commit
  33. 11 Sep, 2019 2 commits
    • Replace TemporaryAllocator by CUDADeviceContextAllocator (#18989) · 12542320
      Committed by Huihuang Zheng
      TemporaryAllocator is a singleton used for allocating memory for cuDNN. Since it is a singleton, removing it gives better memory performance.

      We replace TemporaryAllocator with CUDADeviceContextAllocator and CUDADeviceContextAllocation, which use a stream callback to free the memory allocated for a stream, avoiding the singleton.

      Also added data_feed_proto as a dependency of operator to fix CI in CPU compilation.
    • Implement the GPU kernel of fc operator (#19687) · a65c728e
      Committed by Yiqun Liu
      * Refine the codes related to fc op.
      
      * Add GPU implementation for fc functor.
      
      * Apply fc_fuse_pass in GPU inference.
      test=develop
      
      * Change the cmake for fc op.
      
      * Change PADDLE_ENFORCE to PADDLE_ENFORCE_EQ.
      
      * Add an attribute to set the activation type in fc_op.
      
      * Enhance the unittest of fc_op.
      test=develop
      
      * Remove the declaration of FCOpGrad back to the header file.
      test=develop
      
      * Set default value for newly added arguments in test_fc_op.
      test=develop
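      For reference, what the fc op computes: a GEMM plus bias, optionally followed by an activation selected via the newly added attribute. A minimal numpy sketch in which only "relu" is modelled as an assumed activation value:

      import numpy as np

      def fc(x, w, bias=None, activation=None):
          out = x @ w                       # the matmul part of fc
          if bias is not None:
              out = out + bias              # bias broadcast over the batch dimension
          if activation == "relu":
              out = np.maximum(out, 0.0)
          return out

      x = np.random.randn(4, 32).astype(np.float32)
      w = np.random.randn(32, 64).astype(np.float32)
      b = np.random.randn(64).astype(np.float32)
      print(fc(x, w, b, activation="relu").shape)   # (4, 64)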
  34. 05 Sep, 2019 3 commits
  35. 04 Sep, 2019 1 commit