1. 30 Sep 2019, 6 commits
  2. 29 Sep 2019, 3 commits
  3. 28 Sep 2019, 4 commits
    • improve op uniform_random: argument shape supports Tensor and Tensor in list (#19786) · f1eebf75
      Committed by silingtong123
      * test=develop, argument shape supports Tensor and Tensor in list
      
      * test=develop, increase the coverage of CI tests
      
      * test=develop, modify the document and update API.spec
      
      * test=develop, modify the doc and update API.spec
      
      * test=develop, modify the doc and update API.spec
      
      * test=develop, modify the interface of UniformInitializer
      
      * test=develop, modify the interface of XavierInitializer and MSRAInitializer
      
      * test=develop, modify based on reviewers' comments
      
      * test=develop, modify based on reviewers' comments
      
      * test=develop, modify based on reviewers' comments
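      A minimal sketch of what the extended shape argument allows, assuming the fluid.layers.uniform_random and fluid.layers.fill_constant APIs of this release; the variable names are illustrative only:
      
      import paddle.fluid as fluid
      
      # shape passed as a Python list that mixes ints with a 1-D int64 Tensor
      dim_1 = fluid.layers.fill_constant(shape=[1], dtype="int64", value=3)
      out_1 = fluid.layers.uniform_random(shape=[dim_1, 5], dtype="float32")
      
      # shape passed entirely as a 1-D int64 Tensor
      shape_tensor = fluid.layers.fill_constant(shape=[2], dtype="int64", value=4)
      out_2 = fluid.layers.uniform_random(shape=shape_tensor, dtype="float32")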
    • fix pool2d, pool3d: support asymmetric padding and channel_last (#19739) · 24010472
      Committed by liym27
      * fix pool2d and pool3d:
      1. support asymmetric padding;
      2. support padding algorithms "SAME" and "VALID";
      3. support channel_last: data_format NHWC and NDHWC;
      4. support inferring shape when the input has negative dims at compile time;
      5. update the docs of the Python API and C++;
      6. fix a bug in the CUDA kernel when Attr(adaptive) is true.
      
      test=develop,test=document_preview
      
      * fix 'tensors' to 'Tensors'. test=develop,test=document_preview
      
      * add test to cover ValueError. test=develop,test=document_preview
      
      * resolve conflict in test_pool2d. test=develop
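      A hedged sketch of the new padding and layout options, assuming the fluid.layers.pool2d signature documented by this PR (pool_padding as a 4-element list or a "SAME"/"VALID" string, plus a data_format argument); tensor names and shapes are illustrative:
      
      import paddle.fluid as fluid
      
      # channel_last input: batch, H, W, C
      x = fluid.layers.data(name="x", shape=[32, 32, 3], dtype="float32")
      
      # asymmetric padding, assumed order [pad_top, pad_bottom, pad_left, pad_right]
      y1 = fluid.layers.pool2d(
          input=x, pool_size=3, pool_type="max", pool_stride=1,
          pool_padding=[1, 0, 1, 2], data_format="NHWC")
      
      # padding algorithm given as a string
      y2 = fluid.layers.pool2d(
          input=x, pool_size=3, pool_type="avg", pool_stride=2,
          pool_padding="SAME", data_format="NHWC")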
    • Minor GetMKLDNNFormat changes (#20055) · fe581b0e
      Committed by Adam
      test=develop
    • fix conv_grad_grad (#20054) · c92348c3
      Committed by lvmengsi
  4. 27 Sep 2019, 6 commits
  5. 26 Sep 2019, 11 commits
  6. 25 Sep 2019, 6 commits
    • add kernel for fill_op, test=develop (#19719) · b1bb2384
      Committed by zhongpu
      * add kernel for fill_op, test=develop
      
      * modify PADDLE_ENFORCE to PADDLE_ENFORCE_EQ, test=develop
      
      * add op test for fill_op, test=develop
      
      * REGISTER OP CUDA KERNEL, test=develop
      
      * update test_fill_op.py, test=develop
      
      * change FillConstantOpVarTypeInference to FillOpVarTypeInference, test=develop
      
      * fix op test, test=develop
      
      * add head file, test=develop
    • add support for Tensor and TensorList in strided_slice OP (#19929) · 382d099d
      Committed by wangchaochaohu
      * add support for Tensor and TensorList in strided_slice OP test=develop
      
      * fix the comment test=develop
      
      * fix test=develop
      
      * fix the bug test=develop
      
      * delete log test=develop
      
      * fix API.spec test=develop
      
      * fix test=develop
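      A rough sketch of what Tensor support looks like from the Python side, assuming the fluid.layers.strided_slice API of this release; the shapes and names are made up for illustration:
      
      import paddle.fluid as fluid
      
      x = fluid.layers.data(
          name="x", shape=[3, 4, 5, 6], dtype="float32", append_batch_size=False)
      
      # ends passed as a list that mixes Python ints with a 1-D int32 Tensor
      end_2 = fluid.layers.fill_constant(shape=[1], dtype="int32", value=3)
      out = fluid.layers.strided_slice(
          x, axes=[0, 1, 2], starts=[0, 0, 0], ends=[3, end_2, 4], strides=[1, 1, 1])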
    • Fix OpTest of bn (#19062) · 619a241b
      Committed by lvmengsi
      * fix bn
    • add support for matmul with multiple heads even with different width and height (#19708) · c670058a
      Committed by Bob Zhu
      * add support for matmul with multiple heads even with different width and height
      
      The original matmul with multiple heads supports only mat_a.width == mat_b.height;
      in that case, mat_b is split horizontally. This patch extends the support to
      mat_a.width != mat_b.height as long as mat_a.width / head_number == mat_b.height;
      in that case, mat_b is split vertically.
      
      One example: A is [3, 8], B is [2, 16], head_number is 4. In this
      case, A is split into 4 blocks of [3, 2] and B is (vertically) split into 4 blocks
      of [2, 4]. The final result is the 4 resulting [3, 4] matrices concatenated, i.e. [3, 16].
      
      test=develop
      
      * refactor the code of matmul with multiple heads even with different width and height
      
      test=develop
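      The splitting described above can be reproduced with plain NumPy; this is only a sketch of the arithmetic, not of the fused kernel itself:
      
      import numpy as np
      
      head_number = 4
      A = np.random.rand(3, 8)    # mat_a: width 8
      B = np.random.rand(2, 16)   # mat_b: height 2 == mat_a.width / head_number
      
      # split A along its width and B (vertically) along its width into head_number blocks
      A_heads = np.split(A, head_number, axis=1)   # 4 blocks of shape [3, 2]
      B_heads = np.split(B, head_number, axis=1)   # 4 blocks of shape [2, 4]
      
      # multiply per head and concatenate the [3, 4] results into a [3, 16] output
      out = np.concatenate([a.dot(b) for a, b in zip(A_heads, B_heads)], axis=1)
      assert out.shape == (3, 16)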
    • refine ctc align op with padding (#19926) · 6884dc80
      Committed by Liufang Sang
      * refine ctc align op with padding 
      * refine api sample code
    • Removing length dims constraints of seq_pad and seq_unpad (#19497) · 99a9615a
      Committed by Aurelius84
      * Removing last dims constraints of seq_pad and seq_unpad test=develop
      
      * fix test_layer api code test=develop
      
      * fix sequence_pad_op.cc conflict test=develop
      
      * remove test_analyzer_mm_dnn test=develop
      
      * fix vectorize bug test=develop
      
      * fix vectorize<int> test=develop
  7. 24 Sep 2019, 4 commits
    • add optimizer: dpsgd, test=develop (#19915) · 766bd529
      Committed by jhjiangcs
    • Add float16 support to `sync_batch_norm_op` (#19681) · ebff68fa
      Committed by Yang Zhang
      * Add float16 support to `sync_batch_norm_op`
      
      test=develop
      
      * Add test for sync_bn with FP16 input
      
      test=develop
    • Remove constraint that last dimension is forced to be 1 by adding lookup_table_v2 (#19735) · 039b9710
      Committed by Aurelius84
      * Remove constraint that last dimension is forced to be 1 by adding
      lookup_table_v2 test=develop
      
      * modify into PADDLE_ENFORCE_CUDA_SUCCESS test=develop
      
      * Revert "modify into PADDLE_ENFORCE_CUDA_SUCCESS test=develop"
      
      This reverts commit 8a960bfc61e51aa27c3c529df8fb90b93ebd19f9.
      
      * move api into fluid.embedding test=develop
      
      * fix example code test=develop
      
      * move one_hot into fluid.one_hot
      
      * modify api.spec test=develop
      
      * fix loss shape test=develop
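      A small sketch of the relaxed input shape, assuming the fluid.embedding API this PR introduces; unlike the older fluid.layers.embedding, the ids input needs no trailing dimension of 1 (names and sizes are illustrative):
      
      import paddle.fluid as fluid
      
      # int64 ids of shape [batch, 16]; no trailing dimension of 1 is required
      ids = fluid.layers.data(name="ids", shape=[16], dtype="int64")
      # lookup into a [vocab_size, emb_dim] table; output shape is [batch, 16, 128]
      emb = fluid.embedding(input=ids, size=(10000, 128), dtype="float32")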
    • support changing shuffle and train thread num (#19841) · cedc0477
      Committed by xujiaqi01
      * support changing shuffle thread num
      * support changing train thread num
      * fix receiving shuffle data of each channel
      * data norm stops gradient
      * add check of thread_tensor type and root_tensor type when merging metrics
      * remove sleep in shuffle, add config
      * add config for pslib client-to-client communication
      * fix xbox str
      * add data norm op testcase
      * add flush in trainer finalize