1. 26 Sep 2019, 20 commits
  2. 25 Sep 2019, 17 commits
    • fix memory leak in HogwildWorker (#19956) · f50e701b
      xujiaqi01 committed
      fix memory leak in HogwildWorker, whose ops are explicitly deleted in the destructor
    • fix buddy_allocator_test, test=develop (#19967) · b8aff5e5
      Zeng Jinle committed
    • Add AdadeltaOptimizer doc (#19875) · 4a5ce4fe
      Zeng Jinle committed
      * add AdadeltaOptimizer doc, test=develop
      
      * refine doc, test=develop
      
      * follow lanxiang's comments, test=develop, test=document_fix
    • Expose set_gradient_clip API (#19869) · 7912e6ca
      Zeng Jinle committed
      (a usage sketch follows this entry)
      * expose set_gradient_clip, test=develop, test=document_preview, test=preview
      
      * expose gradient clip, test=develop, test=document_fix
      
      * refine doc, test=develop
      
      * follow lanxiang's comments, test=develop, test=document_fix
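      Below is a minimal usage sketch of the exposed API, assuming the Paddle 1.x fluid
      interface (fluid.clip.set_gradient_clip together with GradientClipByGlobalNorm); it is
      an illustration only, not code taken from the PR:

          import paddle.fluid as fluid

          main_prog, startup_prog = fluid.Program(), fluid.Program()
          with fluid.program_guard(main_prog, startup_prog):
              x = fluid.layers.data(name='x', shape=[13], dtype='float32')
              y = fluid.layers.data(name='y', shape=[1], dtype='float32')
              pred = fluid.layers.fc(input=x, size=1)
              loss = fluid.layers.mean(
                  fluid.layers.square_error_cost(input=pred, label=y))
              # Clip all gradients by global norm before the optimizer update.
              fluid.clip.set_gradient_clip(
                  fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0))
              fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)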
    • refine deformable roi pooling doc (#19944) · 0099e549
      chengjuntao committed
      * refine doc, test=develop, test=document_preview
    • add kernel for fill_op, test=develop (#19719) · b1bb2384
      zhongpu committed
      * add kernel for fill_op, test=develop
      
      * modify PADDLE_ENFORCE to PADDLE_ENFORCE_EQ, test=develop
      
      * add op test for fill_op, test=develop
      
      * REGISTER OP CUDA KERNEL, test=develop
      
      * update test_fill_op.py, test=develop
      
      * change FillConstantOpVarTypeInference to FillOpVarTypeInference, test=develop
      
      * fix op test, test=develop
      
      * add header file, test=develop
    • add support for Tensor and TensorList in strided_slice OP (#19929) · 382d099d
      wangchaochaohu committed
      * add support tensor and tensorlist for strided_slice OP test=develop
      
      * fix the comment test=develop
      
      * fix test=develop
      
      * fix the bug test=develop
      
      * delete log test=develop
      
      * fix API.spec test=develop
      
      * fix test=develop
    • Fix ssdloss num and batch norm format and conv2d (#19754) · fe218df3
      lvmengsi committed
      * update API.spec
    • Fix OpTest of bn (#19062) · 619a241b
      lvmengsi committed
      * fix bn
    • Avoid treating broadcast as initialization operation (#19857) · 5920d69d
      ShenLiang committed
      * treat broadcast as non-initial, test=develop
      
      * rename the class name
      
      * rename the class name, test=develop
    • add support for matmul with multiple heads even with different width and height (#19708) · c670058a
      Bob Zhu committed
      * add support for matmul with multiple heads even with different width and height

      The original multi-head matmul only supports the case mat_a.width == mat_b.height,
      in which mat_b is split horizontally. This patch extends the support to the case where
      mat_a.width != mat_b.height but mat_a.width / head_number == mat_b.height; in this
      case, mat_b is split vertically.

      For example, if A is [3, 8], B is [2, 16], and head_number is 4, then A is split into
      four [3, 2] blocks and B is (vertically) split into four [2, 4] blocks. The final result
      is four [3, 4] matrices concatenated into [3, 16] (see the NumPy sketch after this entry).

      test=develop
      
      * refactor the code of matmul with multiple heads with different width and height
      
      test=develop
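      To make the head-splitting arithmetic above concrete, here is a short NumPy sketch of
      the behaviour described in the commit message (it only illustrates the math; it is not
      the Paddle kernel):

          import numpy as np

          head_number = 4
          A = np.random.rand(3, 8)   # mat_a: [3, 8]
          B = np.random.rand(2, 16)  # mat_b: [2, 16]; note A.shape[1] != B.shape[0]

          # A is split along its columns into head_number blocks of shape [3, 2];
          # B is split along its columns ("vertically") into blocks of shape [2, 4].
          a_heads = np.split(A, head_number, axis=1)
          b_heads = np.split(B, head_number, axis=1)

          # Each head multiplies [3, 2] by [2, 4], giving [3, 4]; concatenating the
          # four per-head results yields the final [3, 16] output.
          out = np.concatenate([a @ b for a, b in zip(a_heads, b_heads)], axis=1)
          assert out.shape == (3, 16)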
    • refine ctc align op with padding (#19926) · 6884dc80
      Liufang Sang committed
      * refine ctc align op with padding 
      * refine api sample code
    • add input type and dtype check for softmax_op (#19975) · 65a02fc1
      Tao Luo committed
      * add input type and dtype check for softmax_op
      
      test=develop
      
      * refine error message
      
      test=develop
    • Fix C++ inference BUG: when memory optim and the TRT subgraph are enabled at the same time, there is a bug (#19969) · e89b1288
      Zhaolong Xing committed
      
      * fix memory optimization type
      test=develop
      
      * 1. fix BUG: enabling TRT and memory optim at the same time triggers a bug.
      2. Clean up the memory optim bug.
      test=develop
    • Add support for new QAT models (#18970) · 4286a627
      Wojciech Uss committed
      * Add support for new QAT models
      
      test=develop
      Co-Authored-By: Michał Gallus <michal.gallus@intel.com>
      Co-Authored-By: Wojciech Uss <wojciech.uss@intel.com>
      
      * fixed fps results
      
      test=develop
      
      * fix top5 accuracy drop problem
      
      * updated for new QAT models
      
      * skip quantizing average pooling - dirty but working
      
      * add missing pass
      
      * added missing conv+brelu fuse pass
      
      * removed a call to non-existent pass
      
      test=develop
      
      * renamed pass
      
      test=develop
      
      * Adjust finding pooling scale to newest QAT models
      
      * Remove unnecessary code from quantization_mkldnn_pass
      
      * Copy Pooling input scale to output scale in QAT
      
      * Refactor & remove unused code in QAT
      
      * Incorporate fp32 FC into QAT
      
      test=develop
      
      * Enable graph drawing with debug flag
      
      test=develop
      
      * Add tests for QATv2
      
      * Fix paths for QATv2 models
      
      test=develop
      
      * Add option to save transformed int8 qat model
      
      test=develop
      
      * Remove redundant lines from qat mkldnn pass
      
      test=develop
      
      * Delegate disablement of avg pooling to qat
      
      test=develop
      
      * fix CI bug, test=develop
      
      * Follow Wangzhen's Review, test=develop
      
      * Update API.spec
      
      test=develop
      
      * Name False in (is_unsigned, TensorScale) tuple
      
      test=develop
    • Removing length dims constraints of seq_pad and seq_unpad (#19497) · 99a9615a
      Aurelius84 committed
      * Removing last dims constraints of seq_pad and seq_unpad test=develop
      
      * fix test_layer api code test=develop
      
      * fix sequence_pad_op.cc conflict test=develop
      
      * remove test_analyzer_mm_dnn test=develop
      
      * fix vectorize bug test=develop
      
      * fix vectorize<int> test=develop
    • polish multi process warning info (#19961) · cca26f5c
      chengduo committed
      test=develop
  3. 24 Sep 2019, 3 commits