1. 06 September 2019, 1 commit
    • add cudnn conv fp32, int8 support (#1974) · f3124b30
      Committed by Zhaolong Xing
      * paddle lite cuda init
      can run model with leaky_relu
      
      * add the missing file.
      test=develop
      
      * add the load from memory interface.
      test=develop
      
      * refine this pr. fix comments
      fix ci error
      test=develop
      
      * conv impl
      fp32:
      conv, conv+bias, conv+bias+relu, conv+bias+leaky_relu
      
      int8:
      conv, conv+bias+relu (int8 or fp32 output), conv+bias+leaky_relu (int8 or fp32 output)
      
      can run conv+bias+relu using cxx_api
      test=develop
      
      * move the lite/cuda/math to backends/cuda/math
      test=develop
      f3124b30
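      As a rough illustration only (not the kernel's actual cuDNN-based implementation), the fused variants listed in this commit amount to a per-element epilogue applied to the convolution output: a per-channel bias add, optionally followed by relu or leaky_relu. A minimal NumPy sketch, where the function name and the `alpha` slope default are illustrative assumptions:

      ```python
      import numpy as np

      def conv_epilogue(conv_out, bias, act=None, alpha=0.1):
          """Fused conv epilogue sketch: bias add, then optional activation.

          conv_out: (N, C, H, W) convolution result
          bias:     (C,) per-output-channel bias
          act:      None, "relu", or "leaky_relu"
          alpha:    negative slope for leaky_relu (illustrative default)
          """
          # Broadcast the per-channel bias over batch and spatial dims.
          out = conv_out + bias.reshape(1, -1, 1, 1)
          if act == "relu":
              out = np.maximum(out, 0.0)
          elif act == "leaky_relu":
              out = np.where(out > 0, out, alpha * out)
          return out
      ```

      Fusing the epilogue into the conv kernel avoids writing the intermediate conv+bias result back to global memory, which is the usual motivation for conv+bias+activation fusion on GPUs.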
  2. 29 August 2019, 1 commit
    • Add yolo_box_cuda and multiclass_nms_host kernels. (#1908) · de43e479
      Committed by Wilber
      * add yolo_box_compute cuda
      
      * move multiclass_nms(arm) to host
      
      * add lod in scale op
      
      * add yolo_box_cuda cmake config
      
      * modify shuffle_channel_fuse and transpose_softmax_transpose_fuse to support running the ssd model. test=develop
      
      * reshape and transpose ops don't have an xshape output.
      
      * modify yolo_box_compute_cuda to use a tensor to manage cuda memory. test=develop
      
      * add yolo_box use kernel test=develop
      de43e479
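      For context, a yolo_box-style kernel decodes raw YOLO network outputs into boxes in image coordinates. A minimal NumPy sketch of the core decoding for one cell/anchor, assuming the standard YOLOv3 formulation; the function name and argument layout are illustrative, not the CUDA kernel's actual signature:

      ```python
      import numpy as np

      def decode_yolo_box(tx, ty, tw, th, col, row, grid_w, grid_h,
                          anchor_w, anchor_h, img_w, img_h):
          """Decode one raw YOLO prediction into a center-format box.

          (tx, ty, tw, th): raw network outputs for this cell/anchor
          (col, row):       grid cell indices
          (grid_w, grid_h): feature-map grid dimensions
          (anchor_w/h):     anchor box size in pixels
          (img_w, img_h):   input image size in pixels
          """
          sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
          # Box center: sigmoid offset within the cell, rescaled to image size.
          cx = (col + sigmoid(tx)) / grid_w * img_w
          cy = (row + sigmoid(ty)) / grid_h * img_h
          # Box size: anchor dimensions scaled by exp of the raw prediction.
          bw = anchor_w * np.exp(tw)
          bh = anchor_h * np.exp(th)
          return cx, cy, bw, bh
      ```

      The CUDA kernel applies this decoding to every cell and anchor in parallel; the resulting boxes are then filtered by the host-side multiclass_nms kernel added in the same PR.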