1. 05 Nov 2019 (1 commit)
  2. 28 Oct 2019 (1 commit)
    • [LITE][XPU] initial support for XPU (#2202) · 06d058fe
      hong19860320 committed
      * Initial support for XPU
      * Fix compiling errors of XPU
      * Move XPU op kernel bridges from backends to kernels to fix deps order
      * Change the namespace and directory of XPU bridges
      * Add XPU SDK
      * Fix header files and namespace of XPU SDK
      * Add unit tests for relu and conv2d ops
      * Restore the modification of paddle_api_test
      * Support a simple model that contains only a relu layer (see the illustrative sketch after this entry)
      * Add compiling scripts for XPU
      * Fix compiling errors of XPU
      * Add comments for XPU LoadModel and BuildModel
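The bullet above notes that the initial XPU target only handles a simple model consisting of a single relu layer. As an illustration of what exercising such a model looks like, here is a minimal sketch built on the public Paddle-Lite C++ API (CxxConfig, CreatePaddlePredictor). The header name, the TARGET(kXPU)/TARGET(kHost) places, and the model path are assumptions for illustration, not code from this commit.

```cpp
// Illustrative only: running a hypothetical one-layer relu model through the
// Paddle-Lite C++ API with the XPU target preferred. Header, places, and the
// model directory are assumptions, not taken from this commit.
#include <vector>
#include "paddle_api.h"  // assumed public header of the Paddle-Lite C++ API

int main() {
  namespace lite_api = paddle::lite_api;

  lite_api::CxxConfig config;
  config.set_model_dir("./relu_model");  // hypothetical model directory
  // Prefer XPU kernels; fall back to host kernels when one is unavailable.
  config.set_valid_places(
      {lite_api::Place{TARGET(kXPU), PRECISION(kFloat)},
       lite_api::Place{TARGET(kHost), PRECISION(kFloat)}});

  auto predictor = lite_api::CreatePaddlePredictor<lite_api::CxxConfig>(config);

  // Feed a small mixed-sign tensor, run, and read the relu output.
  auto input = predictor->GetInput(0);
  input->Resize({1, 8});
  auto* in_data = input->mutable_data<float>();
  for (int i = 0; i < 8; ++i) in_data[i] = static_cast<float>(i) - 4.0f;

  predictor->Run();

  auto output = predictor->GetOutput(0);
  const float* out_data = output->data<float>();
  (void)out_data;  // negatives should come back clamped to zero by relu
  return 0;
}
```

A real run would also verify that the relu op was actually dispatched to an XPU kernel rather than a host fallback.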
  3. 11 Oct 2019 (1 commit)
    • [LITE][OPENCL] support image2d type (#2158) · 77cdbdce
      Yuan Shuai committed
      * [LITE][OPENCL] support image2d (an illustrative sketch follows this entry). test=develop

      * change the context to take image* types into account. test=develop
      
      * add layout, relu image kernels. test=develop
      
      * replace image_data with data and mutable_image_data with mutable_data. test=develop
      
      * comment unused var. test=develop
      
      * remove unused var. test=develop
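This entry moves the OpenCL backend toward image2d storage (with matching layout and relu image kernels). As a hedged aside, the sketch below contrasts a plain cl::Buffer with a cl::Image2D allocation using the OpenCL C++ bindings; the CL_RGBA/CL_HALF_FLOAT packing, the shapes, and the four-channels-per-texel layout are assumptions about typical image2d usage, not code from this commit.

```cpp
// Illustrative only: how an OpenCL image2d allocation differs from a plain
// buffer. The RGBA/half-float packing is an assumption about the usual
// 4-channel image layout, not code from this commit.
#define CL_HPP_MINIMUM_OPENCL_VERSION 120
#define CL_HPP_TARGET_OPENCL_VERSION 120
#include <CL/cl2.hpp>

#include <cstdio>

int main() {
  // One context on the default GPU device.
  cl::Context context(CL_DEVICE_TYPE_GPU);

  // Plain buffer: 1-D linear memory, addressed by element offset in kernels.
  const size_t num_floats = 1 * 4 * 32 * 32;  // e.g. an NCHW 1x4x32x32 tensor
  cl::Buffer buf(context, CL_MEM_READ_WRITE, num_floats * sizeof(float));

  // Image2D: 2-D texture memory. Each texel holds four channels (RGBA), so a
  // 4-channel feature map packs into a width x height image, and reads go
  // through the GPU's texture cache and sampler (read_imageh in kernels).
  cl::ImageFormat fmt(CL_RGBA, CL_HALF_FLOAT);
  cl::Image2D img(context, CL_MEM_READ_WRITE, fmt,
                  /*width=*/32, /*height=*/32);

  std::printf("buffer: %zu bytes, image: 32x32 RGBA half texels\n",
              num_floats * sizeof(float));
  return 0;
}
```

The practical trade-off is that image2d reads benefit from the texture cache and built-in half-float conversion, while buffers remain simpler for ops whose access patterns do not map cleanly onto a 2-D texel grid.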
  4. 27 Sep 2019 (1 commit)
    • can run yolov3 fp32 on cuda devices (#2092) · 3d6d744f
      Zhaolong Xing committed
      * add conv int8 support (for the case where the input or output channel count is not a multiple of 4; see the padding sketch after this entry)
      add add_kernel for cuda.
      
      * can run yolov3 fp32
      test=develop
      
      * fix a bug in the yolov3 run
      test=develop
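The first bullet mentions int8 conv support when the input or output channel count is not a multiple of 4. Int8 convolution paths on CUDA commonly require 4-channel-aligned vectorized layouts, so a standard workaround is to zero-pad the channel dimension up to the next multiple of 4. The sketch below shows only that padding arithmetic in plain C++; the function names and layout are hypothetical and not taken from this commit.

```cpp
// Illustrative only: zero-padding channel counts up to a multiple of 4, the
// usual workaround when an int8 conv path expects 4-channel-aligned tensors
// (e.g. NCHW4-style vector layouts). Not taken from this commit.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Round a channel count up to the next multiple of 4.
inline int AlignUp4(int c) { return (c + 3) / 4 * 4; }

// Copy an NCHW int8 tensor into a channel-padded NCHW tensor, filling the
// extra channels with zeros so they contribute nothing to the convolution.
std::vector<int8_t> PadChannels(const std::vector<int8_t>& src,
                                int n, int c, int h, int w) {
  const int c_pad = AlignUp4(c);
  std::vector<int8_t> dst(static_cast<size_t>(n) * c_pad * h * w, 0);
  for (int in = 0; in < n; ++in)
    for (int ic = 0; ic < c; ++ic)
      std::copy(src.begin() + (static_cast<size_t>(in) * c + ic) * h * w,
                src.begin() + (static_cast<size_t>(in) * c + ic + 1) * h * w,
                dst.begin() + (static_cast<size_t>(in) * c_pad + ic) * h * w);
  return dst;
}

int main() {
  std::vector<int8_t> x(1 * 3 * 4 * 4, 1);   // 3 channels, not 4-aligned
  auto padded = PadChannels(x, 1, 3, 4, 4);  // padded to 4 channels
  std::printf("padded channels: %d, size: %zu\n", AlignUp4(3), padded.size());
  return 0;
}
```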
  5. 09 Sep 2019 (1 commit)
  6. 06 Sep 2019 (1 commit)
    • add cudnn conv fp32, int8 support (#1974) · f3124b30
      Zhaolong Xing committed
      * paddle lite cuda init
      can run a model with leaky_relu
      
      * add the missing file.
      test=develop
      
      * add the load from memory interface.
      test=develop
      
      * refine this PR, fix comments,
      fix CI error
      test=develop
      
      * conv impl (a hedged cuDNN sketch follows this entry)
      fp32:
      conv, conv+bias, conv+bias+relu, conv+bias+leaky_relu

      int8:
      conv, conv+bias+relu (int8 or fp32 output), conv+bias+leaky_relu (int8 or fp32 output)

      can run conv+bias+relu using cxx_api
      test=develop
      
      * move the lite/cuda/math to backends/cuda/math
      test=develop
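The fp32 list above covers fused conv, conv+bias, conv+bias+relu, and conv+bias+leaky_relu variants on top of cuDNN. As a hedged illustration of how such a fused path is typically expressed, the sketch below calls cudnnConvolutionBiasActivationForward with a relu activation; the shapes, the fixed IMPLICIT_PRECOMP_GEMM algorithm choice, the zero-filled device buffers, and the absence of error checking are simplifications, and none of this is the code added by the commit.

```cpp
// Illustrative only: a fused conv + bias + relu forward pass expressed with
// cuDNN 7-era calls. Shapes and the algorithm are fixed for brevity; real
// code would check every return status and upload actual weights and input.
#include <cudnn.h>
#include <cuda_runtime.h>

int main() {
  const int N = 1, C = 3, H = 32, W = 32;  // input NCHW
  const int K = 16, R = 3, S = 3;          // filter KCRS

  cudnnHandle_t handle;
  cudnnCreate(&handle);

  cudnnTensorDescriptor_t x_desc, y_desc, bias_desc;
  cudnnFilterDescriptor_t w_desc;
  cudnnConvolutionDescriptor_t conv_desc;
  cudnnActivationDescriptor_t act_desc;
  cudnnCreateTensorDescriptor(&x_desc);
  cudnnCreateTensorDescriptor(&y_desc);
  cudnnCreateTensorDescriptor(&bias_desc);
  cudnnCreateFilterDescriptor(&w_desc);
  cudnnCreateConvolutionDescriptor(&conv_desc);
  cudnnCreateActivationDescriptor(&act_desc);

  cudnnSetTensor4dDescriptor(x_desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, N, C, H, W);
  cudnnSetFilter4dDescriptor(w_desc, CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW, K, C, R, S);
  cudnnSetConvolution2dDescriptor(conv_desc, 1, 1, 1, 1, 1, 1,
                                  CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT);

  int on, oc, oh, ow;  // output shape derived from input, filter, pad, stride
  cudnnGetConvolution2dForwardOutputDim(conv_desc, x_desc, w_desc, &on, &oc, &oh, &ow);
  cudnnSetTensor4dDescriptor(y_desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, on, oc, oh, ow);
  cudnnSetTensor4dDescriptor(bias_desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, 1, K, 1, 1);
  cudnnSetActivationDescriptor(act_desc, CUDNN_ACTIVATION_RELU, CUDNN_NOT_PROPAGATE_NAN, 0.0);

  // Device buffers; a real kernel would upload trained weights and input data.
  void *x, *w, *bias, *y, *workspace = nullptr;
  cudaMalloc(&x, sizeof(float) * N * C * H * W);
  cudaMalloc(&w, sizeof(float) * K * C * R * S);
  cudaMalloc(&bias, sizeof(float) * K);
  cudaMalloc(&y, sizeof(float) * on * oc * oh * ow);
  cudaMemset(x, 0, sizeof(float) * N * C * H * W);
  cudaMemset(w, 0, sizeof(float) * K * C * R * S);
  cudaMemset(bias, 0, sizeof(float) * K);

  // The fused call is documented for the IMPLICIT_PRECOMP_GEMM algorithm.
  cudnnConvolutionFwdAlgo_t algo = CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM;
  size_t ws_bytes = 0;
  cudnnGetConvolutionForwardWorkspaceSize(handle, x_desc, w_desc, conv_desc, y_desc,
                                          algo, &ws_bytes);
  cudaMalloc(&workspace, ws_bytes);

  // y = relu(conv(x, w) + alpha2 * z + bias); aliasing z to y with alpha2 = 0
  // drops the residual term, leaving the plain conv + bias + relu fusion.
  const float alpha1 = 1.0f, alpha2 = 0.0f;
  cudnnConvolutionBiasActivationForward(handle, &alpha1, x_desc, x, w_desc, w,
                                        conv_desc, algo, workspace, ws_bytes,
                                        &alpha2, y_desc, y, bias_desc, bias,
                                        act_desc, y_desc, y);

  cudaFree(x); cudaFree(w); cudaFree(bias); cudaFree(y); cudaFree(workspace);
  cudnnDestroy(handle);
  return 0;
}
```

Note that cuDNN's fused call only supports relu (and identity) activations, so a conv+bias+leaky_relu variant is typically implemented as the convolution followed by a separate elementwise leaky_relu kernel.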
  7. 16 Aug 2019 (1 commit)