1. 14 Nov 2019, 1 commit
  2. 05 Nov 2019, 1 commit
  3. 28 Oct 2019, 1 commit
    • [LITE][XPU] Initial support for XPU (#2202) · 06d058fe
      hong19860320 committed
      * Initial support for XPU
      * Fix compiling errors of XPU
      * Move XPU op kernel bridges from backends to kernels to fix deps order
      * Change the namespace and directory of XPU bridges
      * Add XPU SDK
      * Fix header files and namespace of XPU SDK
      * Add unit tests for relu and conv2d ops
      * Restore the modification of paddle_api_test
      * Support a simple model that contains only a relu layer
      * Add compiling scripts for XPU
      * Fix compiling errors of XPU
      * Add comments for XPU LoadModel and BuildModel
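The relu-only model mentioned above can be driven end to end through the Paddle Lite C++ API. Below is a minimal sketch of that flow, assuming the public CxxConfig/Place interface from paddle_api.h; the TARGET(kXPU) place and the model directory are assumptions based on this commit, not verified against it.

```cpp
#include <iostream>
#include "paddle_api.h"  // Paddle Lite public C++ API (lite/api/paddle_api.h)

using namespace paddle::lite_api;  // NOLINT

int main() {
  CxxConfig config;
  config.set_model_dir("./relu_model");  // hypothetical path to the relu-only test model
  // Prefer the XPU place, fall back to host; kXPU is assumed to follow the
  // naming pattern of the existing targets added in this commit.
  config.set_valid_places({Place{TARGET(kXPU), PRECISION(kFloat)},
                           Place{TARGET(kHost), PRECISION(kFloat)}});

  auto predictor = CreatePaddlePredictor<CxxConfig>(config);

  // Feed a dummy 1x16 input with mixed signs.
  auto input = predictor->GetInput(0);
  input->Resize({1, 16});
  auto* in_data = input->mutable_data<float>();
  for (int i = 0; i < 16; ++i) in_data[i] = i - 8.0f;

  predictor->Run();

  auto output = predictor->GetOutput(0);
  std::cout << "relu(out[0]) = " << output->data<float>()[0] << std::endl;  // expect 0
  return 0;
}
```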
  4. 23 Oct 2019, 1 commit
  5. 22 Oct 2019, 2 commits
    • Optimize quant_dequant (#2215) · f480d474
      juncaipeng committed
      * Add DeleteQuantOpFuser
      * Add fake_quantize_dequantize_moving_avg_abs_max_op
      * Add DeleteQuantDequantOpFuser
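The fake quantize-dequantize ops that these fusers strip out are numerically just a round trip through the int8 grid, used at training time to collect scales. A rough reference of that round trip (per-tensor, symmetric) is sketched below as an illustration of the math, not the Paddle Lite pass code; the function name is a placeholder.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Quantize-then-dequantize round trip of a fake_quantize_dequantize op.
// bit_length = 8 gives the usual int8 range [-127, 127]; `scale` is the
// abs-max value tracked (e.g. by moving average) during training.
std::vector<float> FakeQuantDequant(const std::vector<float>& x, float scale,
                                    int bit_length = 8) {
  const float range = static_cast<float>((1 << (bit_length - 1)) - 1);  // 127
  std::vector<float> out(x.size());
  for (size_t i = 0; i < x.size(); ++i) {
    float q = std::round(x[i] / scale * range);   // quantize
    q = std::max(-range, std::min(range, q));     // clamp to the int8 grid
    out[i] = q * scale / range;                   // dequantize
  }
  return out;
}
// At inference the fuser deletes this op and records `scale` on the consumer
// (e.g. conv2d/mul) so the real int8 kernel can use it directly.
```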
    • Transformer PR (#2214) · f0a6c1eb
      TianXiaogang committed
      * feat: add beam_search_special function to support NLP models
      
      * fix: add beam_search_compute kernel input and output
      
      * feat: add assign op & copy_compute kernel
      
      * feat: add fill_const_batch_size_like op & kernel
      
      * feat: add layer_norm op and kernel and ut
      
      * fix: fix several bugs
          fix mul_op infer_shape bug when x_dim_idx = 2, x_dims.size() = 3 & y_dim_idx = 1, y_dims.size() = 2
          fix elementwise_compute bug when all of y's dims are 1
          fix beam_search choosing the wrong math_func
          fix layer_norm attribute-reading bug
          fix fill_constant_batch_size_like shape-setting bug
      
      * feat: add gather op and kernel & transformer unit test
      
      * feat: add ops and fix bugs to support the transformer model
             fix type_cast passes to skip `while`
             fix elementwise infer_shape bug when x.dims = 3 and y.dims = {1} & axis = 0
             fix lookup_table compute bug
             fix read_from_array/beam_search/increment/compare/gather ops data_type problems
      
      * fix:
          add a word-reading interface to the transformer unit test
          fix copy/gather/norm/layer_norm include path problem
      
      * fix: debug info
      
      * fix: fix input reshape bug
      
      * fix: fix norm bug
      
      * style: style fix & test=develop
      
      * style: fix operators cmakelist
      
      * style: fix operators cmakelist; test=develop
      
      * fix and test=develop
      
      * fix and test=develop
      
      * style: style fix; test=develop
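Of the ops added in this PR, layer_norm carries the most arithmetic. The sketch below is a plain reference of what the kernel computes (normalize over the trailing dims starting at begin_norm_axis, then scale and shift); it is a check of the semantics under those assumptions, not the ARM kernel itself.

```cpp
#include <cmath>

// Reference layer_norm: x is viewed as [outer, inner], where `inner` is the
// product of the dims from begin_norm_axis onward; each row is normalized.
void LayerNormRef(const float* x, float* y, int outer, int inner,
                  const float* scale,  /* size inner, may be nullptr */
                  const float* bias,   /* size inner, may be nullptr */
                  float epsilon = 1e-5f) {
  for (int i = 0; i < outer; ++i) {
    const float* xi = x + i * inner;
    float* yi = y + i * inner;
    float mean = 0.f, var = 0.f;
    for (int j = 0; j < inner; ++j) mean += xi[j];
    mean /= inner;
    for (int j = 0; j < inner; ++j) var += (xi[j] - mean) * (xi[j] - mean);
    var /= inner;
    const float inv_std = 1.f / std::sqrt(var + epsilon);
    for (int j = 0; j < inner; ++j) {
      float v = (xi[j] - mean) * inv_std;
      if (scale) v *= scale[j];
      if (bias) v += bias[j];
      yi[j] = v;
    }
  }
}
```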
  6. 17 Oct 2019, 1 commit
  7. 15 Oct 2019, 1 commit
    • [NPU] Fix and refine support for multiple NPU models (#2037) · 7a731b7f
      hong19860320 committed
      * [NPU] Fix the bug of loading multi NPU models
      test=develop
      
      * [NPU] Use a lite tensor to store the NPU model, fix the management of multiple NPU models, support loading the NPU model from memory, and reduce the modifications to the framework
      test=develop
      
      * [NPU] Remove redundant header files for NPU bridges,
      test=develop
      
      * [NPU] fix NPU deps
      test=develop
      
      * [NPU] refine the compiling script for NPU
      test=develop
      
      * [NPU] remove redundant subdirectory in lite/CMakeLists.txt
      test=develop
      
      * [NPU] Fix and refine NPU test case
      test=develop
      
      * [NPU] Revert the modifications to non-NPU modules
      test=develop
      
      * [NPU] Remove NPU bridges if target is tiny publish
      test=develop
  8. 14 Oct 2019, 3 commits
  9. 11 Oct 2019, 1 commit
    • CUDA: can run yolov3 int8 (#2172) · 7931104f
      Zhaolong Xing committed
      * add conv int8 support (for the case where the input or output channel count is not a multiple of 4)
      add add_kernel for cuda.
      
      * can run yolov3 fp32
      test=develop
      
      * fix bug when running yolov3
      test=develop
      
      * can run yolov3 int8 test=develop
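Running yolov3 in int8 relies on symmetric per-tensor quantization of the activations and filters, with the int32 conv accumulator brought back to float. The sketch below illustrates that scale handling only; it is not the CUDA kernel, and the scale convention (scale = abs_max / 127) is an assumption stated in the comments.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Symmetric per-tensor quantization to int8, with `scale` = abs_max / 127:
// q = clamp(round(x / scale), -127, 127).
std::vector<int8_t> QuantizeToInt8(const std::vector<float>& x, float scale) {
  std::vector<int8_t> q(x.size());
  for (size_t i = 0; i < x.size(); ++i) {
    float v = std::round(x[i] / scale);
    q[i] = static_cast<int8_t>(std::max(-127.f, std::min(127.f, v)));
  }
  return q;
}

// After an int8 conv/gemm, the int32 accumulator is dequantized with the
// product of the input and filter scales (per-tensor case).
inline float Dequantize(int32_t acc, float in_scale, float filter_scale) {
  return static_cast<float>(acc) * in_scale * filter_scale;
}
```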
  10. 18 Sep 2019, 1 commit
    • fix bias quantize error && fix clang build error (#2049) · 81dffbe8
      Xiaoyang LI committed
      * fix gemm_int8, gemv-int8 and conv-int8 math function, add float bias
      
      * change conv impl
      
      * neon int8 kernel support float bias
      
      * arm compute kernel support float bias
      
      * add math_test target
      
      * add tensor utils for testing, fix sgemm ut error
      
      * add gemm_int8 unit test, support float bias
      
      * fix build script
      
      * add conv compute unit test for arm
      
      * fix build script, test=develop
      
      * fix fp32 dw conv3x3s1, test=develop
      
      * add fp32 dw conv3x3s1, test=develop
      
      * add armv7 fp32 dw conv3x3s1, test=develop
      
      * add fp32 depthwise conv3x3s2, test=develop
      
      * fix fp32 conv3x3 depthwise build error, test=develop
      
      * fix gemm_like conv trans weights error, test=develop
      
      * fix int8 depthwise conv3x3 error, test=develop
      
      * turn on all test for arm fp32 conv, test=develop
      
      * fix int8 conv1x1 error
      
      * fix int8 direct conv3x3s1 error, test=develop
      
      * fix int8 direct conv3x3s2, test=develop
      
      * turn on all test for arm int8 conv, test=develop
      
      * fix int8 fc error, change mobilenetv1-int8 ground-truth result to fluid, test=develop
      
      * remove debug info, strip ut binary, test=develop
      
      * fix conv compute error, test=develop
      
      * change Init() to ReInitWhenNeeded(), test=develop
      
      * fix code style, test=develop
      
      * remove engine_test, test=develop
      
      * fix building server tests error, test=develop
      
      * fix sdot clang build error, test=develop
      
      * fix sgemm ut timeout error, test=develop
      
      * fix clang build error, test=develop
      
      * turn off math basic test due to ci time out, test=develop
      
      * fix conv_int8 ut error, test=develop
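The "float bias" this commit adds to gemm_int8/gemv-int8/conv-int8 enters after dequantization: accumulate in int32, rescale to float, then add the bias and apply the activation. The naive reference below shows that ordering under those assumptions; it is not the NEON/sdot kernel.

```cpp
#include <algorithm>
#include <cstdint>

// Reference int8 GEMM (C = A * B) with a float bias added after dequantization.
// A: MxK int8, B: KxN int8, bias: size N (float, may be nullptr), C: MxN float.
void GemmInt8FloatBias(const int8_t* A, const int8_t* B, const float* bias,
                       float* C, int M, int N, int K,
                       float scale_a, float scale_b, bool relu) {
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      int32_t acc = 0;  // widen to int32 while accumulating
      for (int k = 0; k < K; ++k) {
        acc += static_cast<int32_t>(A[m * K + k]) *
               static_cast<int32_t>(B[k * N + n]);
      }
      float v = acc * scale_a * scale_b;  // dequantize the accumulator
      if (bias) v += bias[n];             // float bias is applied in fp32
      if (relu) v = std::max(v, 0.f);
      C[m * N + n] = v;
    }
  }
}
```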
  11. 17 Sep 2019, 2 commits
  12. 16 Sep 2019, 1 commit
    • GRU op (#2002) · eb42f9ee
      lhl960107 committed
      * add x86 gru && relu && sequence_expand_as ops, test=develop
  13. 12 Sep 2019, 2 commits
  14. 10 Sep 2019, 1 commit
  15. 09 Sep 2019, 1 commit
  16. 07 Sep 2019, 1 commit
  17. 06 Sep 2019, 1 commit
    • add cudnn conv fp32, int8 support (#1974) · f3124b30
      Zhaolong Xing committed
      * paddle lite cuda init
      can run model with leaky_relu
      
      * add the missing file.
      test=develop
      
      * add the load from memory interface.
      test=develop
      
      * refine this pr. fix comments
      fix ci error
      test=develop
      
      * conv impl
      fp32:
      conv, conv+bias, conv+bias+relu, conv+bias+leaky_relu
      
      int8:
      conv, conv+bias+relu (int8 or fp32 output), conv+bias+leaky_relu (int8 or fp32 output)
      
      can run conv+bias+relu using cxx_api
      test=develop
      
      * move the lite/cuda/math to backends/cuda/math
      test=develop
  18. 03 Sep 2019, 1 commit
    • rewrite multiclass_nms according to fluid, test=develop (#1945) · deaddf9d
      juncaipeng committed
      * add ops for faster rcnn
      
      * disable test for generate_proposals and roi_align, test=develop
      
      * remove .swp file
      
      * remove log in tensor slice
      
      * finish the unit test for roi_align, test=develop
      
      * add box_clip op and fix tensor slice bug
      
      * remove four ops that were added twice
      
      * rewrite the implementation of box_coder and sequence_expand, add faster_rcnn_test, test=develop
      
      * fix test bug of box_clip in x86 server, test=develop
      
      * rewrite multiclass_nms according to fluid, test=develop
      
      * fix param load bug in box_coder and multiclass_nms op, test=develop
      
      * fix value conversion error in multiclass_nms, test=develop
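At its core, multiclass_nms applies a score threshold and greedy IoU suppression per class, then keeps the top detections overall. The condensed single-class sketch below captures that greedy loop; it follows the fluid semantics only at a high level and is not the rewritten op.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

struct Box { float x1, y1, x2, y2; };

static float IoU(const Box& a, const Box& b) {
  float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
  float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
  float inter = std::max(0.f, ix2 - ix1) * std::max(0.f, iy2 - iy1);
  float area_a = (a.x2 - a.x1) * (a.y2 - a.y1);
  float area_b = (b.x2 - b.x1) * (b.y2 - b.y1);
  return inter / (area_a + area_b - inter + 1e-10f);
}

// Greedy NMS for one class: keep the highest-scoring box, drop boxes whose
// IoU with any kept box exceeds nms_threshold, repeat.
std::vector<int> NmsOneClass(const std::vector<Box>& boxes,
                             const std::vector<float>& scores,
                             float score_threshold, float nms_threshold,
                             int max_keep) {
  std::vector<int> order(boxes.size());
  std::iota(order.begin(), order.end(), 0);
  std::sort(order.begin(), order.end(),
            [&](int a, int b) { return scores[a] > scores[b]; });
  std::vector<int> keep;
  for (int idx : order) {
    if (scores[idx] < score_threshold) break;  // sorted, so we can stop early
    bool suppressed = false;
    for (int k : keep) {
      if (IoU(boxes[idx], boxes[k]) > nms_threshold) { suppressed = true; break; }
    }
    if (!suppressed) keep.push_back(idx);
    if (static_cast<int>(keep.size()) >= max_keep) break;
  }
  return keep;  // multiclass_nms runs this per class, then keeps keep_top_k overall
}
```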
  19. 02 Sep 2019, 1 commit
    • Add ops and fix bugs for Faster RCNN (#1942) · 635b4958
      juncaipeng committed
      * add ops for faster rcnn
      
      * disable test for generate_proposals and roi_align, test=develop
      
      * remove .swp file
      
      * remove log in tensor slice
      
      * finish the unit test for roi_align, test=develop
      
      * add box_clip op and fix tensor slice bug
      
      * remove four ops that were added twice
      
      * rewrite the implementation of box_coder and sequence_expand, add faster_rcnn_test, test=develop
      
      * fix test bug of box_clip in x86 server, test=develop
  20. 29 Aug 2019, 2 commits
  21. 28 Aug 2019, 1 commit
  22. 26 Aug 2019, 1 commit
    • Add matmul op (#1837) · b35e89d6
      Wilber committed
      * test=develop add matmul_op
      
      * use lite::arm::math::sgemm func to implement matmul
      
      * test=develop  pre-commit command to run clang-format
      
      * Revert "test=develop  pre-commit command to run clang-format"
      
      This reverts commit 3f56474f.
      
      * test=develop pre-commit command to run clang-format
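The matmul op is implemented by lowering to the existing sgemm routine once the transpose flags and the effective M, N, K are resolved. The sketch below shows that lowering for the 2-D case; the sgemm_ref function here is a hypothetical plain-C stand-in, not the actual lite::arm::math::sgemm prototype.

```cpp
// Hypothetical sgemm stand-in: C = alpha * op(A) * op(B), row-major.
void sgemm_ref(bool trans_a, bool trans_b, int M, int N, int K, float alpha,
               const float* A, const float* B, float* C) {
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      float acc = 0.f;
      for (int k = 0; k < K; ++k) {
        float a = trans_a ? A[k * M + m] : A[m * K + k];  // A stored KxM if transposed
        float b = trans_b ? B[n * K + k] : B[k * N + n];  // B stored NxK if transposed
        acc += a * b;
      }
      C[m * N + n] = alpha * acc;
    }
  }
}

// matmul(X, Y) with transpose_X / transpose_Y attributes reduces to a single
// gemm call once M, N, K are derived from the input dims.
void MatMul2D(const float* X, const float* Y, float* Out, int M, int N, int K,
              bool transpose_x, bool transpose_y, float alpha) {
  sgemm_ref(transpose_x, transpose_y, M, N, K, alpha, X, Y, Out);
}
```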
  23. 23 Aug 2019, 1 commit
    • feat: (#1836) · 9388dace
      TianXiaogang committed
      add model_run_test_image
          add range_max_quant op
          add flatten op
          add flatten2 op
      fix:
          fix density_prior_box density_size type from float to int
          fix some get_attr checks for prior_box and density_prior_box
      test=develop
  24. 22 Aug 2019, 1 commit
  25. 16 Aug 2019, 1 commit