1. 07 Nov 2019 (1 commit)
  2. 06 Nov 2019 (2 commits)
  3. 05 Nov 2019 (1 commit)
  4. 04 Nov 2019 (1 commit)
  5. 28 Oct 2019 (1 commit)
    • [LITE][XPU] initial support for XPU (#2202) · 06d058fe
      Committed by hong19860320
      * Initial support for XPU
      * Fix compiling errors of XPU
      * Move XPU op kernel bridges from backends to kernels to fix deps order
      * Change the namespace and directory of XPU bridges
      * Add XPU SDK
      * Fix header files and namespace of XPU SDK
      * Add unit tests for relu and conv2d ops
      * Restore the modification of paddle_api_test
      * Supports simple model which contains only a relu layer
      * Add compiling scripts for XPU
      * Fix compiling errors of XPU
      * Add comments for XPU LoadModel and BuildModel
  6. 23 Oct 2019 (2 commits)
    • remove log in reshape, fix conv error when padding size=4 (#2199) · 06d7a8f5
      Committed by Xiaoyang LI
      * remove log in reshape, fix conv error when padding size=4, test=develop
      
      * fix style, test=develop
      
      * remove useless code, test=develop
      
      * remove redundant model test file, test=develop
      
      * change cluster to power_mode, test=develop
      
      * fix build error, test=develop
      
      * change cluster to power_mode, test=develop
      
      * change opt_nb to use_optimize_nb, test=develop
      
      * null, test=develop
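For context on the padding fix above: conv ops may carry either a 2-value or a 4-value `paddings` attribute, and a kernel that assumes 2 values breaks on the 4-value form. A minimal normalization sketch, assuming the 2-value form is {pad_h, pad_w} and the 4-value form is {top, bottom, left, right}; the exact layout the fixed kernel expects is an assumption here, not taken from the patch:

```cpp
#include <array>
#include <stdexcept>
#include <vector>

// Expand a conv `paddings` attribute to the explicit
// {top, bottom, left, right} form (layout assumptions noted above).
std::array<int, 4> NormalizePaddings(const std::vector<int>& paddings) {
  if (paddings.size() == 2) {   // assumed {pad_h, pad_w}
    return {paddings[0], paddings[0], paddings[1], paddings[1]};
  }
  if (paddings.size() == 4) {   // assumed already {top, bottom, left, right}
    return {paddings[0], paddings[1], paddings[2], paddings[3]};
  }
  throw std::invalid_argument("paddings must have 2 or 4 elements");
}
```

With explicit paddings, the usual (dilation-free) output height is out_h = (in_h + top + bottom - kernel_h) / stride_h + 1.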
    • Enable pool2d, dropout, transpose and transpose2 op on x86 (#2226) · aa507f9b
      Committed by liu zhengxi
      * enable pool2d op on x86 and add its unit tests, test=develop
      
      * enable dropout op and add its unit tests, test=develop
      
      * add transpose, transpose2 ops and add their unit tests, test=develop
  7. 22 Oct 2019 (3 commits)
    • Modify parse_op_registry (#2239) · 351a7743
      Committed by juncaipeng
      * modify parse_op_registry so that it can still obtain op_name when REGISTER_LITE_OP and the op_name are not on the same line (see the sketch below), test=develop
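The real parser lives in the build tooling; purely as an illustration of why a whitespace-tolerant match handles the split-line case, here is a hypothetical C++ sketch (the macro arguments in the test string are illustrative, not the actual registration):

```cpp
#include <iostream>
#include <regex>
#include <string>

// Extract the op name from a REGISTER_LITE_OP invocation even when the
// macro name and the op name sit on different lines: \s* also matches
// newlines, so "REGISTER_LITE_OP(\n    relu, ...)" still yields "relu".
std::string ExtractOpName(const std::string& source) {
  static const std::regex pattern(R"(REGISTER_LITE_OP\s*\(\s*([A-Za-z0-9_]+))");
  std::smatch match;
  if (std::regex_search(source, match, pattern)) {
    return match[1].str();
  }
  return "";
}

int main() {
  std::string snippet = "REGISTER_LITE_OP(\n    relu, paddle::lite::operators::ReluOp);";
  std::cout << ExtractOpName(snippet) << std::endl;  // prints "relu"
  return 0;
}
```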
    • Optimize quant_dequant (#2215) · f480d474
      Committed by juncaipeng
      * Add DeleteQuantOpFuser
      * Add fake_quantize_dequantize_moving_avg_abs_max_op
      * Add DeleteQuantDequantOpFuser
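For context on what the deleted ops compute: a fake quantize-dequantize pair simulates int8 rounding in fp32, so a fuser can fold it away once the scale has been recorded on the consuming op. A rough sketch of the abs_max variant follows; the moving-average bookkeeping of fake_quantize_dequantize_moving_avg_abs_max is omitted, and this is the standard formulation rather than the exact kernel:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Simulate int8 quantization in fp32: quantize with an abs-max scale,
// then immediately dequantize. The output stays fp32 but carries the
// rounding error a real int8 kernel would introduce.
std::vector<float> FakeQuantDequant(const std::vector<float>& x, int bit_length = 8) {
  const float qmax = static_cast<float>((1 << (bit_length - 1)) - 1);  // 127 for int8
  float scale = 0.f;
  for (float v : x) scale = std::max(scale, std::abs(v));
  if (scale == 0.f) return x;

  std::vector<float> out(x.size());
  for (size_t i = 0; i < x.size(); ++i) {
    float q = std::round(x[i] / scale * qmax);
    q = std::min(std::max(q, -qmax), qmax);   // clip to the int8 range
    out[i] = q * scale / qmax;                // dequantize back to fp32
  }
  return out;
}
```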
    • Transformer PR (#2214) · f0a6c1eb
      Committed by TianXiaogang
      * feat: add beam_search_special function to support NLP models
      
      * fix: add beam_search_compute kernel input and output
      
      * feat: add assign op & copy_compute kernel
      
      * feat: add fill_const_batch_size_like op & kernel
      
      * feat: add layer_norm op and kernel and ut
      
      * fix: fix some bugs
          fix mul_op infer_shape bug when x_dim_idx = 2, x_dims.size()=3 & y_dim_idx = 1, y_dims.size()=2 (see the mul shape sketch after this commit)
          fix elementwise_compute bug when y axis is all 1
          fix beam_search choosing the wrong math_func
          fix layer_norm get-attr bug
          fix fill_constant_batch_size_like shape_set bug
      
      * feat: add gather op and kernel & transformer ut
      
      * feat: add ops and fix bugs to support the transformer model
             fix type_cast passes to skip `while`
             fix elementwise infer_shape bug when x.dims=3 and y.dims={1} & axis=0
             fix lookup_table compute bug
             fix read_from_array/beam_search/increment/compare/gather ops data_type problems
      
      * fix:
          add word-reading interface to transformer ut
          fix copy/gather/norm/layer_norm include path problems
      
      * fix: debug info
      
      * fix: fix input reshape bug
      
      * fix: fix norm bug
      
      * style: style fix & test=develop
      
      * style: fix operators cmakelist
      
      * style: fix operators cmakelist; test=develop
      
      * fix and test=develop
      
      * fix and test=develop
      
      * style: style fix; test=develop
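Two of the fixes listed above are shape-inference issues; the mul_op one is the easiest to pin down. As a reference for the rule involved, here is a hedged sketch of the standard Fluid mul convention (not the patched code), assuming x_dim_idx/y_dim_idx refer to x_num_col_dims/y_num_col_dims:

```cpp
#include <cassert>
#include <cstdint>
#include <iostream>
#include <vector>

// Output dims of mul(X, Y) under the Fluid convention:
// X is viewed as [prod(x[0:xncd]), prod(x[xncd:])],
// Y as [prod(y[0:yncd]), prod(y[yncd:])],
// and the result keeps x[0:xncd] followed by y[yncd:].
std::vector<int64_t> MulInferShape(const std::vector<int64_t>& x_dims,
                                   const std::vector<int64_t>& y_dims,
                                   int x_num_col_dims, int y_num_col_dims) {
  assert(x_num_col_dims < static_cast<int>(x_dims.size()));
  assert(y_num_col_dims < static_cast<int>(y_dims.size()));
  std::vector<int64_t> out(x_dims.begin(), x_dims.begin() + x_num_col_dims);
  out.insert(out.end(), y_dims.begin() + y_num_col_dims, y_dims.end());
  return out;
}

int main() {
  // The case from the commit: 3-D X with x_num_col_dims=2, 2-D Y with y_num_col_dims=1.
  auto out = MulInferShape({4, 8, 16}, {16, 32}, 2, 1);
  for (auto d : out) std::cout << d << " ";  // prints "4 8 32"
  std::cout << std::endl;
  return 0;
}
```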
  8. 21 Oct 2019 (1 commit)
  9. 17 Oct 2019 (1 commit)
  10. 15 Oct 2019 (1 commit)
    • [NPU] Fix and refine support for multi NPU models (#2037) · 7a731b7f
      Committed by hong19860320
      * [NPU] Fix the bug of loading multi NPU models
      test=develop
      
      * [NPU] Use lite tensor to store NPU model, fix the management of multi NPU models, support loading NPU model from memory and reduce the modification of framework
      test=develop
      
      * [NPU] Remove redundant header files for NPU bridges,
      test=develop
      
      * [NPU] fix NPU deps
      test=develop
      
      * [NPU] refine the compiling script for NPU
      test=develop
      
      * [NPU] remove redundant subdirectory in lite/CMakeLists.txt
      test=develop
      
      * [NPU] Fix and refine NPU test case
      test=develop
      
      * [NPU] revert the modification of other non-NPU modules
      test=develop
      
      * [NPU] Remove NPU bridges if target is tiny publish
      test=develop
  11. 14 Oct 2019 (3 commits)
  12. 11 Oct 2019 (2 commits)
  13. 27 Sep 2019 (1 commit)
    • can run yolov3 fp32 on cuda devices (#2092) · 3d6d744f
      Committed by Zhaolong Xing
      * add conv int8 support (for the case where the input or output channel count is not a multiple of 4);
      add add_kernel for cuda.
      
      * can run yolov3 fp32
      test=develop
      
      * 1. fix bug with yolov3 run
      test=develop
  14. 20 Sep 2019 (1 commit)
  15. 19 Sep 2019 (2 commits)
  16. 18 Sep 2019 (1 commit)
    • fix bias quantize error && fix clang build error (#2049) · 81dffbe8
      Committed by Xiaoyang LI
      * fix gemm_int8, gemv-int8 and conv-int8 math functions, add float bias (see the dequantization sketch after this commit)
      
      * change conv impl
      
      * neon int8 kernel support float bias
      
      * arm compute kernel support float bias
      
      * add math_test target
      
      * add tensor utils for testing, fix sgemm ut error
      
      * add gemm_int8 unit test, support float bias
      
      * fix build script
      
      * add conv compute unit test for arm
      
      * fix build script, test=develop
      
      * fix fp32 dw conv3x3s1, test=develop
      
      * add fp32 dw conv3x3s1, test=develop
      
      * add armv7 fp32 dw conv3x3s1, test=develop
      
      * add fp32 depthwise conv3x3s2, test=develop
      
      * fix fp32 conv3x3 depthwise build error, test=develop
      
      * fix gemm_like conv trans weights error, test=develop
      
      * fix int8 depthwise conv3x3 error, test=develop
      
      * turn on all test for arm fp32 conv, test=develop
      
      * fix int8 conv1x1 error
      
      * fix int8 direct conv3x3s1 error, test=develop
      
      * fix int8 direct conv3x3s2, test=develop
      
      * turn on all test for arm int8 conv, test=develop
      
      * fix int8 fc error, change mobilenetv1-int8 ground-truth result to fluid, test=develop
      
      * remove debug info, strip ut binary, test=develop
      
      * fix conv compute error, test=develop
      
      * change Init() to ReInitWhenNeeded(), test=develop
      
      * fix code style, test=develop
      
      * remove engine_test, test=develop
      
      * fix building server tests error, test=develop
      
      * fix sdot clang build error, test=develop
      
      * fix sgemm ut timeout error, test=develop
      
      * fix clang build error, test=develop
      
      * turn off math basic test due to ci time out, test=develop
      
      * fix conv_int8 ut error, test=develop
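The "float bias" thread running through the commit above boils down to how an int32 accumulator from an int8 GEMM/conv is turned back into a usable value. A hedged sketch of that dequantization step (standard int8 epilogue math, not the actual NEON kernels):

```cpp
#include <cstdint>
#include <vector>

// Turn int32 accumulators from an int8 GEMM/conv into fp32 outputs,
// adding a per-output-channel fp32 bias after dequantization.
// scale[c] is typically scale_input * scale_weight[c].
std::vector<float> DequantizeWithFloatBias(const std::vector<int32_t>& acc,
                                           const std::vector<float>& scale,
                                           const std::vector<float>& bias,
                                           int channels, int hw) {
  std::vector<float> out(acc.size());
  for (int c = 0; c < channels; ++c) {
    for (int i = 0; i < hw; ++i) {
      int idx = c * hw + i;
      out[idx] = static_cast<float>(acc[idx]) * scale[c] + bias[c];
    }
  }
  return out;
}
```

An int8 output path would additionally requantize each value with an output scale; that step is omitted here.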
  17. 17 Sep 2019 (3 commits)
  18. 16 Sep 2019 (1 commit)
    • GRU op (#2002) · eb42f9ee
      Committed by lhl960107
      * add x86 gru && relu && sequence_expand_as ops, test=develop
  19. 12 Sep 2019 (2 commits)
  20. 11 Sep 2019 (1 commit)
  21. 10 Sep 2019 (2 commits)
  22. 09 Sep 2019 (1 commit)
  23. 07 Sep 2019 (1 commit)
  24. 06 Sep 2019 (2 commits)
    • modify reshape2 OP, test=develop (#1963) · febfd7d6
      Committed by huzhiqiang
      modify reshape2 OP to add a shape_tensor input
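The reshape2 change above lets the target shape arrive as a runtime tensor input (shape_tensor) instead of only as an attribute. A sketch of the usual shape resolution, assuming Paddle's reshape conventions (0 copies the corresponding input dim, a single -1 is inferred from the element count); the real op validates its inputs more thoroughly:

```cpp
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// Resolve a reshape target: prefer the runtime shape tensor if present,
// otherwise fall back to the `shape` attribute. 0 keeps the input dim at
// that position and one -1 entry is inferred from the total element count.
std::vector<int64_t> ResolveReshape(const std::vector<int64_t>& in_dims,
                                    const std::vector<int64_t>& shape_attr,
                                    const std::vector<int64_t>* shape_tensor) {
  std::vector<int64_t> target = shape_tensor ? *shape_tensor : shape_attr;
  int64_t in_numel = std::accumulate(in_dims.begin(), in_dims.end(),
                                     int64_t{1}, std::multiplies<int64_t>());
  int64_t known = 1;
  int infer_idx = -1;
  for (size_t i = 0; i < target.size(); ++i) {
    if (target[i] == 0) target[i] = in_dims[i];  // copy the input dim
    if (target[i] == -1) { infer_idx = static_cast<int>(i); continue; }
    known *= target[i];
  }
  if (infer_idx >= 0) target[infer_idx] = in_numel / known;
  return target;
}
```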
    • add cudnn conv fp32, int8 support (#1974) · f3124b30
      Committed by Zhaolong Xing
      * paddle lite cuda init:
      can run a model with leaky_relu
      
      * add the missing file.
      test=develop
      
      * add the load from memory interface.
      test=develop
      
      * refine this pr. fix comments
      fix ci error
      test=develop
      
      * conv impl
      fp32:
      conv, conv+bias, conv+bias+relu, conv+bias+leaky_relu
      
      int8:
      conv, conv+bias+relu (int8 or fp32 output), conv+bias+leaky_relu (int8 or fp32 output)
      
      can run conv+bias+relu using cxx_api
      test=develop
      
      * move the lite/cuda/math to backends/cuda/math
      test=develop
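The fused variants listed above (conv+bias, conv+bias+relu, conv+bias+leaky_relu) differ only in the epilogue applied to the convolution result. A hedged host-side sketch of that epilogue (the real code dispatches to cuDNN/CUDA kernels; the leaky slope parameter here is illustrative):

```cpp
#include <algorithm>
#include <vector>

enum class Act { kNone, kRelu, kLeakyRelu };

// Apply per-output-channel bias plus an optional activation to a conv
// output for one sample laid out channel-major (C x H*W).
void ConvEpilogue(std::vector<float>* out, const std::vector<float>& bias,
                  int channels, int hw, Act act, float leaky_alpha = 0.01f) {
  for (int c = 0; c < channels; ++c) {
    for (int i = 0; i < hw; ++i) {
      float v = (*out)[c * hw + i] + bias[c];
      if (act == Act::kRelu) v = std::max(v, 0.f);
      else if (act == Act::kLeakyRelu) v = v > 0.f ? v : leaky_alpha * v;
      (*out)[c * hw + i] = v;
    }
  }
}
```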
  25. 03 Sep 2019 (2 commits)
    • enhance interpolate op when there is no "scale" (#1957) · 3191ec5e
      Committed by zhupengyang
      test=develop
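The interpolate enhancement above concerns how the output size is chosen when the `scale` attribute is absent. A sketch of the usual priority, assuming Paddle's interpolate conventions (a runtime OutSize tensor wins, then a positive scale, then the out_h/out_w attributes):

```cpp
#include <utility>

// Decide the output H/W of an interpolate op.
// Assumed priority: runtime OutSize tensor > positive scale > out_h/out_w attrs.
std::pair<int, int> InterpOutputSize(int in_h, int in_w, float scale,
                                     int out_h_attr, int out_w_attr,
                                     const int* out_size /* 2 ints or nullptr */) {
  if (out_size != nullptr) return {out_size[0], out_size[1]};
  if (scale > 0.f) {
    return {static_cast<int>(in_h * scale), static_cast<int>(in_w * scale)};
  }
  return {out_h_attr, out_w_attr};
}
```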
    • rewrite multiclass_nms according to fluid, test=develop (#1945) · deaddf9d
      Committed by juncaipeng
      * add ops for faster rcnn
      
      * disable test for generate_proposals and roi_align, test=develop
      
      * remove .swp file
      
      * remove log in tensor slice
      
      * finish the unit test for roi_align, test=develop
      
      * add box_clip op and fix tensor slice bug
      
      * remove duplicate addition of four ops
      
      * rewrite the implementation of box_coder and sequence_expand, add faster_rcnn_test, test=develop
      
      * fix test bug of box_clip in x86 server, test=develop
      
      * rewrite multiclass_nms according to fluid, test=develop
      
      * fix param load bug in box_coder and multiclass_nms op, test=develop
      
      * fix value transfer error in multiclass_nms, test=develop
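multiclass_nms in Fluid runs, per class, a greedy NMS over score-sorted boxes and then keeps the top keep_top_k detections overall. A hedged sketch of the per-class core only (score_threshold filtering and the multi-class merge are omitted):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

struct Box { float x1, y1, x2, y2; };

// Intersection-over-union of two axis-aligned boxes.
float IoU(const Box& a, const Box& b) {
  float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
  float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
  float inter = std::max(0.f, ix2 - ix1) * std::max(0.f, iy2 - iy1);
  float area_a = (a.x2 - a.x1) * (a.y2 - a.y1);
  float area_b = (b.x2 - b.x1) * (b.y2 - b.y1);
  float uni = area_a + area_b - inter;
  return uni > 0.f ? inter / uni : 0.f;
}

// Greedy NMS for one class: keep the highest-scoring box, suppress boxes
// that overlap it by more than nms_threshold, repeat. Returns kept indices.
std::vector<int> GreedyNMS(const std::vector<Box>& boxes,
                           const std::vector<float>& scores,
                           float nms_threshold, int nms_top_k) {
  std::vector<int> order(boxes.size());
  std::iota(order.begin(), order.end(), 0);
  std::sort(order.begin(), order.end(),
            [&](int a, int b) { return scores[a] > scores[b]; });

  std::vector<int> kept;
  std::vector<bool> suppressed(boxes.size(), false);
  for (int idx : order) {
    if (suppressed[idx]) continue;
    kept.push_back(idx);
    if (nms_top_k > 0 && static_cast<int>(kept.size()) >= nms_top_k) break;
    for (int other : order) {
      if (!suppressed[other] && other != idx &&
          IoU(boxes[idx], boxes[other]) > nms_threshold) {
        suppressed[other] = true;
      }
    }
  }
  return kept;
}
```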
  26. 02 Sep 2019 (1 commit)
    • Add ops and fix bugs for Faster RCNN (#1942) · 635b4958
      Committed by juncaipeng
      * add ops for faster rcnn
      
      * disable test for generate_proposals and roi_align, test=develop
      
      * remove .swp file
      
      * remove log in tensor slice
      
      * finish the unit test for roi_align, test=develop
      
      * add box_clip op and fix tensor slice bug
      
      * remove duplicate addition of four ops
      
      * rewrite the implementation of box_coder and sequence_expand, add faster_rcnn_test, test=develop
      
      * fix test bug of box_clip in x86 server, test=develop
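box_clip, added in this PR series, simply clamps proposal coordinates to the image boundary. A minimal sketch, assuming flat [x1, y1, x2, y2] boxes and an im_h x im_w image; the real op takes the boundary from an im_info/im_shape input, and the -1 offset used here is an assumption:

```cpp
#include <algorithm>
#include <vector>

// Clamp [x1, y1, x2, y2] boxes in-place so they stay inside an
// im_h x im_w image.
void BoxClip(std::vector<float>* boxes, float im_h, float im_w) {
  for (size_t i = 0; i + 3 < boxes->size(); i += 4) {
    (*boxes)[i + 0] = std::min(std::max((*boxes)[i + 0], 0.f), im_w - 1.f);  // x1
    (*boxes)[i + 1] = std::min(std::max((*boxes)[i + 1], 0.f), im_h - 1.f);  // y1
    (*boxes)[i + 2] = std::min(std::max((*boxes)[i + 2], 0.f), im_w - 1.f);  // x2
    (*boxes)[i + 3] = std::min(std::max((*boxes)[i + 3], 0.f), im_h - 1.f);  // y2
  }
}
```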