1. 06 Nov, 2019 · 1 commit
  2. 05 Nov, 2019 · 1 commit
  3. 04 Nov, 2019 · 1 commit
  4. 01 Nov, 2019 · 2 commits
  5. 30 Oct, 2019 · 2 commits
  6. 29 Oct, 2019 · 5 commits
  7. 28 Oct, 2019 · 2 commits
    • [XPU] add elementwise, pool, softmax op bridges and unit tests (#2264) · eed7a506
      zhupengyang authored
      test=develop
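      The op bridges above map lite ops onto the XPU backend. As a reminder of what the softmax op itself computes (a reference sketch only, not the bridge code; the helper name softmax_row is illustrative), here is a numerically stable softmax over one row:

      ```cpp
      #include <algorithm>
      #include <cmath>

      // Reference softmax over one contiguous row of n floats.
      // Subtracting the row max keeps exp() from overflowing.
      void softmax_row(const float* in, float* out, int n) {
        float max_v = *std::max_element(in, in + n);
        float sum = 0.f;
        for (int i = 0; i < n; ++i) {
          out[i] = std::exp(in[i] - max_v);
          sum += out[i];
        }
        for (int i = 0; i < n; ++i) out[i] /= sum;
      }
      ```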
    • [LITE][XPU] initial support for XPU (#2202) · 06d058fe
      hong19860320 authored
      * Initial support for XPU
      * Fix compiling errors of XPU
      * Move XPU op kernel bridges from backends to kernels to fix deps order
      * Change the namespace and directory of XPU bridges
      * Add XPU SDK
      * Fix header files and namespace of XPU SDK
      * Add unit tests for relu and conv2d ops
      * Restore the modification of paddle_api_test
      * Support a simple model which contains only a relu layer
      * Add compiling scripts for XPU
      * Fix compiling errors of XPU
      * Add comments for XPU LoadModel and BuildModel
  8. 24 Oct, 2019 · 1 commit
    • Make inceptionv4, resnet50, googlenet run on the x86 platform (#2250) · edb4ea9a
      liu zhengxi authored
      * make inceptionv4, resnet50, googlenet run on the x86 platform and fix the compare part in x86 unittests (see the comparison sketch below), test=develop
      
      * fix googlenet tests for benchmark record, test=develop
      
      * [framework][profile] fix profile dump bug when op is feed or fetch, test=develop (sangoly)
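      The "compare part" of the x86 unit tests mentioned above checks kernel outputs against reference results within a tolerance. A minimal sketch of such a check; the name all_close and the rtol/atol defaults are illustrative assumptions, not the actual test code:

      ```cpp
      #include <cmath>
      #include <cstdio>
      #include <vector>

      // Element-wise comparison: a value passes if it is close to the
      // reference either absolutely (atol) or relatively (rtol).
      bool all_close(const std::vector<float>& out, const std::vector<float>& ref,
                     float rtol = 1e-4f, float atol = 1e-5f) {
        if (out.size() != ref.size()) return false;
        for (size_t i = 0; i < out.size(); ++i) {
          float diff = std::fabs(out[i] - ref[i]);
          if (diff > atol && diff > rtol * std::fabs(ref[i])) {
            std::printf("mismatch at %zu: %f vs %f\n", i, out[i], ref[i]);
            return false;
          }
        }
        return true;
      }
      ```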
  9. 23 Oct, 2019 · 5 commits
  10. 22 Oct, 2019 · 1 commit
    • Transformer PR (#2214) · f0a6c1eb
      TianXiaogang authored
      * feat: add beam_search_special function to support nlp models
      
      * fix: add beam_search_compute kernel input and output
      
      * feat: add assign op & copy_compute kernel
      
      * feat: add fill_const_batch_size_like op & kernel
      
      * feat: add layer_norm op and kernel and ut
      
      * fix: fix some bugs
          fix mul_op infer_shape bug when x_dim_idx = 2, x_dims.size()=3 & y_dim_idx = 1, y_dims.size()=2
          fix elementwise_compute bug when y axis is all 1
          fix beam_search choose math_func wrong bug
          fix layer_norm get attr bug
          fix fill_constant_batch_size_like shape_set bug
      
      * feat: add gather op and kernel & transformer ut
      
      * feat: add ops and fix bugs to support the transformer model
             fix type_cast passes to skip `while`
             fix elementwise infer_shape bug when x.dims=3 and y.dims={1} & axis=0 (axis semantics are sketched below this entry)
             fix lookup_table compute bug
             fix read_from_array/beam_search/increment/compare/gather ops data_type problems
      
      * fix:
          transformer ut: add word read interface
          fix copy/gather/norm/layer_norm include path problem

      * fix: debug info
      
      * fix: fix input reshape bug
      
      * fix: fix norm bug
      
      * style: style fix & test=develop
      
      * style: fix operators cmakelist
      
      * style: fix operators cmakelist; test=develop
      
      * fix and test=develop
      
      * fix and test=develop
      
      * style: style fix; test=develop
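      Several of the fixes above involve elementwise broadcasting with an explicit axis attribute: y's dimensions are aligned to x's dimensions starting at axis, and the remaining dimensions of x are iterated over. The sketch below illustrates that semantics for the common case, assuming y's dims exactly match x's dims from axis onward; degenerate shapes such as y.dims={1} need extra handling, which is what the fixes above address. The helper name elementwise_add_broadcast and the example shapes are illustrative, not Paddle-Lite code:

      ```cpp
      #include <cstdint>
      #include <vector>

      // Broadcast-add y onto x, where y's shape matches x's shape starting at
      // `axis`. Example: x dims {2, 3, 4}, y dims {3}, axis = 1.
      std::vector<float> elementwise_add_broadcast(const std::vector<float>& x,
                                                   const std::vector<int64_t>& x_dims,
                                                   const std::vector<float>& y,
                                                   const std::vector<int64_t>& y_dims,
                                                   int axis) {
        int64_t pre = 1, n = 1, post = 1;
        for (int i = 0; i < axis; ++i) pre *= x_dims[i];
        for (size_t i = 0; i < y_dims.size(); ++i) n *= y_dims[i];
        for (size_t i = axis + y_dims.size(); i < x_dims.size(); ++i) post *= x_dims[i];
        std::vector<float> out(x.size());
        for (int64_t i = 0; i < pre; ++i)
          for (int64_t j = 0; j < n; ++j)
            for (int64_t k = 0; k < post; ++k) {
              int64_t idx = (i * n + j) * post + k;
              out[idx] = x[idx] + y[j];
            }
        return out;
      }
      ```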
  11. 21 Oct, 2019 · 2 commits
  12. 18 Oct, 2019 · 1 commit
  13. 17 Oct, 2019 · 4 commits
  14. 16 Oct, 2019 · 1 commit
  15. 15 Oct, 2019 · 2 commits
    • [LITE][OPENCL] Fix layout, target pass for OpenCL, add macro of CONVERT_TYPE_TO and READ/WRITE image, memory reuse in ResetLazyImage2D (#2170) · 72c11758
      Yuan Shuai authored
      
      * add macro of CONVERT_TYPE_TO and READ/WRITE image. test=develop
      
      * add data type control. test=develop
      
      * fix io op as general layout and precision. test=develop
      
      * Fix memory reuse strategy for opencl image2d. test=develop
      
      * remove std::array, std::map in the opencl backend. test=develop
    • [NPU] Fix and refine support for multiple NPU models (#2037) · 7a731b7f
      hong19860320 authored
      * [NPU] Fix the bug of loading multi NPU models
      test=develop
      
      * [NPU] Use lite tensor to store NPU model, fix the management of multi NPU models, support loading NPU model from memory and reduce the modification of framework
      test=develop
      
      * [NPU] Remove redundant header files for NPU bridges,
      test=develop
      
      * [NPU] fix NPU deps
      test=develop
      
      * [NPU] refine the compiling script for NPU
      test=develop
      
      * [NPU] remove redundant subdirectory in lite/CMakeLists.txt
      test=develop
      
      * [NPU] Fix and refine NPU test case
      test=develop
      
      * [NPU] revert the modification of other non-NPU modules
      test=develop
      
      * [NPU] Remove NPU bridges if target is tiny publish
      test=develop
  16. 14 Oct, 2019 · 3 commits
  17. 12 Oct, 2019 · 1 commit
  18. 11 Oct, 2019 · 3 commits
    • add rsqrt op, test=develop (#2176) · dfce4621
      juncaipeng authored
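      For reference, rsqrt is the element-wise reciprocal square root, out[i] = 1 / sqrt(x[i]). A minimal standalone sketch of the computation, not the Paddle-Lite kernel itself:

      ```cpp
      #include <cmath>
      #include <vector>

      // Element-wise reciprocal square root. Inputs are expected to be positive;
      // optimized kernels typically use a hardware rsqrt estimate refined by a
      // Newton-Raphson step instead of a full sqrt and divide.
      std::vector<float> rsqrt(const std::vector<float>& x) {
        std::vector<float> out(x.size());
        for (size_t i = 0; i < x.size(); ++i) out[i] = 1.0f / std::sqrt(x[i]);
        return out;
      }
      ```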
    • CUDA: can run yolov3 int8 (#2172) · 7931104f
      Zhaolong Xing authored
      * add conv int8 support (for cases where the input or output channel count is not a multiple of 4);
      add add_kernel for cuda.
      
      * can run yolov3 fp32
      test=develop
      
      * 1. fix bug with yolov3 run
      test=develop
      
      * can run yolov3 int8 test=develop
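      As background on the int8 path above: symmetric per-tensor quantization maps float values to int8 with a scale derived from the maximum absolute value. A minimal sketch of that mapping, for illustration only; the function name is made up, and the PR's actual scale handling and the channel-not-a-multiple-of-4 specialization are not shown:

      ```cpp
      #include <algorithm>
      #include <cmath>
      #include <cstdint>
      #include <vector>

      // Symmetric per-tensor quantization: q = round(x / scale), scale = max|x| / 127.
      void quantize_int8(const std::vector<float>& x, std::vector<int8_t>* q, float* scale) {
        float max_abs = 0.f;
        for (float v : x) max_abs = std::max(max_abs, std::fabs(v));
        *scale = max_abs > 0.f ? max_abs / 127.f : 1.f;
        q->resize(x.size());
        for (size_t i = 0; i < x.size(); ++i) {
          float r = std::round(x[i] / *scale);
          (*q)[i] = static_cast<int8_t>(std::max(-127.f, std::min(127.f, r)));
        }
      }
      ```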
    • [LITE][OPENCL] support image2d type (#2158) · 77cdbdce
      Yuan Shuai authored
      * [LITE][OPENCL] support image2d. test=develop
      
      * add context changes that take image* into account. test=develop
      
      * add layout, relu image kernels. test=develop
      
      * replace image_data with data, mutable_image_data with mutable_data, test=develop
      
      * comment unused var. test=develop
      
      * remove unused var. test=develop
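      For context on the image2d type above: OpenCL image2d objects go through the texture path rather than plain buffers, which typically gives access to the texture cache and built-in sampling. A minimal host-side sketch of allocating an RGBA half-float image with the standard OpenCL C API; the helper name and the way dimensions are chosen are illustrative, and this is not the Paddle-Lite allocator:

      ```cpp
      #include <CL/cl.h>

      // Allocate a 2D image of half4 texels (tensor channels are commonly folded
      // into the RGBA components). `context` must be a valid cl_context.
      cl_mem create_half_image2d(cl_context context, size_t width, size_t height) {
        cl_image_format format = {CL_RGBA, CL_HALF_FLOAT};
        cl_image_desc desc = {};
        desc.image_type = CL_MEM_OBJECT_IMAGE2D;
        desc.image_width = width;
        desc.image_height = height;
        cl_int err = CL_SUCCESS;
        cl_mem img = clCreateImage(context, CL_MEM_READ_WRITE, &format, &desc,
                                   /*host_ptr=*/nullptr, &err);
        return err == CL_SUCCESS ? img : nullptr;
      }
      ```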
  19. 10 Oct, 2019 · 1 commit
  20. 09 Oct, 2019 · 1 commit
    • improve dw conv performance · 4b9df8fb
      yiicy authored
      * improve prepack_input func speed in int8 3x3s1 dw conv
      
      * fix code style
      
      * fix code style
      
      * improve 3x3s1 dw fp32 conv speed a little
      
      * arm add 5x5s1 int8 dw conv, test=develop
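      For reference on what the hand-tuned depthwise kernels above compute: in a depthwise convolution each input channel is convolved with its own single filter, with no accumulation across channels. A naive single-channel sketch for stride 1 and no padding, for illustration only; the ARM kernels are heavily vectorized and blocked versions of this loop, and the function name is illustrative:

      ```cpp
      // Naive depthwise conv for one channel: stride 1, no padding, square k x k filter.
      // The output is (h - k + 1) x (w - k + 1); every channel has its own filter.
      void dw_conv_channel(const float* in, int h, int w,
                           const float* filter, int k, float* out) {
        int oh = h - k + 1, ow = w - k + 1;
        for (int y = 0; y < oh; ++y)
          for (int x = 0; x < ow; ++x) {
            float acc = 0.f;
            for (int fy = 0; fy < k; ++fy)
              for (int fx = 0; fx < k; ++fx)
                acc += in[(y + fy) * w + (x + fx)] * filter[fy * k + fx];
            out[y * ow + x] = acc;
          }
      }
      ```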