1. 03 Apr 2020, 1 commit
  2. 24 Mar 2020, 1 commit
  3. 22 Feb 2020, 1 commit
  4. 16 Feb 2020, 1 commit
  5. 15 Feb 2020, 1 commit
  6. 14 Feb 2020, 2 commits
  7. 13 Feb 2020, 1 commit
  8. 10 Feb 2020, 1 commit
  9. 06 Feb 2020, 1 commit
    • J
      Support weight quantization (#2791) · 6329a9a2
      Committed by juncaipeng
      * optimize quant_dequant_fuse_pass, test=develop
      
      * update, test=develop
      
      * update, test=develop
      
      * fix bug for accessing the removed node, test=develop
      
      * set the bias of int8 conv as float, test=develop
      
      * support weight quantization, test=develop
      
      * up, test=develop
      
      * up, test=develop
      
      * up, test=develop
      6329a9a2
  10. 08 Jan 2020, 1 commit
  11. 10 Dec 2019, 1 commit
    • W
      modify static_kernel_pass to support select the kernel according to input type (#2488) · 7ef0e7fe
      Committed by Wilber
      Modified the kernel-selection logic: by default, the data type of each lod_tensor is read from the model file, and in the static_kernel_pick pass, a kernel whose input and output types exactly match that data type gets a higher probability of being selected.
      
      - Read the lod_tensor data type from the model file __model__ into cpp::VarDesc
      
      - Added an unordered_map<string, type> field to Program, filled in during Program::PrepareWorkspace
      
      - Modified node.h, changing const Type* to Type*, and assigned the type* for qualifying nodes during SSAGraph::Build
      
      - Added a new rule to static_kernel_pick_pass: if a kernel's input and output types match the types stored in __model__, then score *= 2.
      
      - Supports models that use the sequence_reverse_float kernel (float input and output) and the sequence_reverse_int64 kernel (int64 input and output), selecting the kernel according to the input/output type
      7ef0e7fe
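The scoring rule this commit describes can be sketched roughly as follows. This is an illustrative reconstruction: the names `DataType`, `KernelCandidate`, and `ScoreKernel` are assumptions for the sketch, not Paddle Lite's actual types or API.

```cpp
#include <string>

// Illustrative stand-ins for the data types read from __model__.
enum class DataType { kFloat, kInt64, kUnknown };

// A candidate kernel with declared input/output types (hypothetical struct).
struct KernelCandidate {
  std::string name;
  DataType in_type;
  DataType out_type;
};

// New rule sketched from the commit description: if the kernel's input and
// output types both match the data type recorded in the model file, double
// its score so it is more likely to be picked.
int ScoreKernel(const KernelCandidate& k, DataType model_type, int base_score) {
  int score = base_score;
  if (k.in_type == model_type && k.out_type == model_type) {
    score *= 2;
  }
  return score;
}
```

Under this rule, a float model would prefer `sequence_reverse_float` over `sequence_reverse_int64`, since only the former's types match the recorded data type.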
  12. 18 Nov 2019, 1 commit
    • Y
      [LITE][OPENCL] Enable full and light api for OpenCL (#2331) · d242bdfb
      Committed by Yuan Shuai
      * Fix bug where targets for kHost and kARM are not equal. test=develop
      
      * Fix license. test=develop
      
      * add debug -g option. test=develop
      
      * enable opencl demo. test=develop
      
      * Fix model_optimize_tool finding no opencl kernel. test=develop
      
      * add more vlog. test=develop
      
      * remove macro LITE_WITH_OPENCL, LITE_WITH_FPGA in passes. test=develop
      
      * Fix valid_places in mobilenetv1_test. test=develop
      
      * Fix bug of finding no real output of fetch after tool OPs in optimizer passes. test=develop
      
      * Fix vlog as log message in model_optimize_tool. test=develop
      
      * fix miscs. test=develop
      
      * fix comment. test=develop
      
      * Fix misspelled opencl, fpga kernel names in lite/api/CMakeLists.txt. test=develop
      
      * add opencl macro in full_api of demo. test=develop
      d242bdfb
  13. 23 Oct 2019, 1 commit
  14. 21 Oct 2019, 1 commit
  15. 15 Oct 2019, 1 commit
    • H
      [NPU] Fix and refine the supporting of multi NPU models (#2037) · 7a731b7f
      Committed by hong19860320
      * [NPU] Fix the bug of loading multi NPU models
      test=develop
      
      * [NPU] Use lite tensor to store NPU model, fix the management of multi NPU models, support loading NPU model from memory and reduce the modification of framework
      test=develop
      
      * [NPU] Remove redundant header files for NPU bridges,
      test=develop
      
      * [NPU] fix NPU deps
      test=develop
      
      * [NPU] refine the compiling script for NPU
      test=develop
      
      * [NPU] remove redundant subdirectory in lite/CMakeLists.txt
      test=develop
      
      * [NPU] Fix and refine NPU test case
      test=develop
      
      * [NPU] Revert the modifications of other non-NPU modules
      test=develop
      
      * [NPU] Remove NPU bridges if target is tiny publish
      test=develop
      7a731b7f
  16. 19 Sep 2019, 2 commits
  17. 12 Sep 2019, 1 commit
  18. 06 Sep 2019, 1 commit
    • Z
      add cudnn conv fp32, int8 support (#1974) · f3124b30
      Committed by Zhaolong Xing
      * paddle lite cuda init
      can run model with leaky_relu
      
      * add the missing file.
      test=develop
      
      * add the load from memory interface.
      test=develop
      
      * refine this pr. fix comments
      fix ci error
      test=develop
      
      * conv impl
      fp32:
      conv, conv+bias, conv+bias+relu, conv+bias+leaky_relu
      
      int8:
      conv, conv+bias+relu (int8 or fp32 output), conv+bias+leaky_relu (int8 or fp32 output)
      
      can run conv+bias+relu using cxx_api
      test=develop
      
      * move the lite/cuda/math to backends/cuda/math
      test=develop
      f3124b30
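Per output element, the fused conv+bias+relu and conv+bias+leaky_relu variants listed above amount to the following epilogue math. This is a plain C++ illustration of the computation under assumed semantics, not the cuDNN-based CUDA kernels this commit adds.

```cpp
#include <algorithm>

// What a fused conv+bias+activation epilogue computes for one output value,
// after the convolution itself has produced conv_out.
float FusedEpilogue(float conv_out, float bias, bool leaky, float alpha) {
  float v = conv_out + bias;          // conv + bias
  if (leaky) {
    return v > 0.0f ? v : alpha * v;  // conv + bias + leaky_relu
  }
  return std::max(v, 0.0f);           // conv + bias + relu
}
```

Fusing the bias add and activation into the conv kernel's epilogue avoids writing the intermediate result to memory and reading it back for each elementwise op.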
  19. 03 Sep 2019, 1 commit
  20. 01 Sep 2019, 1 commit
  21. 30 Aug 2019, 1 commit
    • Z
      add precision and persistable attrs for the tensor. (#1899) · e2e07fa4
      Committed by Zhen Wang
      * Add precision and persistable attrs for the tensor. And fix cxx light and full api demo.
      
      * update precision2string methods. test=develop
      
      * move the save logic to the front of the run in mobilenetv1_full_api.cc, test=develop.
      
      * add comments for UpdateVarsOfProgram. test=develop
      e2e07fa4
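A minimal sketch of the two tensor attributes this commit adds. The enum values and the `Precision2String` helper are illustrative guesses at the shape of the change, not the exact Paddle Lite API.

```cpp
#include <string>

// Illustrative precision attribute for a tensor.
enum class PrecisionType { kFloat, kInt8, kInt32, kUnknown };

// Rough counterpart of the precision2string methods mentioned above.
const char* Precision2String(PrecisionType p) {
  switch (p) {
    case PrecisionType::kFloat: return "float";
    case PrecisionType::kInt8:  return "int8";
    case PrecisionType::kInt32: return "int32";
    default:                    return "unknown";
  }
}

// Tensor carrying the two new attributes: its numeric precision, and whether
// it is persistable (e.g. a weight that should be saved with the model)
// rather than a transient activation.
struct Tensor {
  PrecisionType precision{PrecisionType::kUnknown};
  bool persistable{false};
};
```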
  22. 29 Aug 2019, 4 commits
  23. 28 Aug 2019, 1 commit
  24. 26 Aug 2019, 1 commit
  25. 22 Aug 2019, 1 commit
  26. 16 Aug 2019, 1 commit