- 03 Apr 2020, 1 commit
  Committed by huzhiqiang

- 24 Mar 2020, 1 commit
  Committed by cc

- 22 Feb 2020, 1 commit
  Committed by yiicy
  fix tiny_publish link error when LOG is enabled

- 16 Feb 2020, 1 commit
  Committed by huzhiqiang

- 15 Feb 2020, 1 commit
  Committed by huzhiqiang

- 14 Feb 2020, 2 commits
  Committed by huzhiqiang

  Committed by huzhiqiang

- 13 Feb 2020, 1 commit
  Committed by huzhiqiang

- 10 Feb 2020, 1 commit
  Committed by huzhiqiang

- 06 Feb 2020, 1 commit
  Committed by juncaipeng
  * optimize quant_dequant_fuse_pass, test=develop
  * update, test=develop
  * update, test=develop
  * fix bug of accessing a removed node, test=develop
  * set the bias of int8 conv as float, test=develop
  * support weight quantization, test=develop
  * up, test=develop
  * up, test=develop
  * up, test=develop

- 08 Jan 2020, 1 commit
  Committed by huzhiqiang
  * fix the issue that loading a model consumes too much time, test=develop

- 10 Dec 2019, 1 commit
  Committed by Wilber
  Revised the kernel-picking logic: the data type of each lod_tensor is now read from the model file by default, and in the static_kernel_pick pass a kernel whose input and output types exactly match that data type becomes more likely to be chosen.
  - Read the data type of lod_tensors from the model file __model__ into cpp::VarDesc.
  - Added an unordered_map<string, Type> field to Program and populated it in Program::PrepareWorkspace.
  - Modified node.h, changing const Type* to Type*, and assigned the Type* of qualifying nodes during SSAGraph::Build.
  - Added a new rule to static_kernel_pick_pass: if a kernel's input and output types match the types stored in __model__, its score is doubled (score *= 2).
  - Supports models that use the sequence_reverse_float kernel (float inputs and outputs) and the sequence_reverse_int64 kernel (int64 inputs and outputs), picking the kernel according to the input and output types.
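  The score-doubling rule is small enough to sketch. The following is a minimal illustration only; KernelCandidate, ScoreWithTypeMatch, and the string-typed fields are hypothetical stand-ins, not the actual static_kernel_pick_pass API:

```cpp
#include <string>

// Hypothetical stand-ins for the pass's real data structures; the
// actual Paddle-Lite types (KernelBase, Type, the score bookkeeping)
// differ.
struct KernelCandidate {
  std::string in_type;   // declared data type of the kernel's input
  std::string out_type;  // declared data type of the kernel's output
  float base_score;      // score from the pre-existing pick rules
};

// Applies the rule described in the commit: if a kernel's input and
// output types exactly match the lod_tensor data type read from the
// __model__ file, its score is doubled (score *= 2), making that
// kernel more likely to win the pick.
float ScoreWithTypeMatch(const KernelCandidate& k,
                         const std::string& model_data_type) {
  float score = k.base_score;
  if (k.in_type == model_data_type && k.out_type == model_data_type) {
    score *= 2.0f;
  }
  return score;
}
```
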
- 18 Nov 2019, 1 commit
  Committed by Yuan Shuai
  * Fix bug where target for kHost and kARM is not equal. test=develop
  * Fix license. test=develop
  * Add debug -g option. test=develop
  * Enable opencl demo. test=develop
  * Fix model_optimize_tool finding no opencl kernel. test=develop
  * Add more vlog. test=develop
  * Remove macros LITE_WITH_OPENCL, LITE_WITH_FPGA in passes. test=develop
  * Fix valid_places in mobilenetv1_test. test=develop
  * Fix bug of finding no real output of fetch after tool OPs of optimizer passes. test=develop
  * Fix vlog as log message in model_optimize_tool. test=develop
  * Fix miscs. test=develop
  * Fix comment. test=develop
  * Fix misspelled opencl and fpga kernel names in lite/api/CMakeLists.txt. test=develop
  * Add opencl macro in full_api of demo. test=develop

- 23 Oct 2019, 1 commit
  Committed by 石晓伟
  * update framework.proto
  * add compatibility check, test=develop
  * remove header files, test=develop

- 21 Oct 2019, 1 commit
  Committed by huzhiqiang
  Fix 'Large memory usage of Naive model loading' (#2175)

- 15 Oct 2019, 1 commit
  Committed by hong19860320
  * [NPU] Fix the bug of loading multiple NPU models, test=develop
  * [NPU] Use lite tensor to store the NPU model, fix the management of multiple NPU models, support loading an NPU model from memory, and reduce the modification of the framework, test=develop
  * [NPU] Remove redundant header files for NPU bridges, test=develop
  * [NPU] Fix NPU deps, test=develop
  * [NPU] Refine the compiling script for NPU, test=develop
  * [NPU] Remove redundant subdirectory in lite/CMakeLists.txt, test=develop
  * [NPU] Fix and refine NPU test case, test=develop
  * [NPU] Revoke the modification of other non-NPU modules, test=develop
  * [NPU] Remove NPU bridges if target is tiny publish, test=develop

- 19 Sep 2019, 2 commits
  Committed by 石晓伟
  * add full_api_static target and fix building errors, test=develop
  * fix build errors, test=develop
  * fix code style, test=develop
  * fix lite/model_parser/pb/var_desc.cc, test=develop
  * fix building errors, test=develop
  * modify lite/tools/debug/CMakeLists.txt, test=develop

  Committed by TianXiaogang
  * fix: fix model parser and save bug
  * style: delete debug code
  * fix: fix light_predictor program bug when running a model with a subblock

- 12 Sep 2019, 1 commit
  Committed by huzhiqiang
  * add math functions lstm and selected_rows into lite/x86/math
  * add selected_rows and rw_lock into lite/fluid
  * add lstm_cpu_kernel and lstm_kernel into lite/x86/detail

- 06 Sep 2019, 1 commit
  Committed by Zhaolong Xing
  * paddle lite cuda init can run model with leaky_relu
  * add the missing file. test=develop
  * add the load from memory interface. test=develop
  * refine this pr: fix comments, fix ci error. test=develop
  * conv impl:
    fp32: conv, conv+bias, conv+bias+relu, conv+bias+leaky_relu
    int8: conv, conv+bias+relu (int8 or fp32 output), conv+bias+leaky_relu (int8 or fp32 output)
    can run conv+bias+relu using cxx_api. test=develop
  * move lite/cuda/math to backends/cuda/math. test=develop

- 03 Sep 2019, 1 commit
  Committed by huzhiqiang

- 01 Sep 2019, 1 commit
  Committed by huzhiqiang

- 30 Aug 2019, 1 commit
  Committed by Zhen Wang
  * Add precision and persistable attrs for the tensor, and fix cxx light and full api demos.
  * Update precision2string methods. test=develop
  * Move the save logic to before the run in mobilenetv1_full_api.cc. test=develop
  * Add comments for UpdateVarsOfProgram. test=develop

- 29 Aug 2019, 4 commits
  Committed by tensor-tang

  Committed by sangoly

  Committed by Zhaolong Xing
  * paddle lite cuda init can run model with leaky_relu
  * add the missing file. test=develop
  * add the load from memory interface. test=develop
  * refine this pr: fix comments, fix ci error. test=develop

  Committed by tensor-tang
  * add npu script and tester
  * fix npu armv7 so and refine tests, test=develop
  * update fix and refine log, test=develop
  * refine npu generate api
  * refine npu subgraph
  * refine npu gen and clean code
  * fix model load
  * refine node2rm in subgraph
  * refine the build npu functions, test=develop

- 28 Aug 2019, 1 commit
  Committed by sangoly

- 26 Aug 2019, 1 commit
  Committed by sangoly

- 22 Aug 2019, 1 commit
  Committed by Yan Chunwei

- 16 Aug 2019, 1 commit
  Committed by Yan Chunwei