- 19 Sep 2019, 3 commits

Committed by TianXiaogang
* fix: fix model parser and save bug
* style: delete debug code
* fix: fix light_predictor bug when running a model with a sub-block

Committed by huzhiqiang
(1) Modify the tiny publish .so to make it executable
(2) Fix the compiling bug on armlinux
(3) Merge the 3 full publish .so libraries into 2

Committed by sangoly

- 18 Sep 2019, 1 commit

Committed by Xiaoyang LI
* fix gemm_int8, gemv-int8 and conv-int8 math function, add float bias
* change conv impl
* neon int8 kernel support float bias
* arm compute kernel support float bias
* add math_test target
* add tensor utils for testing, fix sgemm ut error
* add gemm_int8 unit test, support float bias
* fix build script
* add conv compute unit test for arm
* fix build script, test=develop
* fix fp32 dw conv3x3s1, test=develop
* add fp32 dw conv3x3s1, test=develop
* add armv7 fp32 dw conv3x3s1, test=develop
* add fp32 depthwise conv3x3s2, test=develop
* fix fp32 conv3x3 depthwise build error, test=develop
* fix gemm_like conv trans weights error, test=develop
* fix int8 depthwise conv3x3 error, test=develop
* turn on all test for arm fp32 conv, test=develop
* fix int8 conv1x1 error
* fix int8 direct conv3x3s1 error, test=develop
* fix int8 direct conv3x3s2, test=develop
* turn on all test for arm int8 conv, test=develop
* fix int8 fc error, change mobilenetv1-int8 ground-truth result to fluid, test=develop
* remove debug info, strip ut binary, test=develop
* fix conv compute error, test=develop
* change Init() to ReInitWhenNeeded(), test=develop
* fix code style, test=develop
* remove engine_test, test=develop
* fix building server tests error, test=develop
* fix sdot clang build error, test=develop
* fix sgemm ut timeout error, test=develop
* fix clang build error, test=develop
* turn off math basic test due to ci time out, test=develop
* fix conv_int8 ut error, test=develop
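
A common way to support a float bias in int8 GEMM/conv kernels, which many of the items above revolve around, is to keep the int32 accumulator, dequantize it with the combined input/weight scale, add the float bias, and apply the activation in the same pass. A minimal scalar sketch of that epilogue; the function name and per-channel scale layout are illustrative, not the actual Paddle-Lite math functions:

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative epilogue for an int8 GEMM/conv output block: dequantize the
// int32 accumulators, add a float bias, apply ReLU, write float output.
// scale[c] would typically be input_scale * weight_scale[c] (per channel).
void DequantBiasRelu(const int32_t* acc, const float* scale, const float* bias,
                     int channels, int spatial, float* out) {
  for (int c = 0; c < channels; ++c) {
    for (int i = 0; i < spatial; ++i) {
      float v = acc[c * spatial + i] * scale[c] + bias[c];
      out[c * spatial + i] = std::max(v, 0.f);  // fused ReLU
    }
  }
}
```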

- 17 Sep 2019, 1 commit

Committed by sangoly
* [Cxx API] add built-in version info
* update: add version.h.in template
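
For context, a version.h.in template is normally consumed by CMake's configure_file(), which substitutes @VAR@ placeholders at build time. The generated header might look roughly like the sketch below; the macro names and sample values are hypothetical, not the actual Paddle-Lite ones:

```cpp
// Hypothetical output of configuring version.h.in with CMake's configure_file().
// The macro names and the sample values are illustrative only.
#pragma once

#define PADDLE_LITE_VERSION "x.y.z"     // was @PADDLE_LITE_VERSION@ in the template
#define PADDLE_LITE_COMMIT  "0000000"   // was @PADDLE_LITE_COMMIT@ in the template

inline const char* paddle_lite_version() { return PADDLE_LITE_VERSION; }
```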

- 16 Sep 2019, 1 commit

Committed by huzhiqiang

- 13 Sep 2019, 1 commit

Committed by Zhaolong Xing

- 12 Sep 2019, 2 commits

Committed by Yan Chunwei

Committed by Wilber
* add unsqueeze and range op, modify concat op, test=develop
* modify exception in range_test_x86
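
As a reminder of the semantics being added, a range op emits the arithmetic sequence start, start+step, ... up to (but excluding) end. A hedged C++ sketch of that computation; the exact input/attribute handling of the real op may differ:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative range computation: values in [start, end) spaced by step.
std::vector<float> Range(float start, float end, float step) {
  int n = std::max(static_cast<int>(std::ceil((end - start) / step)), 0);
  std::vector<float> out(n);
  for (int i = 0; i < n; ++i) out[i] = start + i * step;
  return out;
}
```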

- 11 Sep 2019, 2 commits

Committed by Yan Chunwei

Committed by liu zhengxi
add slice, reshape, reshape2, squeeze and squeeze2 ops and their unit tests for x86
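
Of these, squeeze is purely a shape transformation: it drops size-1 dimensions, optionally restricted to a given axes list (squeeze2 is the variant that also records the original shape). A small sketch of the usual shape inference, not the x86 kernel itself:

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Illustrative squeeze shape inference: remove size-1 dims, or only the
// listed axes when `axes` is non-empty.
std::vector<int64_t> SqueezeShape(const std::vector<int64_t>& dims,
                                  const std::vector<int>& axes) {
  std::set<int> listed(axes.begin(), axes.end());
  std::vector<int64_t> out;
  for (int i = 0; i < static_cast<int>(dims.size()); ++i) {
    bool drop = axes.empty() ? (dims[i] == 1) : (listed.count(i) > 0);
    if (!drop) out.push_back(dims[i]);
  }
  return out;
}
```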

- 10 Sep 2019, 3 commits

Committed by Wilber

Committed by juncaipeng
* add assign_value op, arm kernel and test, add fluid_type, test=develop
* add hard_sigmoid, test=develop
* use image and new implementation to test detection model, delete faster_rcnn_test, test=develop
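
hard_sigmoid is a piecewise-linear approximation of the sigmoid, y = clip(slope * x + offset, 0, 1). A one-function sketch; the slope/offset defaults shown are the commonly used 0.2 and 0.5 and may not match the op's attributes exactly:

```cpp
#include <algorithm>

// Illustrative hard_sigmoid: piecewise-linear sigmoid approximation.
inline float HardSigmoid(float x, float slope = 0.2f, float offset = 0.5f) {
  return std::min(1.f, std::max(0.f, slope * x + offset));
}
```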

Committed by Xiaoyang LI
fix model_optimize_tool error when using host kernel, fix reshape op build error on ios, test=develop (#1984)

- 09 Sep 2019, 2 commits

Committed by juncaipeng
* add assign_value op, arm kernel and test, add fluid_type, test=develop
* add hard_sigmoid, test=develop

Committed by Pei Yang
* add nearest_interp_cuda kernel, test=develop
* add concat op and elementwise_add op
* remove eigen dependency from nearest_interp cuda kernel, test=develop
* free cuda pointers, test=develop
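
Dropping the Eigen dependency from nearest_interp essentially means writing the index mapping by hand. A CPU reference of nearest-neighbor resizing for an NCHW tensor is sketched below; the align_corners handling is simplified and illustrative rather than a copy of the CUDA kernel:

```cpp
#include <algorithm>

// Illustrative nearest-neighbor resize for an NCHW tensor (CPU reference).
// ratio_h / ratio_w map output coordinates back to input coordinates.
void NearestInterp(const float* in, int n, int c, int in_h, int in_w,
                   float* out, int out_h, int out_w) {
  const float ratio_h = static_cast<float>(in_h) / out_h;
  const float ratio_w = static_cast<float>(in_w) / out_w;
  for (int b = 0; b < n * c; ++b) {
    const float* src = in + b * in_h * in_w;
    float* dst = out + b * out_h * out_w;
    for (int oy = 0; oy < out_h; ++oy) {
      const int iy = std::min(static_cast<int>(oy * ratio_h), in_h - 1);
      for (int ox = 0; ox < out_w; ++ox) {
        const int ix = std::min(static_cast<int>(ox * ratio_w), in_w - 1);
        dst[oy * out_w + ox] = src[iy * in_w + ix];
      }
    }
  }
}
```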

- 06 Sep 2019, 2 commits

Committed by zhupengyang
test=develop

Committed by Zhaolong Xing
* paddle lite cuda init can run model with leaky_relu
* add the missing file. test=develop
* add the load from memory interface. test=develop
* refine this pr: fix comments, fix ci error, test=develop
* conv impl:
  fp32: conv, conv+bias, conv+bias+relu, conv+bias+leaky_relu
  int8: conv, conv+bias+relu (int8 or fp32 output), conv+bias+leaky_relu (int8 or fp32 output)
  can run conv+bias+relu using cxx_api, test=develop
* move lite/cuda/math to backends/cuda/math, test=develop
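
The fused variants listed above share one output epilogue: add the bias and apply ReLU or leaky ReLU in the same pass over the convolution result instead of launching separate kernels. A scalar sketch of that epilogue, illustrative rather than the CUDA implementation:

```cpp
#include <algorithm>

// Illustrative fused conv epilogue: per-channel bias plus optional
// ReLU / leaky ReLU, applied right after the convolution accumulation.
enum class Act { kNone, kRelu, kLeakyRelu };

void BiasActEpilogue(float* out, const float* bias, int channels, int spatial,
                     Act act, float alpha /* leaky slope */) {
  for (int c = 0; c < channels; ++c) {
    for (int i = 0; i < spatial; ++i) {
      float v = out[c * spatial + i] + bias[c];
      if (act == Act::kRelu) v = std::max(v, 0.f);
      if (act == Act::kLeakyRelu && v < 0.f) v = alpha * v;
      out[c * spatial + i] = v;
    }
  }
}
```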

- 03 Sep 2019, 3 commits

Committed by huzhiqiang

Committed by juncaipeng
* add ops for faster rcnn
* disable test for generate_proposals and roi_align, test=develop
* remove .swp file
* remove log in tensor slice
* finish the unit test for roi_align, test=develop
* add box_clip op and fix tensor slice bug
* remove the four ops that were added twice
* rewrite the implementation of box_coder and sequence_expand, add faster_rcnn_test, test=develop
* fix test bug of box_clip on x86 server, test=develop
* rewrite multiclass_nms according to fluid, test=develop
* fix param load bug in box_coder and multiclass_nms op, test=develop
* fix value transfer error in multiclass_nms, test=develop
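
Of the detection ops touched here, box_clip has the simplest semantics: clamp each box corner to the image boundary. A per-box sketch, using the common [0, dim - 1] clipping convention (the op's exact boundary handling may differ):

```cpp
#include <algorithm>

// Illustrative box_clip: clamp (xmin, ymin, xmax, ymax) boxes to the image.
void BoxClip(float* boxes, int num_boxes, float im_h, float im_w) {
  for (int i = 0; i < num_boxes; ++i) {
    float* b = boxes + i * 4;
    b[0] = std::min(std::max(b[0], 0.f), im_w - 1.f);  // xmin
    b[1] = std::min(std::max(b[1], 0.f), im_h - 1.f);  // ymin
    b[2] = std::min(std::max(b[2], 0.f), im_w - 1.f);  // xmax
    b[3] = std::min(std::max(b[3], 0.f), im_h - 1.f);  // ymax
  }
}
```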

Committed by huzhiqiang

- 02 Sep 2019, 1 commit

Committed by juncaipeng
* add ops for faster rcnn
* disable test for generate_proposals and roi_align, test=develop
* remove .swp file
* remove log in tensor slice
* finish the unit test for roi_align, test=develop
* add box_clip op and fix tensor slice bug
* remove the four ops that were added twice
* rewrite the implementation of box_coder and sequence_expand, add faster_rcnn_test, test=develop
* fix test bug of box_clip on x86 server, test=develop

- 01 Sep 2019, 1 commit

Committed by huzhiqiang

- 30 Aug 2019, 3 commits

Committed by Pei Yang
add nearest_interp cuda kernel for Paddle-Lite

Committed by hong19860320
* [NPU] add NPU support for the Java API, test=develop
* [NPU] refine build script for NPU compiling, test=develop
* [NPU] fix compiling script for NPU, test=develop

Committed by Zhen Wang
* Add precision and persistable attrs for the tensor, and fix the cxx light and full api demos.
* update precision2string methods. test=develop
* move the save logic ahead of the run in mobilenetv1_full_api.cc, test=develop
* add comments for UpdateVarsOfProgram. test=develop
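
A precision2string helper is typically no more than a switch over the precision enum. A hedged sketch with hypothetical enum values; the real PrecisionType in Paddle-Lite may list different entries:

```cpp
#include <string>

// Hypothetical precision enum and its string mapping; values are illustrative.
enum class PrecisionType { kUnk, kFloat, kInt8, kInt32, kInt64 };

inline std::string Precision2String(PrecisionType p) {
  switch (p) {
    case PrecisionType::kFloat: return "float";
    case PrecisionType::kInt8:  return "int8";
    case PrecisionType::kInt32: return "int32";
    case PrecisionType::kInt64: return "int64";
    default:                    return "unknown";
  }
}
```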

- 29 Aug 2019, 7 commits

Committed by Wilber
* add yolo_box_compute cuda
* move multiclass_nms (arm) to host
* add lod in scale op
* add yolo_box_cuda cmake config
* modify shuffle_channel_fuse and transpose_softmax_transpose_fuse to support running the ssd model. test=develop
* reshape and transpose op don't have xshape output.
* modify yolo_box_compute_cuda, use tensor to manage cuda memory, test=develop
* add yolo_box use kernel, test=develop
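
For orientation, yolo_box decodes raw network outputs into boxes using the standard YOLOv3 formulas: the center comes from a sigmoid offset within the grid cell and the size from an exponential applied to the anchor. The sketch below is that generic decode with anchors assumed to already be in output-image pixels; the real op's handling of anchors, downsample ratio and confidence threshold is omitted:

```cpp
#include <cmath>

// Illustrative YOLOv3-style decode for one cell/anchor prediction t = (tx, ty, tw, th).
inline float Sigmoid(float x) { return 1.f / (1.f + std::exp(-x)); }

void DecodeYoloBox(const float t[4], int cx, int cy, int grid_w, int grid_h,
                   float anchor_w, float anchor_h, int img_w, int img_h,
                   float box[4] /* xmin, ymin, xmax, ymax */) {
  const float x = (cx + Sigmoid(t[0])) / grid_w * img_w;  // box center, pixels
  const float y = (cy + Sigmoid(t[1])) / grid_h * img_h;
  const float w = std::exp(t[2]) * anchor_w;              // anchors in image pixels
  const float h = std::exp(t[3]) * anchor_h;
  box[0] = x - w / 2; box[1] = y - h / 2;
  box[2] = x + w / 2; box[3] = y + h / 2;
}
```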

Committed by sangoly

Committed by liu zhengxi

Committed by Zhaolong Xing
* paddle lite cuda init can run model with leaky_relu
* add the missing file. test=develop
* add the load from memory interface. test=develop
* refine this pr: fix comments, fix ci error, test=develop

Committed by sangoly

Committed by juncaipeng
add ops for faster rcnn, including affine_channel, anchor_generator, generate_proposals and roi_align (#1895)
* add ops for faster rcnn
* disable test for generate_proposals and roi_align, test=develop
* remove .swp file
* remove log in tensor slice
* finish the unit test for roi_align, test=develop
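
affine_channel is the simplest of these: a per-channel scale and bias over an NCHW tensor, out[n][c][h][w] = scale[c] * x[n][c][h][w] + bias[c]. A sketch of that computation (illustrative, not the registered kernel):

```cpp
// Illustrative affine_channel for NCHW layout:
// out = scale[c] * x + bias[c], applied element-wise per channel.
void AffineChannel(const float* x, const float* scale, const float* bias,
                   int n, int c, int h, int w, float* out) {
  const int spatial = h * w;
  for (int i = 0; i < n; ++i) {
    for (int j = 0; j < c; ++j) {
      const float* src = x + (i * c + j) * spatial;
      float* dst = out + (i * c + j) * spatial;
      for (int k = 0; k < spatial; ++k) dst[k] = scale[j] * src[k] + bias[j];
    }
  }
}
```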

Committed by tensor-tang
* add npu script and tester
* fix npu armv7 so and refine tests, test=develop
* update fix and refine log, test=develop
* refine npu generate api
* refine npu subgraph
* refine npu gen and clean code
* fix model load
* refine node2rm in subgraph
* refine the build npu functions, test=develop

- 28 Aug 2019, 4 commits

Committed by huzhiqiang

Committed by zhupengyang
* add transpose-softmax-transpose fuse pass, test=develop
* enable supported lite-npu ops, test=develop
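
The transpose-softmax-transpose pattern shows up when a model computes softmax over a non-last axis by moving that axis to the end, applying softmax, and moving it back; since the result is identical to a softmax taken directly over the original axis, the pass can collapse the three ops into one. A reference of softmax over an arbitrary axis, viewing the tensor as [outer, axis_size, inner] (illustrative, not the fuse pass itself):

```cpp
#include <algorithm>
#include <cmath>

// Illustrative softmax over the middle dim of a tensor viewed as
// [outer, axis_size, inner]; equivalent to transpose + softmax + transpose.
void SoftmaxAxis(const float* in, float* out, int outer, int axis_size, int inner) {
  for (int o = 0; o < outer; ++o) {
    for (int i = 0; i < inner; ++i) {
      const float* src = in + o * axis_size * inner + i;
      float* dst = out + o * axis_size * inner + i;
      float max_v = src[0];
      for (int a = 1; a < axis_size; ++a) max_v = std::max(max_v, src[a * inner]);
      float sum = 0.f;
      for (int a = 0; a < axis_size; ++a) {
        dst[a * inner] = std::exp(src[a * inner] - max_v);
        sum += dst[a * inner];
      }
      for (int a = 0; a < axis_size; ++a) dst[a * inner] /= sum;
    }
  }
}
```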

Committed by sangoly

Committed by sangoly

- 27 Aug 2019, 1 commit

Committed by Zhaolong Xing
* paddle lite cuda init can run model with leaky_relu
* add the missing file. test=develop

- 26 Aug 2019, 2 commits

Committed by juncaipeng