- 24 July 2020, 1 commit
  Committed by Wilber
- 21 October 2019, 1 commit
  Committed by myq406450149
  * add GPU kernels for mul, pool, relu, scale, softmax, dropout, and bilinear_interp; can run on TX2
  * rm GREATER_EQUAL
- 16 October 2019, 1 commit
  Committed by Zhaolong Xing
  * init: delete feed and fetch op, use zero copy test=develop
  * delete the unused test test=develop
- 11 October 2019, 1 commit
  Committed by Zhaolong Xing
  * add conv int8 support (for cases where the input or output channel count is not a multiple of 4); add add_kernel for CUDA
  * can run yolov3 fp32 test=develop
  * fix bug with yolov3 run test=develop
  * can run yolov3 int8 test=develop
- 27 September 2019, 1 commit
  Committed by Zhaolong Xing
  * add conv int8 support (for cases where the input or output channel count is not a multiple of 4); add add_kernel for CUDA
  * can run yolov3 fp32 test=develop
  * fix bug with yolov3 run test=develop
- 06 September 2019, 1 commit
  Committed by Zhaolong Xing
  * Paddle Lite CUDA init: can run a model with leaky_relu
  * add the missing file test=develop
  * add the load-from-memory interface test=develop
  * refine this PR: fix comments, fix CI error test=develop
  * conv impl; fp32: conv, conv+bias, conv+bias+relu, conv+bias+leaky_relu; int8: conv, conv+bias+relu (int8 or fp32 output), conv+bias+leaky_relu (int8 or fp32 output); can run conv+bias+relu using cxx_api test=develop
  * move lite/cuda/math to backends/cuda/math test=develop
- 03 September 2019, 1 commit
  Committed by huzhiqiang
- 16 August 2019, 1 commit
  Committed by Yan Chunwei