Unverified commit dddf058b, authored by H HappyAngel, committed by GitHub

Merge pull request #107 from PaddlePaddle/develop

pull code
......@@ -63,6 +63,16 @@ test/models/
test/images/
*.pyc
# model
*.nb
*.svg
*.dot
# vim intermediate files
*.swp
# Emacs intermediate files
*~
......
......@@ -97,7 +97,7 @@ lite_option(LITE_WITH_FPGA "Enable FPGA support in lite" OFF)
lite_option(LITE_WITH_LIGHT_WEIGHT_FRAMEWORK "Enable light-weight framework" OFF)
lite_option(LITE_WITH_PROFILE "Enable profile mode in lite framework" OFF)
lite_option(LITE_WITH_PRECISION_PROFILE "Enable precision profile in profile mode ON in lite" OFF)
lite_option(LITE_SHUTDOWN_LOG "Shutdown log system or not." OFF)
lite_option(LITE_WITH_LOG "Enable log printing or not." ON)
lite_option(LITE_ON_TINY_PUBLISH "Publish tiny predictor lib." OFF)
lite_option(LITE_ON_MODEL_OPTIMIZE_TOOL "Build the model optimize tool" OFF)
# publish options
......
......@@ -61,7 +61,8 @@ For demands of Apple's GPU Metal and web front end inference, please see `./meta
Paddle Lite has referenced the following open-source projects:
- [ARM compute library](https://github.com/ARM-software/ComputeLibrary)
- [Anakin](https://github.com/PaddlePaddle/Anakin). The optimizations under Anakin have been incorporated into Paddle Lite, so there will not be any further updates of Anakin. As another high-performance inference project under PaddlePaddle, Anakin has been forward-looking and helpful to the making of Paddle Lite.
- [Anakin](https://github.com/PaddlePaddle/Anakin). The optimizations under Anakin have been incorporated into Paddle Lite, so there will not be any further updates of Anakin. As another high-performance inference project under PaddlePaddle, Anakin has been forward-looking and helpful to the making of Paddle Lite.
## Feedback and Community Support
......
......@@ -186,8 +186,8 @@ if (LITE_WITH_LIGHT_WEIGHT_FRAMEWORK)
add_definitions("-DLITE_WITH_LIGHT_WEIGHT_FRAMEWORK")
endif()
if (LITE_SHUTDOWN_LOG)
add_definitions("-DLITE_SHUTDOWN_LOG")
if (LITE_WITH_LOG)
add_definitions("-DLITE_WITH_LOG")
endif()
if (LITE_ON_TINY_PUBLISH)
......
......@@ -32,34 +32,3 @@ endif()
message(STATUS "APU_DDK_INC: ${APU_DDK_INC}")
include_directories("${APU_DDK_ROOT}/include")
set(APU_SUB_LIB_PATH "lib64")
if(ARM_TARGET_ARCH_ABI STREQUAL "armv8")
set(APU_SUB_LIB_PATH "lib64")
endif()
find_library(APU_NEURON_FILE NAMES neuron
PATHS ${APU_DDK_ROOT}/${APU_SUB_LIB_PATH})
find_library(APU_NEURON_ADAPTER_FILE NAMES neuron_adapter
PATHS ${APU_DDK_ROOT}/${APU_SUB_LIB_PATH})
if(NOT APU_NEURON_FILE)
message(FATAL_ERROR "Can not find APU_NEURON_FILE in ${APU_DDK_ROOT}")
else()
message(STATUS "Found APU NEURON Library: ${APU_NEURON_FILE}")
add_library(apu_neuron SHARED IMPORTED GLOBAL)
set_property(TARGET apu_neuron PROPERTY IMPORTED_LOCATION ${APU_NEURON_FILE})
endif()
if(NOT APU_NEURON_ADAPTER_FILE)
message(FATAL_ERROR "Can not find APU_NEURON_ADAPTER_FILE in ${APU_DDK_ROOT}")
else()
message(STATUS "Found APU NEURON ADAPTER Library: ${APU_NEURON_ADAPTER_FILE}")
add_library(apu_neuron_adapter SHARED IMPORTED GLOBAL)
set_property(TARGET apu_neuron_adapter PROPERTY IMPORTED_LOCATION ${APU_NEURON_ADAPTER_FILE})
endif()
set(apu_runtime_libs apu_neuron apu_neuron_adapter CACHE INTERNAL "apu runtime libs")
message(STATUS "${apu_runtime_libs}")
......@@ -45,7 +45,7 @@ else()
# we changed the source code to adapt for windows compiling
# git diffs : (1) unsupported/Eigen/CXX11/src/Tensor/TensorBlockV2.h
######################################################################################################
URL https://paddlelite-data.bj.bcebos.com/third_party_libs/eigen-git-mirror-master-9ab917e9db99f5907d086aa73d5f9103.zip
URL http://paddlelite-data.bj.bcebos.com/third_party_libs/eigen-git-mirror-master-9ab917e9db99f5907d086aa73d5f9103.zip
DOWNLOAD_DIR ${EIGEN_SOURCECODE_DIR}
DOWNLOAD_NO_PROGRESS 1
PREFIX ${EIGEN_SOURCE_DIR}
......
......@@ -400,7 +400,7 @@ std::shared_ptr<PaddlePredictor> predictor = CreatePaddlePredictor<MobileConfig>
- `None`
Returns: model structure data held in memory
Returns: model parameter data held in memory
Return type: `const std::string&`
......
# Deploying PaddleLite Inference with Baidu XPU
Paddle Lite supports inference deployment with Baidu XPU on x86 and Arm servers (for example, Phytium FT-2000+/64).
Two integration paths are currently supported: Kernel and subgraph. The subgraph path works like the earlier Huawei NPU support: the Paddle model is loaded and analyzed, Paddle operators are converted into XTCL graph-building API calls, and the model is generated and executed online.
## Current Support
### Supported chips
- Kunlun 818-100 (inference chip)
- Kunlun 818-300 (training chip)
### Supported devices
- K100/K200 Kunlun AI accelerator cards
### Supported Paddle models
- [ResNet50](https://paddlelite-demo.bj.bcebos.com/models/resnet50_fp32_224_fluid.tar.gz)
- [BERT](https://paddlelite-demo.bj.bcebos.com/models/bert_fp32_fluid.tar.gz)
- [ERNIE](https://paddlelite-demo.bj.bcebos.com/models/ernie_fp32_fluid.tar.gz)
- YOLOv3
- Mask R-CNN
- Faster R-CNN
- UNet
- SENet
- SSD
- Internal Baidu production models (details cannot be disclosed for confidentiality reasons)
### Supported (fully or partially) Paddle operators (Kernel path)
- scale
- relu
- tanh
- sigmoid
- stack
- matmul
- pool2d
- slice
- lookup_table
- elementwise_add
- elementwise_sub
- cast
- batch_norm
- mul
- layer_norm
- softmax
- conv2d
- io_copy
- io_copy_once
- __xpu__fc
- __xpu__multi_encoder
- __xpu__resnet50
- __xpu__embedding_with_eltwise_add
### Supported (fully or partially) Paddle operators (subgraph/XTCL path)
- relu
- tanh
- conv2d
- depthwise_conv2d
- elementwise_add
- pool2d
- softmax
- mul
- batch_norm
- stack
- gather
- scale
- lookup_table
- slice
- transpose
- transpose2
- reshape
- reshape2
- layer_norm
- gelu
- dropout
- matmul
- cast
- yolo_box
## Reference Demo
### Test device (K100 Kunlun AI accelerator card)
![baidu_xpu](https://paddlelite-demo.bj.bcebos.com/devices/baidu/baidu_xpu.jpg)
### Prepare the device environment
- K100/K200 Kunlun AI accelerator card [datasheet](https://paddlelite-demo.bj.bcebos.com/devices/baidu/K100_K200_spec.pdf). For a more detailed datasheet or to purchase the product, contact Ouyang Jian at ouyangjian@baidu.com;
- The K100 is a full-length, half-height PCI-E card and the K200 is a full-length, full-height PCI-E card; both require a PCI-E x16 slot and a separate 8-pin power cable;
- Install the K100/K200 driver. Ubuntu and CentOS are currently supported; since the driver depends on the Linux kernel version, make sure to install the driver package matching your kernel.
### Prepare the local build environment
- To keep the build environment consistent, configure the Linux development environment as described in [Compiling from source](../user_guides/source_compile);
- Building the demo requires OpenCV and CMake 3.10.3, which can be installed with the following commands;
```shell
$ sudo apt-get update
$ sudo apt-get install gcc g++ make wget unzip libopencv-dev pkg-config
$ wget https://www.cmake.org/files/v3.10/cmake-3.10.3.tar.gz
$ tar -zxvf cmake-3.10.3.tar.gz
$ cd cmake-3.10.3
$ ./configure
$ make
$ sudo make install
```
### Run the image classification demo
- Download the demo from [https://paddlelite-demo.bj.bcebos.com/devices/baidu/PaddleLite-linux-demo.tar.gz](https://paddlelite-demo.bj.bcebos.com/devices/baidu/PaddleLite-linux-demo.tar.gz); after extraction the contents are:
```shell
- PaddleLite-linux-demo
  - image_classification_demo
    - assets
      - images
        - tabby_cat.jpg # test image
      - labels
        - synset_words.txt # labels for the 1000 classes
      - models
        - resnet50_fp32_224_fluid # ResNet50 float32 model in Paddle fluid non-combined format
          - __model__ # Paddle fluid graph file; drop it into https://lutzroeder.github.io/netron/ to visualize the network structure
          - bn2a_branch1_mean # Paddle fluid parameter files
          - bn2a_branch1_scale
          ...
    - shell
      - CMakeLists.txt # CMake script of the demo
      - build
        - image_classification_demo # prebuilt demo binary for amd64
      - image_classification_demo.cc # demo source code
      - build.sh # demo build script
      - run.sh # demo run script
  - libs
    - PaddleLite
      - amd64
        - include # PaddleLite headers
        - lib
          - libiomp5.so # Intel OpenMP library
          - libmklml_intel.so # Intel MKL library
          - libxpuapi.so # XPU API library, providing device management and operator implementations
          - libxpurt.so # XPU runtime library
          - libpaddle_full_api_shared.so # prebuilt PaddleLite full API library
      - arm64
        - include # PaddleLite headers
        - lib
          - libxpuapi.so # XPU API library, providing device management and operator implementations
          - libxpurt.so # XPU runtime library
          - libpaddle_full_api_shared.so # prebuilt PaddleLite full API library
```
- Enter PaddleLite-linux-demo/image_classification_demo/shell and simply run ./run.sh amd64;
```shell
$ cd PaddleLite-linux-demo/image_classification_demo/shell
$ ./run.sh amd64 # an amd64 build/image_classification_demo is already provided, so the demo can run without rebuilding.
$ ./run.sh arm64 # must be run on an arm64 (FT-2000+/64) server, after executing ./build.sh arm64 there.
...
AUTOTUNE:(12758016, 16, 1, 2048, 7, 7, 512, 1, 1, 1, 1, 0, 0, 0) = 1by1_bsp(1, 32, 128, 128)
Find Best Result in 150 choices, avg-conv-op-time = 40 us
[INFO][XPUAPI][/home/qa_work/xpu_workspace/xpu_build_dailyjob/api_root/baidu/xpu/api/src/wrapper/conv.cpp:274] Start Tuning: (12758016, 16, 1, 512, 7, 7, 512, 3, 3, 1, 1, 1, 1, 0)
AUTOTUNE:(12758016, 16, 1, 512, 7, 7, 512, 3, 3, 1, 1, 1, 1, 0) = wpinned_bsp(1, 171, 16, 128)
Find Best Result in 144 choices, avg-conv-op-time = 79 us
I0502 22:34:18.176113 15876 io_copy_compute.cc:75] xpu to host, copy size 4000
I0502 22:34:18.176406 15876 io_copy_compute.cc:36] host to xpu, copy size 602112
I0502 22:34:18.176697 15876 io_copy_compute.cc:75] xpu to host, copy size 4000
iter 0 cost: 2.116000 ms
I0502 22:34:18.178530 15876 io_copy_compute.cc:36] host to xpu, copy size 602112
I0502 22:34:18.178792 15876 io_copy_compute.cc:75] xpu to host, copy size 4000
iter 1 cost: 2.101000 ms
I0502 22:34:18.180634 15876 io_copy_compute.cc:36] host to xpu, copy size 602112
I0502 22:34:18.180881 15876 io_copy_compute.cc:75] xpu to host, copy size 4000
iter 2 cost: 2.089000 ms
I0502 22:34:18.182726 15876 io_copy_compute.cc:36] host to xpu, copy size 602112
I0502 22:34:18.182976 15876 io_copy_compute.cc:75] xpu to host, copy size 4000
iter 3 cost: 2.085000 ms
I0502 22:34:18.184814 15876 io_copy_compute.cc:36] host to xpu, copy size 602112
I0502 22:34:18.185068 15876 io_copy_compute.cc:75] xpu to host, copy size 4000
iter 4 cost: 2.101000 ms
warmup: 1 repeat: 5, average: 2.098400 ms, max: 2.116000 ms, min: 2.085000 ms
results: 3
Top0 tabby, tabby cat - 0.689418
Top1 tiger cat - 0.190557
Top2 Egyptian cat - 0.112354
Preprocess time: 1.553000 ms
Prediction time: 2.098400 ms
Postprocess time: 0.081000 ms
```
- To test a different image, copy it into PaddleLite-linux-demo/image_classification_demo/assets/images and set IMAGE_NAME in run.sh to its file name;
- To rebuild the demo, simply run ./build.sh amd64 or ./build.sh arm64.
```shell
$ cd PaddleLite-linux-demo/image_classification_demo/shell
$ ./build.sh amd64 # For amd64
$ ./build.sh arm64 # For arm64(FT-2000+/64)
```
### Update the model
- Obtain the ResNet50 float32 model [resnet50_fp32_224_fluid](https://paddlelite-demo.bj.bcebos.com/models/resnet50_fp32_224_fluid.tar.gz) by training with Paddle Fluid or converting with X2Paddle
- Since XPU is usually deployed on the server side, the demo uses PaddleLite's full API to load the original Paddle Fluid model for inference, i.e. the relevant parameters are configured through CXXConfig, as in the sketch below.
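A minimal sketch of such a CxxConfig setup is shown below. This is not the shipped image_classification_demo.cc; the model path, the 1x3x224x224 input shape, and the exact valid-place list are assumptions based on this demo and may differ from the real program.

```c++
#include <vector>
#include "paddle_api.h"  // from the PaddleLite include directory of this demo

using namespace paddle::lite_api;  // NOLINT

int main() {
  // Full API: load the original (non-optimized) Paddle Fluid model.
  CxxConfig config;
  config.set_model_dir("../assets/models/resnet50_fp32_224_fluid");  // assumed path
  config.set_valid_places({
      Place{TARGET(kXPU), PRECISION(kFloat)},
      Place{TARGET(kX86), PRECISION(kFloat)},  // CPU fallback for unsupported ops
      Place{TARGET(kHost), PRECISION(kFloat)},
  });
  auto predictor = CreatePaddlePredictor<CxxConfig>(config);

  // Fill the input tensor with a preprocessed 224x224 image (dummy data here).
  auto input = predictor->GetInput(0);
  input->Resize({1, 3, 224, 224});
  float* data = input->mutable_data<float>();
  for (int i = 0; i < 3 * 224 * 224; ++i) data[i] = 1.0f;

  predictor->Run();

  // The output holds the 1000-class scores.
  auto output = predictor->GetOutput(0);
  const float* scores = output->data<float>();
  (void)scores;
  return 0;
}
```

At run time, LD_LIBRARY_PATH typically needs to include the `lib` directory listed above so that libxpuapi.so, libxpurt.so, and libpaddle_full_api_shared.so can be found.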
### Update the Paddle Lite library with Baidu XPU support
- Download the PaddleLite source code;
```shell
$ git clone https://github.com/PaddlePaddle/Paddle-Lite.git
$ cd Paddle-Lite
$ git checkout <release-version-tag>
```
- Download xpu_toolchain for amd64 or arm64 (FT-2000+/64);
```shell
$ wget <URL_to_download_xpu_toolchain>
$ tar -xvf output.tar.gz
$ mv output xpu_toolchain
```
- Build full_publish for amd64 or arm64 (FT-2000+/64);
```shell
# For amd64. If the build fails with an error about undefined cxx11:: symbols, switch gcc to version 4.8.
$ ./lite/tools/build.sh --build_xpu=ON --xpu_sdk_root=./xpu_toolchain x86
# For arm64 (FT-2000+/64)
$ ./lite/tools/build.sh --arm_os=armlinux --arm_abi=armv8 --arm_lang=gcc --build_extra=ON --build_xpu=ON --xpu_sdk_root=./xpu_toolchain --with_log=ON full_publish
```
- Replace the PaddleLite-linux-demo/libs/PaddleLite/amd64/include directory with the generated build.lite.x86/inference_lite_lib/cxx/include;
- Replace the PaddleLite-linux-demo/libs/PaddleLite/amd64/lib/libpaddle_full_api_shared.so file with the generated build.lite.x86/inference_lite_lib/cxx/lib/libpaddle_full_api_shared.so;
- Replace the PaddleLite-linux-demo/libs/PaddleLite/arm64/include directory with the generated build.lite.armlinux.armv8.gcc/inference_lite_lib.armlinux.armv8.xpu/cxx/include;
- Replace the PaddleLite-linux-demo/libs/PaddleLite/arm64/lib/libpaddle_full_api_shared.so file with the generated build.lite.armlinux.armv8.gcc/inference_lite_lib.armlinux.armv8.xpu/cxx/lib/libpaddle_full_api_shared.so.
## Other Notes
- For further information about the product, contact Ouyang Jian at ouyangjian@baidu.com;
- The Baidu Kunlun team is continuously adapting more Paddle operators so that more Paddle models can be supported.
......@@ -48,7 +48,7 @@ The CUDA build results are located in `build_cuda/inference_lite_lib`
4. `demo` folder: C++ demos.
If the Python option is enabled, `lite_core.so` will be generated under `build_cuda/inference_lite_lib/python/lib/`
If the Python option is enabled, `lite.so` will be generated under `build_cuda/inference_lite_lib/python/lib/`
## Run
......@@ -66,7 +66,7 @@ wget https://paddle-inference-dist.cdn.bcebos.com/PaddleLite/kite.jpg
Step 2: Run
**NOTE:** This example uses the Python API.
**NOTE:** This example uses the Python API.
``` python
#-*- coding: utf-8 -*-
......@@ -75,7 +75,7 @@ import sys
import numpy as np
import cv2
sys.path.append('build_cuda/inference_lite_lib/python/lib')
from lite_core import *
from lite import *
def read_img(im_path, resize_h, resize_w):
im = cv2.imread(im_path).astype('float32')
......
# Deploying PaddleLite Inference with MTK APU
Paddle Lite supports inference deployment on the MTK APU.
The integration works like the earlier Huawei NPU support: the Paddle model is loaded and analyzed, Paddle operators are converted into MTK Neuron Adapter API calls (similar to the Android NN API) to build the network, and the model is generated and executed online.
## Current Support
### Supported chips
- [MT8168](https://www.mediatek.cn/products/tablets/mt8168)/[MT8175](https://www.mediatek.cn/products/tablets/mt8175) and other smart chips.
### Supported devices
- MT8168-P2V1 Tablet.
### Supported Paddle models
- [Fully quantized MobileNetV1](https://paddlelite-demo.bj.bcebos.com/devices/mediatek/mobilenet_v1_int8_224_fluid.tar.gz)
### Supported (fully or partially) Paddle operators
- relu
- conv2d
- depthwise_conv2d
- elementwise_add
- elementwise_mul
- fc
- pool2d
- softmax
## Reference Demo
### Test device (MT8168-P2V1 Tablet)
![mt8168_p2v1_tablet_front](https://paddlelite-demo.bj.bcebos.com/devices/mediatek/mt8168_p2v1_tablet_front.jpg)
![mt8168_p2v1_tablet_back](https://paddlelite-demo.bj.bcebos.com/devices/mediatek/mt8168_p2v1_tablet_back.jpg)
### Prepare the device environment
- A specific firmware version is required. To obtain a test device and the firmware, contact MTK through the official contact page [https://www.mediatek.cn/about/contact-us](https://www.mediatek.cn/about/contact-us) (choose the "Sales" category);
### Prepare the cross-compilation environment
- To keep the build environment consistent, configure the Docker development environment as described in [Compiling from source](../user_guides/source_compile).
### Run the image classification demo
- Download the demo from [https://paddlelite-demo.bj.bcebos.com/devices/mediatek/PaddleLite-android-demo.tar.gz](https://paddlelite-demo.bj.bcebos.com/devices/mediatek/PaddleLite-android-demo.tar.gz); after extraction the contents are:
```shell
- PaddleLite-android-demo
  - image_classification_demo
    - assets
      - images
        - tabby_cat.jpg # test image
      - labels
        - synset_words.txt # labels for the 1000 classes
      - models
        - mobilenet_v1_int8_224_for_cpu.nb # MobileNetV1 quantized model converted by opt for Arm CPU
        - mobilenet_v1_int8_224_for_apu.nb # MobileNetV1 quantized model converted by opt for MTK APU
    - shell # Android shell demo
      - CMakeLists.txt # CMake script of the demo
      - build
        - image_classification_demo # prebuilt Android shell demo binary
      - image_classification_demo.cc # demo source code
      - build.sh # demo build script
      - run.sh # demo run script
    - apk # regular Android application
      - app
        - src
          - main
            - java # Java code
            - cpp # custom JNI implementation
        - app.iml
        - build.gradle
      - gradle
      ...
  - libs
    - PaddleLite
      - arm64-v8a
        - include # PaddleLite headers
        - lib
          - libc++_shared.so
          - libpaddle_light_api_shared.so # prebuilt PaddleLite library
    - OpenCV # OpenCV 4.2 for Android
```
- Android shell demo
  - Enter PaddleLite-android-demo/image_classification_demo/shell and simply run ./run.sh. Note: run.sh cannot be executed inside the Docker environment, otherwise the device may not be found;
  - To test a different image, copy it into PaddleLite-android-demo/image_classification_demo/assets/images and set IMAGE_NAME in run.sh to its file name;
  - To rebuild the demo, simply run ./build.sh. Note: build.sh must be executed inside the Docker environment, otherwise the build may fail;
  - Note that the MTK APU currently only supports the NHWC data layout while PaddleLite uses NCHW by default, so an extra NCHW-to-NHWC conversion of the input tensor is needed during inference, which costs roughly 8-9 ms.
```shell
$ cd PaddleLite-android-demo/image_classification_demo/shell
$ ./run.sh
...
warmup: 5 repeat: 10, average: 30.998502 ms, max: 31.049002 ms, min: 30.937002 ms
results: 3
Top0 Egyptian cat - -0.122845
Top1 tabby, tabby cat - -0.122845
Top2 tiger cat - -0.544028
Preprocess time: 3.620000 ms
Prediction time: 30.998502 ms
Postprocess time: 0.069000 ms
[vpuBuffer] vpuMemAllocator::freeMem: type = 1, va = 0x7ed1b00000, pa = 0xfb3f9000, len = 255
[vpuBuffer] vpuMemAllocator::freeMem: type = 1, va = 0x7ed1af8000, pa = 0xfb3fa000, len = 255
[vpuBuffer] vpuMemAllocator::freeMem: type = 1, va = 0x7ed1af7000, pa = 0xf8ffe000, len = 255
[vpuBuffer] vpuMemAllocator::freeMem: type = 1, va = 0x7ed1af6000, pa = 0xf7bfe000, len = 255
[vpuBuffer] vpuMemAllocator::freeMem: type = 1, va = 0x7ed1af5000, pa = 0xf7bfd000, len = 255
[vpuBuffer] vpuMemAllocator::freeMem: type = 1, va = 0x7ed1b0c000, pa = 0xfb3fe000, len = 255
[vpuBuffer] vpuMemAllocator::freeMem: type = 1, va = 0x7ed1b0b000, pa = 0xfb3ff000, len = 255
[vpuBuffer] vpuMemAllocator::freeMem: type = 1, va = 0x7ed1b0a000, pa = 0xf31ff000, len = 255
[vpuBuffer] vpuMemAllocator::freeMem: type = 1, va = 0x7ed1b09000, pa = 0xfb3f6000, len = 255
[vpuBuffer] vpuMemAllocator::freeMem: type = 1, va = 0x7ed1b08000, pa = 0xf7bff000, len = 255
```
- Regular Android application
  - Install Android Studio 3.4
  - Open Android Studio, click "Open an existing Android Studio project" in the "Welcome to Android Studio" window, navigate to the "image_classification_demo" directory in the path selection dialog, then click "Open" to import the project
  - Connect an Android phone, tablet, or development board via USB;
  - Temporarily disable SELinux so that the app can call system libraries;
```shell
$ adb root
# setenforce 0
```
  - After the project is loaded, click Run->Run 'App' in the menu bar, select the connected Android device in the "Select Deployment Target" dialog, then click "OK";
  - After about one minute (the first run takes longer, please be patient) the app is installed on the device. By default the Arm CPU model is used for inference; since the MT8168 CPU consists of four Arm Cortex-A53 cores, which are much weaker than the A7x-series cores in typical phones, it only reaches 6 fps, as shown below;
![mt8168_p2v1_tablet_cpu](https://paddlelite-demo.bj.bcebos.com/devices/mediatek/mt8168_p2v1_tablet_cpu.jpg)
  - Tap the settings button in the lower right corner of the app, tap "Choose pre-installed models" on the settings page, select "mobilenet_v1_int8_for_apu", then go back; the app switches to the APU model and the frame rate rises to 14 fps, as shown below.
![mt8168_p2v1_tablet_apu](https://paddlelite-demo.bj.bcebos.com/devices/mediatek/mt8168_p2v1_tablet_apu.jpg)
### Update the model
- Obtain the MobileNetV1 float32 model [mobilenet_v1_fp32_224_fluid](https://paddlelite-demo.bj.bcebos.com/models/mobilenet_v1_fp32_224_fluid.tar.gz) by training with Paddle Fluid or converting with X2Paddle
- Quantize the float32 model with PaddleSlim following [Post-training quantization with calibration data](../user_guides/post_quant_with_data) (note: since the MTK APU only supports quantized ops, pay attention to the related parameters when launching the quantization script), which yields the fully quantized MobileNetV1 model [mobilenet_v1_int8_224_fluid](https://paddlelite-demo.bj.bcebos.com/devices/mediatek/mobilenet_v1_int8_224_fluid.tar.gz)
- Convert the model with the opt tool as described in [Model optimization](../user_guides/model_optimize_tool); simply set valid_targets to apu,arm.
```shell
$ ./opt --model_dir=mobilenet_v1_int8_224_fluid \
--optimize_out_type=naive_buffer \
--optimize_out=mobilenet_v1_int8_224_for_apu \
--valid_targets=apu,arm
```
- Note: the model produced by opt only marks the Paddle operators supported by the MTK APU; it does not actually generate the MTK APU model. Only at execution time are the marked Paddle operators converted into MTK Neuron Adapter API calls to build, generate, and run the model. The sketch below shows how such a model is typically loaded on the device.
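A rough sketch of loading this model with the light API is given below. The file name matches the opt command above; the input shape and preprocessing are assumptions, see the shipped image_classification_demo.cc for the real implementation.

```c++
#include "paddle_api.h"  // PaddleLite light API

using namespace paddle::lite_api;  // NOLINT

void RunApuModel() {
  // Load the opt-generated model. The ops marked for the APU are converted
  // into Neuron Adapter calls when the predictor is created and first run.
  MobileConfig config;
  config.set_model_from_file("mobilenet_v1_int8_224_for_apu.nb");
  auto predictor = CreatePaddlePredictor<MobileConfig>(config);

  // Feed a preprocessed 224x224 image (NCHW float input; quantization happens inside the graph).
  auto input = predictor->GetInput(0);
  input->Resize({1, 3, 224, 224});
  float* data = input->mutable_data<float>();
  // ... fill `data` with the preprocessed image ...

  predictor->Run();
  auto output = predictor->GetOutput(0);  // 1000-class scores
  const float* scores = output->data<float>();
  (void)scores;
}
```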
### Update the Paddle Lite library with MTK APU support
- Download the PaddleLite source code and the APU DDK;
```shell
$ git clone https://github.com/PaddlePaddle/Paddle-Lite.git
$ cd Paddle-Lite
$ git checkout <release-version-tag>
$ wget https://paddlelite-demo.bj.bcebos.com/devices/mediatek/apu_ddk.tar.gz
$ tar -xvf apu_ddk.tar.gz
```
- Build full_publish and tiny_publish for the MT8168-P2V1 Tablet
```shell
$ ./lite/tools/build.sh --arm_os=android --arm_abi=armv8 --arm_lang=gcc --android_stl=c++_shared --build_extra=ON --with_log=ON --build_apu=ON --apu_ddk_root=./apu_ddk full_publish
$ ./lite/tools/build.sh --arm_os=android --arm_abi=armv8 --arm_lang=gcc --android_stl=c++_shared --build_extra=ON --with_log=ON --build_apu=ON --apu_ddk_root=./apu_ddk tiny_publish
```
- Replace the PaddleLite-android-demo/libs/PaddleLite/arm64-v8a/include directory with the generated build.lite.android.armv8.gcc/inference_lite_lib.android.armv8.apu/cxx/include;
- Replace the PaddleLite-android-demo/libs/PaddleLite/arm64-v8a/lib/libpaddle_light_api_shared.so file with the generated build.lite.android.armv8.gcc/inference_lite_lib.android.armv8.apu/cxx/lib/libpaddle_light_api_shared.so.
## Other Notes
- Due to licensing issues we unfortunately cannot provide the firmware used for testing. If you are really interested, please contact MTK sales directly through the contact information mentioned above;
- The MTK team is continuously adding operator bridges/converters for Paddle so that more Paddle models can be supported.
......@@ -126,3 +126,80 @@ $ ./lite/tools/build_npu.sh --arm_os=android --arm_abi=armv7 --arm_lang=gcc --an
- Huawei's Da Vinci architecture NPU performs most computation in float16, so prediction results will deviate slightly; in most cases the accuracy loss is small. For the same image, compare the CPU and NPU predictions of the Image Classification Demo for Android in [Paddle-Lite-Demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo).
- The in-house Da Vinci architecture NPU in the Kirin 810/990 SoCs differs from the Cambricon NPU in the Kirin 970/980 SoCs, and likewise from the NNIE used by the Hi3559A and Hi3519A; Paddle Lite only supports Huawei's in-house Da Vinci architecture NPU.
- We are continuously adding Paddle operator bridges/converters for HiAI IR so that more Paddle models can be supported, and the Huawei team keeps optimizing HiAI IR performance.
## Manual subgraph partitioning
### Background
- Paddle-Lite already supports a large number of Huawei NPU operators, but this still does not cover every model. For a model with some unsupported operators, Paddle-Lite splits it into subgraphs that run on the NPU and subgraphs that run on the CPU and schedules them automatically, which usually gives good performance. In some special cases, however, the model is split into many subgraphs and the CPU/NPU switching overhead becomes large, hurting overall performance. Manual subgraph partitioning lets you pin certain operators to the CPU to avoid excessive subgraphs.
### Feature
- A configuration file specifies the operators that must run on the CPU
### Usage
- 1. Open the Paddle model file with Netron to inspect the model structure and obtain each operator's type, input names, and output names.
  - Note: Paddle-Lite optimizes the model and may change its operators; use the operators of the optimized model as the reference. An example is given below.
- 2. Create the configuration file ```split_cfg.txt``` recording the operators that must run on the CPU.
  - One operator record per line; separate the "op type", "op input names", and "op output names" with a colon ":", and separate different variable names within the "op input names" and "op output names" with a comma ",".
  - Input or output names may be partially omitted. For example, ```op3:in3_var0``` matches operators of type "op3" whose input is "in3_var0", and ```op4``` matches all operators of type "op4"
  - Example 1:
```
op0:in0_var0,in0_var1:out0_var0,out0_var1
op1:in1_var0,in1_var1:out1_var0
op2::out2_var0
op3:in3_var0
op4
```
  - Example 2:
```
transpose:conv2d_22.tmp_1:transpose_0.tmp_0
```
![image](https://user-images.githubusercontent.com/50474132/80475316-4a5fda80-897b-11ea-910a-6aee13243387.png)
- 3. Use the environment variable ```SUBGRAPH_CUSTOM_PARTITION_CONFIG_FILE``` to point to the configuration file.
  - For example:
```
export SUBGRAPH_CUSTOM_PARTITION_CONFIG_FILE=/data/local/tmp/split_cfg.txt
```
- 4. After the steps above, operators matching the configuration are forced to run on the CPU when the model is executed.
### Example
- Take the model [ssd_mobilenet_v1](https://paddlelite-demo.bj.bcebos.com/models/ssd_mobilenet_v1_pascalvoc_fp32_300_fluid.tar.gz) as an example
- 1. Inspect the model with Netron
- 2. Initial analysis
  - The figure below shows part of ssd_mobilenet_v1. The red part is currently not supported on the NPU, and the blue part may not perform well on the NPU. If the inference library is simply left to schedule automatically, the model may be split into many subgraphs with poor overall performance. Therefore, the blue and green parts can be pinned to the CPU as a whole, and the remaining parts run automatically on the NPU (the red part automatically falls back to the CPU).
![ssd_mobilenet_v1_example](https://user-images.githubusercontent.com/50474132/80453173-525b5280-895a-11ea-847f-c7dd5b5799de.png)
- 3. Convert the model with opt
  - opt prints log messages during conversion. Search the log for ```digraph G``` and ```// end G``` to find the optimized model graph.
![image](https://user-images.githubusercontent.com/50474132/80454098-145f2e00-895c-11ea-9f16-dde1483a9beb.png)
![image](https://user-images.githubusercontent.com/50474132/80454123-1de89600-895c-11ea-86b9-a62d78a6616d.png)
  - Save the whole model graph, from ```digraph G``` to ```// end G```, into a ```.dot``` file. It can be viewed with ```graphviz``` or in the [online viewer](http://dreampuf.github.io/GraphvizOnline/).
![image](https://user-images.githubusercontent.com/50474132/80454841-47ee8800-895d-11ea-9531-5689c5560fcb.png)
  - Check here whether the operators to be pinned were optimized away. (Ideally the pinned operators still exist independently; if they were fused into another operator, the fused operator must be specified instead.)
- 4. Write the configuration file
  - Specify the operators that the NPU could support but that should run on the CPU.
```
reshape
transpose
concat
softmax
```
  - Since all instances of these operator types are to be pinned, there is no need to specify their input and output names.
- 5. Specify the configuration file path
  - This is done via ```export SUBGRAPH_CUSTOM_PARTITION_CONFIG_FILE=your_split_config_file```.
- 6. Performance test
  - Device: Huawei Mate 30 5G
  - HiAI DDK version: 320
  - Performance: about 71.8 ms on CPU, about 16.6 ms on NPU.
# Deploying PaddleLite Inference with RK NPU
Paddle Lite supports inference deployment on the RK NPU.
The integration works like the earlier Huawei NPU support: the Paddle model is loaded and analyzed, Paddle operators are converted into RK graph-building API calls, and the model is generated and executed online.
## Current Support
### Supported chips
- RK1808 and RK1806; RK3399Pro is not supported yet.
### Supported devices
- RK1808/1806 EVB.
### Supported Paddle models
- [Fully quantized MobileNetV1](https://paddlelite-demo.bj.bcebos.com/devices/rockchip/mobilenet_v1_int8_224_fluid.tar.gz)
### Supported (fully or partially) Paddle operators
- relu
- conv2d
- depthwise_conv2d
- pool2d
- fc
- softmax
- batch_norm
- concat
- elementwise_add
- elementwise_sub
- elementwise_mul
- elementwise_div
## Reference Demo
### Test device (RK1808 EVB)
![rk1808_evb_front](https://paddlelite-demo.bj.bcebos.com/devices/rockchip/rk1808_evb_front.jpg)
![rk1808_evb_back](https://paddlelite-demo.bj.bcebos.com/devices/rockchip/rk1808_evb_back.jpg)
### Prepare the device environment
- A specific firmware version is required; follow the instructions in [rknpu_ddk](https://github.com/airockchip/rknpu_ddk) to update the device firmware;
- After flashing the firmware, the RK1808 EVB runs a bare Linux system where software cannot be conveniently installed with apt-get as on Ubuntu, so both the demo and the PaddleLite library are cross-compiled;
- Plug a Micro-USB cable into the device's Micro-USB OTG port and you can interact with the device via Android's adb command, with no need to configure a network for ssh or to use a serial console. A very nice design!
### Prepare the cross-compilation environment
- To keep the build environment consistent, configure the Docker development environment as described in [Compiling from source](../user_guides/source_compile).
### Run the image classification demo
- Download the demo from [https://paddlelite-demo.bj.bcebos.com/devices/rockchip/PaddleLite-linux-demo.tar.gz](https://paddlelite-demo.bj.bcebos.com/devices/rockchip/PaddleLite-linux-demo.tar.gz); after extraction the contents are:
```shell
- PaddleLite-linux-demo
  - image_classification_demo
    - assets
      - images
        - tabby_cat.jpg # test image
        - tabby_cat.raw # test image already converted to raw data
      - labels
        - synset_words.txt # labels for the 1000 classes
      - models
        - mobilenet_v1_int8_224_for_cpu.nb # MobileNetV1 quantized model converted by opt for Arm CPU
        - mobilenet_v1_int8_224_for_rknpu.nb # MobileNetV1 quantized model converted by opt for RK NPU
    - shell
      - CMakeLists.txt # CMake script of the demo
      - build
        - image_classification_demo # prebuilt demo binary
      - image_classification_demo.cc # demo source code
      - convert_to_raw_image.py # Python script that converts a test image to raw data
      - build.sh # demo build script
      - run.sh # demo run script
  - libs
    - PaddleLite
      - arm64
        - include # PaddleLite headers
        - lib
          - libGAL.so # RK DDK libraries
          - libOpenVX.so
          - libVSC.so
          - librknpu_ddk.so
          - libgomp.so.1 # GNU OpenMP library
          - libpaddle_light_api_shared.so # prebuilt PaddleLite library
      - armhf
        - include # PaddleLite headers
        - lib
          - libGAL.so
          - libOpenVX.so
          - libVSC.so
          - librknpu_ddk.so
          - libgomp.so.1
          - libpaddle_light_api_shared.so
```
- Enter PaddleLite-linux-demo/image_classification_demo/shell and simply run ./run.sh arm64. Note: run.sh cannot be executed inside the Docker environment, otherwise the device cannot be found;
```shell
$ cd PaddleLite-linux-demo/image_classification_demo/shell
$ ./run.sh arm64 # For RK1808 EVB
$ ./run.sh armhf # For RK1806 EVB
...
warmup: 5 repeat: 10, average: 6.499500 ms, max: 6.554000 ms, min: 6.468000 ms
results: 3
Top0 Egyptian cat - 0.532328
Top1 tabby, tabby cat - 0.345136
Top2 tiger cat - 0.111146
Preprocess time: 2.414000 ms
Prediction time: 6.499500 ms
Postprocess time: 0.414000 ms
```
- To test a different image, generate its raw data with the convert_to_raw_image.py tool;
- To rebuild the demo, simply run ./build.sh. Note: build.sh must be executed inside the Docker environment, otherwise the build may fail.
### Update the model
- Obtain the MobileNetV1 float32 model [mobilenet_v1_fp32_224_fluid](https://paddlelite-demo.bj.bcebos.com/models/mobilenet_v1_fp32_224_fluid.tar.gz) by training with Paddle Fluid or converting with X2Paddle
- Quantize the float32 model with PaddleSlim following [Post-training quantization with calibration data](../user_guides/post_quant_with_data) (note: since the RK NPU only supports tensor-wise fully quantized models, pay attention to the related parameters when launching the quantization script), which yields the fully quantized MobileNetV1 model [mobilenet_v1_int8_224_fluid](https://paddlelite-demo.bj.bcebos.com/devices/rockchip/mobilenet_v1_int8_224_fluid.tar.gz)
- Convert the model with the opt tool as described in [Model optimization](../user_guides/model_optimize_tool); simply set valid_targets to rknpu,arm.
```shell
$ ./opt --model_dir=mobilenet_v1_int8_224_fluid \
--optimize_out_type=naive_buffer \
--optimize_out=mobilenet_v1_int8_224_for_rknpu \
--valid_targets=rknpu,arm
```
- Note: the model produced by opt only marks the Paddle operators supported by the RK NPU; it does not actually generate the RK NPU model. Only at execution time are the marked Paddle operators converted into RK graph-building API calls to generate and run the model.
### Update the Paddle Lite library with RK NPU support
- Download the PaddleLite source code and the RK DDK;
```shell
$ git clone https://github.com/PaddlePaddle/Paddle-Lite.git
$ cd Paddle-Lite
$ git checkout <release-version-tag>
$ git clone https://github.com/airockchip/rknpu_ddk.git
```
- Build full_publish and tiny_publish for the RK1808 and RK1806 EVB
```shell
# For RK1808 EVB
$ ./lite/tools/build.sh --arm_os=armlinux --arm_abi=armv8 --arm_lang=gcc --build_extra=ON --with_log=ON --build_rknpu=ON --rknpu_ddk_root=./rknpu_ddk full_publish
$ ./lite/tools/build.sh --arm_os=armlinux --arm_abi=armv8 --arm_lang=gcc --build_extra=ON --with_log=ON --build_rknpu=ON --rknpu_ddk_root=./rknpu_ddk tiny_publish
# For RK1806 EVB
$ ./lite/tools/build.sh --arm_os=armlinux --arm_abi=armv7 --arm_lang=gcc --build_extra=ON --with_log=ON --build_rknpu=ON --rknpu_ddk_root=./rknpu_ddk full_publish
$ ./lite/tools/build.sh --arm_os=armlinux --arm_abi=armv7 --arm_lang=gcc --build_extra=ON --with_log=ON --build_rknpu=ON --rknpu_ddk_root=./rknpu_ddk tiny_publish
```
- Replace the PaddleLite-linux-demo/libs/PaddleLite/arm64/include directory with the generated build.lite.armlinux.armv8.gcc/inference_lite_lib.armlinux.armv8.rknpu/cxx/include;
- Replace the PaddleLite-linux-demo/libs/PaddleLite/arm64/lib/libpaddle_light_api_shared.so file with the generated build.lite.armlinux.armv8.gcc/inference_lite_lib.armlinux.armv8.rknpu/cxx/lib/libpaddle_light_api_shared.so;
- Replace the PaddleLite-linux-demo/libs/PaddleLite/armhf/include directory with the generated build.lite.armlinux.armv7.gcc/inference_lite_lib.armlinux.armv7.rknpu/cxx/include;
- Replace the PaddleLite-linux-demo/libs/PaddleLite/armhf/lib/libpaddle_light_api_shared.so file with the generated build.lite.armlinux.armv7.gcc/inference_lite_lib.armlinux.armv7.rknpu/cxx/lib/libpaddle_light_api_shared.so.
## Other Notes
- The Rockchip team is continuously adding operator bridges/converters for Paddle so that more Paddle models can be supported.
......@@ -2,15 +2,14 @@
Paddle-Lite supports building the x86 inference library in Docker or Linux. See [Environment setup](../user_guides/source_compile) for the environment.
(Note: a non-Docker Linux environment must be Ubuntu 16.04)
## Build
1. Download the code
```bash
# Download the Paddle-Lite source code
git clone https://github.com/PaddlePaddle/Paddle-Lite.git
# Switch to the release branch
git checkout release/v2.3
git checkout release/v2.6.0
```
2. Build from source
......@@ -18,6 +17,9 @@ git checkout release/v2.3
```bash
cd Paddle-Lite
./lite/tools/build.sh x86
# Other optional build flags
# --with_log=OFF disables log output
```
## Build Output
......@@ -31,35 +33,68 @@ The x86 build results are located in `build.lite.x86/inference_lite_lib`
- `include` : header files
- `lib` : library files
- Bundled static libraries:
  - `libpaddle_api_full_bundled.a` : static library containing both the full_api and light_api functionality
  - `libpaddle_api_light_bundled.a` : static library containing only the light_api functionality
- Bundled dynamic libraries:
  - `libpaddle_full_api_shared.so` : dynamic library containing both the full_api and light_api functionality
  - `libpaddle_light_api_shared.so` : dynamic library containing only the light_api functionality
- Static libraries:
  - `libpaddle_api_full_bundled.a` : full_api static library
  - `libpaddle_api_light_bundled.a` : light_api static library
- Dynamic libraries:
  - `libpaddle_full_api_shared.so` : full_api dynamic library
  - `libpaddle_light_api_shared.so` : light_api dynamic library
3. `third_party` folder: the third-party mklml library the inference library depends on
- mklml : the mklml math library required by the Paddle-Lite inference library
4. `demo/cxx` folder: C++ demos for the x86 inference library
- `mobilenetv1_full` : C++ demo running mobilenet_v1 inference with the full_api
- `mobilenetv1_light` : C++ demo running mobilenet_v1 inference with the light_api
3. `third_party` folder: third-party library files
## Example of Using the x86 Inference API
1. We provide an example of running mobilenet_v1 with the x86 API on Linux: [mobilenet_full_x86demo](https://paddlelite-data.bj.bcebos.com/x86/mobilenet_full_x86demo.zip). After downloading and extracting, the contents are as follows:
1. Directory structure of `mobilenetv1_full`
![](https://paddlelite-data.bj.bcebos.com/x86/x86-doc/demo.png)
```bash
mobilenetv1_full/
|-- CMakeLists.txt
|-- build.sh
`-- mobilenet_full_api.cc
```
`mobilenet_v1` is the model directory, `lib` and `include` are the Paddle-Lite inference library and headers, `third_party` contains the third-party `mklml` library required at build time, `mobilenet_full_api.cc` is the source code of the x86 example, and `build.sh` is the build script.
This demo is built with CMake: `CMakeLists.txt` is the CMake script, `mobilenet_full_api.cc` is the source code of the x86 example, and `build.sh` is the build script.
2. Demo contents and usage
2. Demo usage
``` bash
# 1. Build
cd mobilenetv1_full
sh build.sh
```
The build produces `mobilenet_full_api` in the current directory.
``` bash
# 2. Run inference
mobilenet_full_api mobilenet_v1
./mobilenet_full_api ./mobilenet_v1
```
`mobilenet_v1` is the model path in the current directory and `mobilenet_full_api` is the executable built in step 1.
Download and extract the model [`mobilenet_v1`](http://paddle-inference-dist.bj.bcebos.com/mobilenet_v1.tar.gz) into the current directory, then run the command above to perform inference.
```bash
# 3. The demo prints the following output: mobilenet_v1 predictions for an all-ones input
Output shape 1000
Output[0]: 0.000191312
Output[100]: 0.000159713
Output[200]: 0.000264313
Output[300]: 0.000210793
Output[400]: 0.00103236
Output[500]: 0.000110071
Output[600]: 0.00482924
Output[700]: 0.00184533
Output[800]: 0.000202116
Output[900]: 0.000585591
```
3. Demo source code `mobilenet_full_api.cc` (a rough sketch of its core follows below)
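The diff truncates the source here. As a minimal sketch (not the verbatim shipped file), and assuming the model directory is passed as the first command-line argument as in the command above, the core of such a full-API demo looks roughly like this:

```c++
#include <iostream>
#include "paddle_api.h"  // from the `include` directory of the x86 inference library

using namespace paddle::lite_api;  // NOLINT

int main(int argc, char** argv) {
  // Usage: ./mobilenet_full_api ./mobilenet_v1
  CxxConfig config;
  config.set_model_dir(argv[1]);
  config.set_valid_places({Place{TARGET(kX86), PRECISION(kFloat)},
                           Place{TARGET(kHost), PRECISION(kFloat)}});
  auto predictor = CreatePaddlePredictor<CxxConfig>(config);

  // All-ones input, matching the expected output shown above.
  auto input = predictor->GetInput(0);
  input->Resize({1, 3, 224, 224});
  float* in_data = input->mutable_data<float>();
  for (int i = 0; i < 3 * 224 * 224; ++i) in_data[i] = 1.0f;

  predictor->Run();

  // Print the number of output elements and every 100th score.
  auto output = predictor->GetOutput(0);
  int64_t num = 1;
  for (auto d : output->shape()) num *= d;
  std::cout << "Output shape " << num << std::endl;
  for (int i = 0; i < num; i += 100) {
    std::cout << "Output[" << i << "]: " << output->data<float>()[i] << std::endl;
  }
  return 0;
}
```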
......
......@@ -54,6 +54,9 @@ Welcome to Paddle-Lite's documentation!
demo_guides/opencl
demo_guides/fpga
demo_guides/npu
demo_guides/baidu_xpu
demo_guides/rockchip_npu
demo_guides/mediatek_apu
.. toctree::
:maxdepth: 1
......
......@@ -16,7 +16,6 @@ message(STATUS "LITE_WITH_MLU:\t${LITE_WITH_MLU}")
message(STATUS "LITE_WITH_BM:\t${LITE_WITH_BM}")
message(STATUS "LITE_WITH_PROFILE:\t${LITE_WITH_PROFILE}")
message(STATUS "LITE_WITH_CV:\t${LITE_WITH_CV}")
message(STATUS "LITE_WITH_ARM_LANG:\t${LITE_WITH_ARM_LANG}")
set(LITE_MODEL_DIR "${THIRD_PARTY_PATH}/install")
set(LITE_ON_MOBILE ${LITE_WITH_LIGHT_WEIGHT_FRAMEWORK})
......@@ -188,15 +187,17 @@ if (LITE_WITH_CUDA OR LITE_WITH_X86)
COMMAND cp "${CMAKE_BINARY_DIR}/libpaddle_api_light_bundled.a" "${INFER_LITE_PUBLISH_ROOT}/cxx/lib"
COMMAND cp "${CMAKE_BINARY_DIR}/lite/api/*.so" "${INFER_LITE_PUBLISH_ROOT}/cxx/lib"
)
add_custom_target(publish_inference_third_party ${TARGET}
COMMAND mkdir -p "${INFER_LITE_PUBLISH_ROOT}/third_party"
COMMAND cp -r "${CMAKE_BINARY_DIR}/third_party/install/*" "${INFER_LITE_PUBLISH_ROOT}/third_party")
if (LITE_WITH_CUDA)
add_custom_target(publish_inference_third_party ${TARGET}
COMMAND mkdir -p "${INFER_LITE_PUBLISH_ROOT}/third_party"
COMMAND cp -r "${CMAKE_BINARY_DIR}/third_party/install/*" "${INFER_LITE_PUBLISH_ROOT}/third_party")
add_dependencies(publish_inference publish_inference_third_party)
endif()
add_dependencies(publish_inference_cxx_lib bundle_full_api)
add_dependencies(publish_inference_cxx_lib bundle_light_api)
add_dependencies(publish_inference_cxx_lib paddle_full_api_shared)
add_dependencies(publish_inference_cxx_lib paddle_light_api_shared)
add_dependencies(publish_inference publish_inference_cxx_lib)
add_dependencies(publish_inference publish_inference_third_party)
endif()
endif()
......@@ -238,9 +239,13 @@ if (LITE_WITH_X86)
add_dependencies(publish_inference_x86_cxx_lib test_model_bin)
add_custom_target(publish_inference_x86_cxx_demos ${TARGET}
COMMAND rm -rf "${INFER_LITE_PUBLISH_ROOT}/demo/cxx"
COMMAND mkdir -p "${INFER_LITE_PUBLISH_ROOT}/demo/cxx"
COMMAND cp -r "${CMAKE_SOURCE_DIR}/lite/demo/cxx/x86_mobilenetv1_light_demo" "${INFER_LITE_PUBLISH_ROOT}/demo/cxx/mobilenetv1_light"
COMMAND cp -r "${CMAKE_SOURCE_DIR}/lite/demo/cxx/x86_mobilenetv1_full_demo" "${INFER_LITE_PUBLISH_ROOT}/demo/cxx/mobilenetv1_full"
COMMAND mkdir -p "${INFER_LITE_PUBLISH_ROOT}/third_party"
COMMAND cp -r "${CMAKE_BINARY_DIR}/third_party/eigen3" "${INFER_LITE_PUBLISH_ROOT}/third_party"
)
COMMAND cp -r "${CMAKE_BINARY_DIR}/third_party/install/mklml" "${INFER_LITE_PUBLISH_ROOT}/third_party/"
)
add_dependencies(publish_inference_x86_cxx_lib publish_inference_x86_cxx_demos)
add_dependencies(publish_inference_x86_cxx_demos paddle_full_api_shared eigen3)
add_dependencies(publish_inference publish_inference_x86_cxx_lib)
......@@ -369,6 +374,8 @@ if (LITE_WITH_LIGHT_WEIGHT_FRAMEWORK AND LITE_WITH_ARM)
COMMAND cp "${CMAKE_SOURCE_DIR}/lite/demo/cxx/makefiles/test_cv/Makefile.${ARM_TARGET_OS}.${ARM_TARGET_ARCH_ABI}" "${INFER_LITE_PUBLISH_ROOT}/demo/cxx/test_cv/Makefile"
COMMAND cp -r "${CMAKE_SOURCE_DIR}/lite/demo/cxx/mask_detection" "${INFER_LITE_PUBLISH_ROOT}/demo/cxx"
COMMAND cp "${CMAKE_SOURCE_DIR}/lite/demo/cxx/makefiles/mask_detection/Makefile.${ARM_TARGET_OS}.${ARM_TARGET_ARCH_ABI}" "${INFER_LITE_PUBLISH_ROOT}/demo/cxx/mask_detection/Makefile"
COMMAND cp -r "${CMAKE_SOURCE_DIR}/lite/demo/cxx/test_libs" "${INFER_LITE_PUBLISH_ROOT}/demo/cxx"
COMMAND cp "${CMAKE_SOURCE_DIR}/lite/demo/cxx/makefiles/test_libs/Makefile.${ARM_TARGET_OS}.${ARM_TARGET_ARCH_ABI}" "${INFER_LITE_PUBLISH_ROOT}/demo/cxx/test_libs/Makefile"
)
add_dependencies(publish_inference_android_cxx_demos logging gflags)
add_dependencies(publish_inference_cxx_lib publish_inference_android_cxx_demos)
......
if(LITE_WITH_LIGHT_WEIGHT_FRAMEWORK)
if(LITE_WITH_LIGHT_WEIGHT_FRAMEWORK OR (NOT LITE_WITH_LOG))
lite_cc_library(place SRCS paddle_place.cc DEPS logging)
else()
lite_cc_library(place SRCS paddle_place.cc DEPS glog)
......
......@@ -48,6 +48,7 @@ USE_LITE_OP(concat)
USE_LITE_OP(conv2d)
USE_LITE_OP(depthwise_conv2d)
USE_LITE_OP(pool2d)
USE_LITE_OP(max_pool2d_with_index)
USE_LITE_OP(batch_norm)
USE_LITE_OP(fusion_elementwise_sub_activation)
USE_LITE_OP(transpose)
......
......@@ -151,6 +151,11 @@ std::vector<std::string> Predictor::GetInputNames() { return input_names_; }
// get outputnames
std::vector<std::string> Predictor::GetOutputNames() { return output_names_; }
// get param names
std::vector<std::string> Predictor::GetParamNames() {
return exec_scope_->AttributeVarNames();
}
// append the names of inputs and outputs into input_names_ and output_names_
void Predictor::PrepareFeedFetch() {
if (!program_) {
......@@ -293,6 +298,7 @@ void Predictor::Build(const cpp::ProgramDesc &desc,
// `inner_places` is used to optimize passes
std::vector<Place> inner_places = valid_places;
for (auto &valid_place : valid_places) {
if (valid_place.target == TARGET(kOpenCL)) continue;
inner_places.emplace_back(
Place(TARGET(kHost), valid_place.precision, valid_place.layout));
}
......@@ -345,9 +351,16 @@ void Predictor::GenRuntimeProgram() {
const lite::Tensor *Predictor::GetTensor(const std::string &name) const {
auto *var = exec_scope_->FindVar(name);
CHECK(var) << "no variable named with " << name << " in exec_scope";
return &var->Get<lite::Tensor>();
}
lite::Tensor *Predictor::GetMutableTensor(const std::string &name) {
auto *var = exec_scope_->FindVar(name);
CHECK(var) << "no variable named with " << name << " in exec_scope";
return var->GetMutable<lite::Tensor>();
}
// get input by name
lite::Tensor *Predictor::GetInputByName(const std::string &name) {
auto element = std::find(input_names_.begin(), input_names_.end(), name);
......
......@@ -85,6 +85,9 @@ class LITE_API Predictor {
// get inputnames and get outputnames.
std::vector<std::string> GetInputNames();
std::vector<std::string> GetOutputNames();
// get param names
std::vector<std::string> GetParamNames();
void PrepareFeedFetch();
// Get offset-th col of fetch results.
......@@ -92,6 +95,9 @@ class LITE_API Predictor {
std::vector<const lite::Tensor*> GetOutputs() const;
const cpp::ProgramDesc& program_desc() const;
// get a mutable tensor according to its name
lite::Tensor* GetMutableTensor(const std::string& name);
// get a const tensor according to its name
const lite::Tensor* GetTensor(const std::string& name) const;
const RuntimeProgram& runtime_program() const;
......@@ -142,9 +148,15 @@ class CxxPaddleApiImpl : public lite_api::PaddlePredictor {
// get inputs names and get outputs names
std::vector<std::string> GetInputNames() override;
std::vector<std::string> GetOutputNames() override;
// get param names
std::vector<std::string> GetParamNames() override;
// get tensor according to tensor's name
std::unique_ptr<const lite_api::Tensor> GetTensor(
const std::string& name) const override;
// get a mutable tensor according to tensor's name
std::unique_ptr<lite_api::Tensor> GetMutableTensor(
const std::string& name) override;
// Get InputTensor by name
std::unique_ptr<lite_api::Tensor> GetInputByName(
......
......@@ -97,6 +97,10 @@ std::vector<std::string> CxxPaddleApiImpl::GetInputNames() {
return raw_predictor_.GetInputNames();
}
std::vector<std::string> CxxPaddleApiImpl::GetParamNames() {
return raw_predictor_.GetParamNames();
}
std::vector<std::string> CxxPaddleApiImpl::GetOutputNames() {
return raw_predictor_.GetOutputNames();
}
......@@ -123,6 +127,12 @@ std::unique_ptr<const lite_api::Tensor> CxxPaddleApiImpl::GetTensor(
return std::unique_ptr<const lite_api::Tensor>(new lite_api::Tensor(x));
}
std::unique_ptr<lite_api::Tensor> CxxPaddleApiImpl::GetMutableTensor(
const std::string &name) {
return std::unique_ptr<lite_api::Tensor>(
new lite_api::Tensor(raw_predictor_.GetMutableTensor(name)));
}
std::unique_ptr<lite_api::Tensor> CxxPaddleApiImpl::GetInputByName(
const std::string &name) {
return std::unique_ptr<lite_api::Tensor>(
......
......@@ -36,7 +36,7 @@ DEFINE_string(model_dir_0, "", "model_dir_0");
DEFINE_string(input_shape_0,
"1,3,224,224",
"input shapes another, separated by colon and comma");
DEFINE_string(target, "arm", "main target for Predictor: arm, opencl");
DEFINE_bool(use_optimize_nb,
false,
"optimized & naive buffer model for mobile devices");
......@@ -51,9 +51,19 @@ void OutputOptModel(const std::string& load_model_dir,
const std::vector<std::vector<int64_t>>& input_shapes) {
lite_api::CxxConfig config;
config.set_model_dir(load_model_dir);
config.set_valid_places({
Place{TARGET(kARM), PRECISION(kFloat)},
});
if (FLAGS_target == "arm") {
config.set_valid_places({
Place{TARGET(kARM), PRECISION(kFloat)},
});
} else if (FLAGS_target == "opencl") {
config.set_valid_places({
Place{TARGET(kOpenCL), PRECISION(kFP16), DATALAYOUT(kImageDefault)},
Place{TARGET(kOpenCL), PRECISION(kFloat), DATALAYOUT(kNCHW)},
Place{TARGET(kOpenCL), PRECISION(kAny), DATALAYOUT(kImageDefault)},
Place{TARGET(kOpenCL), PRECISION(kAny), DATALAYOUT(kNCHW)},
Place{TARGET(kARM)}, // enable kARM CPU kernel when no opencl kernel
});
}
auto predictor = lite_api::CreatePaddlePredictor(config);
// delete old optimized model
......@@ -78,7 +88,7 @@ void Run(const std::vector<std::vector<int64_t>>& input_shapes,
int tid,
const int warmup_times = 5) {
lite_api::MobileConfig config;
config.set_model_dir(model_dir);
config.set_model_from_file(model_dir + ".nb");
config.set_power_mode(power_mode);
config.set_threads(thread_num);
......@@ -197,7 +207,7 @@ void RunTestType_10(const std::vector<std::vector<int64_t>>& input_shapes,
const int repeat,
int warmup = 5) {
lite_api::MobileConfig config;
config.set_model_dir(model_dir);
config.set_model_from_file(model_dir + ".nb");
config.set_power_mode(power_mode);
config.set_threads(thread_num);
......@@ -218,13 +228,13 @@ void RunTestType_11(const std::vector<std::vector<int64_t>>& input_shapes,
const int repeat,
int warmup = 5) {
lite_api::MobileConfig config;
config.set_model_dir(model_dir);
config.set_model_from_file(model_dir + ".nb");
config.set_power_mode(power_mode);
config.set_threads(thread_num);
auto predictor = lite_api::CreatePaddlePredictor(config);
config.set_model_dir(model_dir_0);
config.set_model_from_file(model_dir_0 + ".nb");
auto predictor_0 = lite_api::CreatePaddlePredictor(config);
for (int i = 0; i < 2 * repeat; i += 2) {
......@@ -246,7 +256,8 @@ int main(int argc, char** argv) {
gflags::ParseCommandLineFlags(&argc, &argv, true);
if (FLAGS_model_dir == "") {
LOG(INFO) << "usage: "
<< "--model_dir /path/to/your/model";
<< "--model_dir /path/to/your/model --model_dir_0 "
"/path/to/your/model0 --target `arm` or `opencl`";
exit(0);
}
std::string save_optimized_model_dir = "";
......
......@@ -55,7 +55,7 @@ DEFINE_string(model_file, "", "model file path of the combined-param model");
DEFINE_string(param_file, "", "param file path of the combined-param model");
DEFINE_string(
optimize_out_type,
"protobuf",
"naive_buffer",
"store type of the output optimized model. protobuf/naive_buffer");
DEFINE_bool(display_kernels, false, "Display kernel information");
DEFINE_bool(record_tailoring_info,
......@@ -207,7 +207,7 @@ void PrintOpsInfo(std::set<std::string> valid_ops = {}) {
}
std::cout << std::setiosflags(std::ios::internal);
std::cout << std::setw(maximum_optype_length) << "OP_name";
for (int i = 0; i < targets.size(); i++) {
for (size_t i = 0; i < targets.size(); i++) {
std::cout << std::setw(10) << targets[i].substr(1);
}
std::cout << std::endl;
......@@ -215,7 +215,7 @@ void PrintOpsInfo(std::set<std::string> valid_ops = {}) {
for (auto it = supported_ops.begin(); it != supported_ops.end(); it++) {
std::cout << std::setw(maximum_optype_length) << it->first;
auto ops_valid_places = it->second;
for (int i = 0; i < targets.size(); i++) {
for (size_t i = 0; i < targets.size(); i++) {
if (std::find(ops_valid_places.begin(),
ops_valid_places.end(),
targets[i]) != ops_valid_places.end()) {
......@@ -235,7 +235,7 @@ void PrintOpsInfo(std::set<std::string> valid_ops = {}) {
}
// Print OP info.
auto ops_valid_places = supported_ops.at(*op);
for (int i = 0; i < targets.size(); i++) {
for (size_t i = 0; i < targets.size(); i++) {
if (std::find(ops_valid_places.begin(),
ops_valid_places.end(),
targets[i]) != ops_valid_places.end()) {
......@@ -288,11 +288,11 @@ void ParseInputCommand() {
auto valid_places = paddle::lite_api::ParserValidPlaces();
// get valid_targets string
std::vector<TargetType> target_types = {};
for (int i = 0; i < valid_places.size(); i++) {
for (size_t i = 0; i < valid_places.size(); i++) {
target_types.push_back(valid_places[i].target);
}
std::string targets_str = TargetToStr(target_types[0]);
for (int i = 1; i < target_types.size(); i++) {
for (size_t i = 1; i < target_types.size(); i++) {
targets_str = targets_str + TargetToStr(target_types[i]);
}
......@@ -301,7 +301,7 @@ void ParseInputCommand() {
target_types.push_back(TARGET(kUnk));
std::set<std::string> valid_ops;
for (int i = 0; i < target_types.size(); i++) {
for (size_t i = 0; i < target_types.size(); i++) {
auto ops = supported_ops_target[static_cast<int>(target_types[i])];
valid_ops.insert(ops.begin(), ops.end());
}
......@@ -318,7 +318,7 @@ void CheckIfModelSupported() {
auto valid_unktype_ops = supported_ops_target[static_cast<int>(TARGET(kUnk))];
valid_ops.insert(
valid_ops.end(), valid_unktype_ops.begin(), valid_unktype_ops.end());
for (int i = 0; i < valid_places.size(); i++) {
for (size_t i = 0; i < valid_places.size(); i++) {
auto target = valid_places[i].target;
auto ops = supported_ops_target[static_cast<int>(target)];
valid_ops.insert(valid_ops.end(), ops.begin(), ops.end());
......@@ -340,7 +340,7 @@ void CheckIfModelSupported() {
std::set<std::string> unsupported_ops;
std::set<std::string> input_model_ops;
for (int index = 0; index < cpp_prog.BlocksSize(); index++) {
for (size_t index = 0; index < cpp_prog.BlocksSize(); index++) {
auto current_block = cpp_prog.GetBlock<lite::cpp::BlockDesc>(index);
for (size_t i = 0; i < current_block->OpsSize(); ++i) {
auto& op_desc = *current_block->GetOp<lite::cpp::OpDesc>(i);
......@@ -364,13 +364,13 @@ void CheckIfModelSupported() {
unsupported_ops_str = unsupported_ops_str + ", " + *op_str;
}
std::vector<TargetType> targets = {};
for (int i = 0; i < valid_places.size(); i++) {
for (size_t i = 0; i < valid_places.size(); i++) {
targets.push_back(valid_places[i].target);
}
std::sort(targets.begin(), targets.end());
targets.erase(unique(targets.begin(), targets.end()), targets.end());
std::string targets_str = TargetToStr(targets[0]);
for (int i = 1; i < targets.size(); i++) {
for (size_t i = 1; i < targets.size(); i++) {
targets_str = targets_str + "," + TargetToStr(targets[i]);
}
......
......@@ -82,27 +82,56 @@ void OptBase::SetValidPlaces(const std::string& valid_places) {
"command argument 'valid_targets'";
}
void OptBase::SetOptimizeOut(const std::string& optimized_out_path) {
optimize_out_path_ = optimized_out_path;
void OptBase::SetLiteOut(const std::string& lite_out_name) {
lite_out_name_ = lite_out_name;
}
void OptBase::RunOptimize(bool record_strip_info) {
void OptBase::RecordModelInfo(bool record_strip_info) {
record_strip_info_ = record_strip_info;
}
void OptBase::Run() {
CheckIfModelSupported(false);
OpKernelInfoCollector::Global().SetKernel2path(kernel2path_map);
opt_config_.set_valid_places(valid_places_);
if (model_set_dir_ != "") {
RunOptimizeFromModelSet(record_strip_info);
RunOptimizeFromModelSet(record_strip_info_);
} else {
auto opt_predictor = lite_api::CreatePaddlePredictor(opt_config_);
opt_predictor->SaveOptimizedModel(
optimize_out_path_, model_type_, record_strip_info);
lite_out_name_, model_type_, record_strip_info_);
auto resulted_model_name =
record_strip_info ? "information of striped model" : "optimized model";
record_strip_info_ ? "information of striped model" : "optimized model";
std::cout << "Save the " << resulted_model_name
<< " into :" << optimize_out_path_ << "successfully";
<< " into :" << lite_out_name_ << "successfully";
}
}
void OptBase::RunOptimize(const std::string& model_dir_path,
const std::string& model_path,
const std::string& param_path,
const std::string& valid_places,
const std::string& optimized_out_path) {
SetModelDir(model_dir_path);
SetModelFile(model_path);
SetParamFile(param_path);
SetValidPlaces(valid_places);
SetLiteOut(optimized_out_path);
CheckIfModelSupported(false);
OpKernelInfoCollector::Global().SetKernel2path(kernel2path_map);
opt_config_.set_valid_places(valid_places_);
if (model_set_dir_ != "") {
RunOptimizeFromModelSet(record_strip_info_);
} else {
auto opt_predictor = lite_api::CreatePaddlePredictor(opt_config_);
opt_predictor->SaveOptimizedModel(
lite_out_name_, model_type_, record_strip_info_);
auto resulted_model_name =
record_strip_info_ ? "information of striped model" : "optimized model";
std::cout << "Save the " << resulted_model_name
<< " into :" << lite_out_name_ << "successfully";
}
}
// collect ops info of modelset
void CollectModelMetaInfo(const std::string& output_dir,
const std::vector<std::string>& models,
......@@ -125,7 +154,7 @@ void OptBase::SetModelSetDir(const std::string& model_set_path) {
}
void OptBase::RunOptimizeFromModelSet(bool record_strip_info) {
// 1. mkdir of outputed optimized model set.
lite::MkDirRecur(optimize_out_path_);
lite::MkDirRecur(lite_out_name_);
auto model_dirs = lite::ListDir(model_set_dir_, true);
if (model_dirs.size() == 0) {
LOG(FATAL) << "[" << model_set_dir_ << "] does not contain any model";
......@@ -138,7 +167,7 @@ void OptBase::RunOptimizeFromModelSet(bool record_strip_info) {
std::string input_model_dir =
lite::Join<std::string>({model_set_dir_, name}, "/");
std::string output_model_dir =
lite::Join<std::string>({optimize_out_path_, name}, "/");
lite::Join<std::string>({lite_out_name_, name}, "/");
if (opt_config_.model_file() != "" && opt_config_.param_file() != "") {
auto model_file_path =
......@@ -155,7 +184,7 @@ void OptBase::RunOptimizeFromModelSet(bool record_strip_info) {
auto opt_predictor = lite_api::CreatePaddlePredictor(opt_config_);
opt_predictor->SaveOptimizedModel(
optimize_out_path_, model_type_, record_strip_info);
lite_out_name_, model_type_, record_strip_info);
std::cout << "Optimize done. ";
}
......@@ -164,46 +193,60 @@ void OptBase::RunOptimizeFromModelSet(bool record_strip_info) {
if (record_strip_info) {
// Collect all models information
CollectModelMetaInfo(
optimize_out_path_, model_dirs, lite::TAILORD_OPS_SOURCE_LIST_FILENAME);
lite_out_name_, model_dirs, lite::TAILORD_OPS_SOURCE_LIST_FILENAME);
CollectModelMetaInfo(
lite_out_name_, model_dirs, lite::TAILORD_OPS_LIST_NAME);
CollectModelMetaInfo(
optimize_out_path_, model_dirs, lite::TAILORD_OPS_LIST_NAME);
CollectModelMetaInfo(optimize_out_path_,
model_dirs,
lite::TAILORD_KERNELS_SOURCE_LIST_FILENAME);
lite_out_name_, model_dirs, lite::TAILORD_KERNELS_SOURCE_LIST_FILENAME);
CollectModelMetaInfo(
optimize_out_path_, model_dirs, lite::TAILORD_KERNELS_LIST_NAME);
lite_out_name_, model_dirs, lite::TAILORD_KERNELS_LIST_NAME);
std::cout << "Record the information of stripped models into :"
<< optimize_out_path_ << "successfully";
<< lite_out_name_ << "successfully";
}
}
void OptBase::PrintHelpInfo() {
const std::string opt_version = lite::version();
const char help_info[] =
"At least one argument should be inputed. Valid arguments are listed "
"below:\n"
"------------------------------------------------------------------------"
"-----------------------------------------------------------\n"
" Valid arguments of Paddle-Lite opt are listed below:\n"
"------------------------------------------------------------------------"
"-----------------------------------------------------------\n"
" Arguments of help information:\n"
" `help()` Print help infomation\n"
" Arguments of model optimization:\n"
"\n"
" Arguments of model transformation:\n"
" `set_model_dir(model_dir)`\n"
" `set_model_file(model_file_path)`\n"
" `set_param_file(param_file_path)`\n"
" `set_model_type(protobuf|naive_buffer)`\n"
" `set_optimize_out(output_optimize_model_dir)`\n"
" `set_model_type(protobuf|naive_buffer)`: naive_buffer by "
"default\n"
" `set_lite_out(output_optimize_model_dir)`\n"
" `set_valid_places(arm|opencl|x86|npu|xpu|rknpu|apu)`\n"
" `run_optimize(false|true)`\n"
" ` ----fasle&true refer to whether to record ops info for "
"tailoring lib, false by default`\n"
" Arguments of model checking and ops information:\n"
" `record_model_info(false|true)`: refer to whether to record ops "
"info for striping lib, false by default`\n"
" `run() : start model transformation`\n"
" eg. `opt.set_model_dir(\"./mobilenetv1\"); "
"opt.set_lite_out(\"mobilenetv1_opt\"); opt.set_valid_places(\"arm\"); "
"opt.run();`\n"
"\n"
" You can also transform model through a single input argument:\n"
" `run_optimize(model_dir, model_file_path, param_file_path, "
"model_type, valid_places, lite_out_name) `\n"
" eg. `opt.run_optimize(\"./mobilenetv1\", \"\", \"\", "
"\"naive_buffer\", \"arm\", \"mobilenetv1_opt\");`"
"\n"
" Arguments of checking model and printing ops information:\n"
" `print_all_ops()` Display all the valid operators of "
"Paddle-Lite\n"
" `print_supported_ops` Display supported operators of valid "
"places\n"
" `check_if_model_supported()` Check if the input model is "
"supported\n";
std::cout << "opt version:" << opt_version << std::endl
<< help_info << std::endl;
"supported\n"
"------------------------------------------------------------------------"
"-----------------------------------------------------------\n";
std::cout << "opt version:" << opt_version << std::endl << help_info;
}
// 2. Print supported info of input ops
void OptBase::PrintOpsInfo(const std::set<std::string>& valid_ops) {
......
......@@ -44,16 +44,21 @@ class LITE_API OptBase {
public:
OptBase() = default;
void SetModelSetDir(const std::string &model_set_path);
void SetModelDir(const std::string &model_path);
void SetModelDir(const std::string &model_dir_path);
void SetModelFile(const std::string &model_path);
void SetParamFile(const std::string &param_path);
void SetValidPlaces(const std::string &valid_places);
void SetOptimizeOut(const std::string &optimized_out_path);
void SetLiteOut(const std::string &lite_out_name);
void RecordModelInfo(bool record_strip_info = true);
// set optimized_model type
void SetModelType(std::string model_type);
// transform and save the optimized model
void RunOptimize(bool record_strip_info = false);
void Run();
void RunOptimize(const std::string &model_dir_path = "",
const std::string &model_path = "",
const std::string &param_path = "",
const std::string &valid_places = "",
const std::string &optimized_out_path = "");
// functions of printing info
// 1. help info
void PrintHelpInfo();
......@@ -71,12 +76,12 @@ class LITE_API OptBase {
// valid places for the optimized_model
std::vector<Place> valid_places_;
// filename of the optimized_model
std::string optimize_out_path_;
std::string lite_out_name_;
// type of the optimized_model, kNaiveBuffer default.
LiteModelType model_type_{LiteModelType::kNaiveBuffer};
// Dir path of a set of models, this should be combined with model
std::string model_set_dir_;
bool record_strip_info_{false};
void RunOptimizeFromModelSet(bool record_strip_info = false);
};
......
......@@ -167,6 +167,20 @@ lod_t Tensor::lod() const { return ctensor(raw_tensor_)->lod(); }
void Tensor::SetLoD(const lod_t &lod) { tensor(raw_tensor_)->set_lod(lod); }
std::unique_ptr<Tensor> PaddlePredictor::GetMutableTensor(
const std::string &name) {
LOG(FATAL)
<< "The GetMutableTensor API is only supported by CxxConfig predictor.";
return nullptr;
}
std::vector<std::string> PaddlePredictor::GetParamNames() {
std::vector<std::string> null_result = {};
LOG(FATAL)
<< "The GetParamNames API is only supported by CxxConfig predictor.";
return null_result;
}
void PaddlePredictor::SaveOptimizedModel(const std::string &model_dir,
LiteModelType model_type,
bool record_info) {
......
......@@ -86,6 +86,8 @@ class LITE_API PaddlePredictor {
virtual std::vector<std::string> GetInputNames() = 0;
// Get output names
virtual std::vector<std::string> GetOutputNames() = 0;
// Get param names
virtual std::vector<std::string> GetParamNames();
// Get Input by name
virtual std::unique_ptr<Tensor> GetInputByName(const std::string& name) = 0;
......@@ -93,6 +95,9 @@ class LITE_API PaddlePredictor {
/// Get a readonly tensor, return null if no one called `name` exists.
virtual std::unique_ptr<const Tensor> GetTensor(
const std::string& name) const = 0;
/// Get a mutable tensor, return null if no one called `name` exists
/// internal inference API, not recommended.
virtual std::unique_ptr<Tensor> GetMutableTensor(const std::string& name);
/// Persist the optimized model to disk. This API is only supported by
/// CxxConfig, and the persisted model can be reused for MobileConfig.
......@@ -176,7 +181,7 @@ class LITE_API CxxConfig : public ConfigBase {
#endif
#ifdef LITE_WITH_CUDA
void set_multi_stream(bool multi_stream) { multi_stream_ = multi_stream; }
int multi_stream() const { return multi_stream_; }
bool multi_stream() const { return multi_stream_; }
#endif
#ifdef LITE_WITH_MLU
......@@ -208,6 +213,8 @@ class LITE_API CxxConfig : public ConfigBase {
// current thread.
void set_xpu_workspace_l3_size_per_thread(int l3_size = 0xfffc00);
// XPU only, specify the target device ID for the current thread.
// **DEPRECATED**, use xpu_set_device() at the very beginning of each worker
// thread
void set_xpu_dev_per_thread(int dev_no = 0);
};
......
......@@ -19,7 +19,13 @@
#pragma once
// some platform-independent definitions
#include "lite/utils/macros.h"
#if defined(_WIN32)
#define UNUSED
#define __builtin_expect(EXP, C) (EXP)
#else
#define UNUSED __attribute__((unused))
#endif
#define USE_LITE_OP(op_type__) \
extern int touch_op_##op_type__(); \
......
......@@ -161,6 +161,7 @@ std::set<TargetType> ExpandValidTargets(TargetType target) {
TARGET(kBM),
TARGET(kMLU),
TARGET(kAPU),
TARGET(kRKNPU),
TARGET(kFPGA)});
if (target == TARGET(kAny)) {
return valid_set;
......
......@@ -33,6 +33,7 @@ USE_MIR_PASS(lite_transpose_softmax_transpose_fuse_pass);
USE_MIR_PASS(lite_interpolate_fuse_pass);
USE_MIR_PASS(lite_sequence_pool_concat_fuse_pass);
USE_MIR_PASS(identity_scale_eliminate_pass);
USE_MIR_PASS(identity_dropout_eliminate_pass);
USE_MIR_PASS(lite_conv_elementwise_fuse_pass);
USE_MIR_PASS(lite_conv_activation_fuse_pass);
USE_MIR_PASS(lite_var_conv_2d_activation_fuse_pass);
......@@ -51,5 +52,8 @@ USE_MIR_PASS(mlu_postprocess_pass);
USE_MIR_PASS(weight_quantization_preprocess_pass);
USE_MIR_PASS(apu_subgraph_pass);
USE_MIR_PASS(quantized_op_attributes_inference_pass);
USE_MIR_PASS(lite_scale_activation_fuse_pass);
USE_MIR_PASS(__xpu__resnet_fuse_pass);
USE_MIR_PASS(__xpu__multi_encoder_fuse_pass);
USE_MIR_PASS(__xpu__embedding_with_eltwise_add_fuse_pass);
USE_MIR_PASS(__xpu__fc_fuse_pass);
......@@ -62,8 +62,10 @@ void BindLiteOpt(py::module *m) {
.def("set_model_file", &OptBase::SetModelFile)
.def("set_param_file", &OptBase::SetParamFile)
.def("set_valid_places", &OptBase::SetValidPlaces)
.def("set_optimize_out", &OptBase::SetOptimizeOut)
.def("set_lite_out", &OptBase::SetLiteOut)
.def("set_model_type", &OptBase::SetModelType)
.def("record_model_info", &OptBase::RecordModelInfo)
.def("run", &OptBase::Run)
.def("run_optimize", &OptBase::RunOptimize)
.def("help", &OptBase::PrintHelpInfo)
.def("print_supported_ops", &OptBase::PrintSupportedOps)
......
......@@ -33,11 +33,17 @@ else:
PADDLELITE_VERSION = PADDLELITE_TAG
# core lib of paddlelite is stored as lite.so
LITE_PATH = '${PADDLE_BINARY_DIR}/inference_lite_lib/python/install/lite'
files = os.listdir('${PADDLE_BINARY_DIR}')
INFERENCE_LITE_LIB_PATH = ''
for file in files:
if file.find('inference_lite_lib') == 0:
INFERENCE_LITE_LIB_PATH = '${PADDLE_BINARY_DIR}/' + file
break
LITE_PATH = INFERENCE_LITE_LIB_PATH + '/python/install/lite'
PACKAGE_DATA = {'paddlelite': ['lite.so' if os.name!='nt' else 'lite.pyd']}
# put all thirdparty libraries in paddlelite.libs
PACKAGE_DATA['paddlelite.libs'] = []
LIB_PATH = '${PADDLE_BINARY_DIR}/inference_lite_lib/python/install/libs'
LIB_PATH = INFERENCE_LITE_LIB_PATH + '/python/install/libs/'
if '${WITH_MKL}' == 'ON':
shutil.copy('${MKLML_SHARED_IOMP_LIB}', LIB_PATH)
shutil.copy('${MKLML_SHARED_LIB}', LIB_PATH)
......@@ -49,8 +55,7 @@ if '${WITH_MKL}' == 'ON':
PACKAGE_DATA['paddlelite.libs'] += ['msvcr120.dll']
# link lite.so to paddlelite.libs
if os.name != 'nt':
COMMAND = "patchelf --set-rpath '$ORIGIN/../libs/' ${PADDLE_BINARY_DIR}\
/inference_lite_lib/python/install/lite/lite.so"
COMMAND = "patchelf --set-rpath '$ORIGIN/../libs/' " + LITE_PATH + "/lite.so"
if os.system(COMMAND) != 0:
raise Exception("patch third_party libs failed, command: %s" % COMMAND)
......
......@@ -15,6 +15,7 @@
#include <gflags/gflags.h>
#include <gtest/gtest.h>
#include <fstream>
#include <thread> //NOLINT
#include <vector>
#include "lite/api/cxx_api.h"
#include "lite/api/paddle_use_kernels.h"
......@@ -30,14 +31,18 @@ DEFINE_string(input_img_txt_path,
namespace paddle {
namespace lite {
void TestModel(const std::vector<Place>& valid_places) {
const int g_batch_size = 1;
const int g_thread_num = 1;
void instance_run() {
lite::Predictor predictor;
std::vector<std::string> passes;
std::vector<Place> valid_places({Place{TARGET(kBM), PRECISION(kFloat)},
Place{TARGET(kX86), PRECISION(kFloat)}});
predictor.Build(FLAGS_model_dir, "", "", valid_places, passes);
auto* input_tensor = predictor.GetInput(0);
input_tensor->Resize(DDim(
std::vector<DDim::value_type>({1, 3, FLAGS_im_height, FLAGS_im_width})));
input_tensor->Resize(DDim(std::vector<DDim::value_type>(
{g_batch_size, 3, FLAGS_im_height, FLAGS_im_width})));
auto* data = input_tensor->mutable_data<float>();
auto item_size = input_tensor->dims().production();
if (FLAGS_input_img_txt_path.empty()) {
......@@ -45,12 +50,15 @@ void TestModel(const std::vector<Place>& valid_places) {
data[i] = 1;
}
} else {
std::fstream fs(FLAGS_input_img_txt_path, std::ios::in);
if (!fs.is_open()) {
LOG(FATAL) << "open input_img_txt error.";
}
for (int i = 0; i < item_size; i++) {
fs >> data[i];
for (int j = 0; j < g_batch_size; j++) {
std::fstream fs(FLAGS_input_img_txt_path, std::ios::in);
if (!fs.is_open()) {
LOG(FATAL) << "open input_img_txt error.";
}
for (int i = 0; i < item_size / g_batch_size; i++) {
fs >> data[i];
}
data += j * item_size / g_batch_size;
}
}
for (int i = 0; i < FLAGS_warmup; ++i) {
......@@ -72,6 +80,7 @@ void TestModel(const std::vector<Place>& valid_places) {
FILE* fp = fopen("result.txt", "wb");
for (int i = 0; i < out.size(); i++) {
auto* out_data = out[i]->data<float>();
LOG(INFO) << out[i]->numel();
for (int j = 0; j < out[i]->numel(); j++) {
fprintf(fp, "%f\n", out_data[j]);
}
......@@ -79,6 +88,16 @@ void TestModel(const std::vector<Place>& valid_places) {
fclose(fp);
}
void TestModel(const std::vector<Place>& valid_places) {
std::vector<std::unique_ptr<std::thread>> instances_vec;
for (int i = 0; i < g_thread_num; ++i) {
instances_vec.emplace_back(new std::thread(&instance_run));
}
for (int i = 0; i < g_thread_num; ++i) {
instances_vec[i]->join();
}
}
TEST(Classify, test_bm) {
std::vector<Place> valid_places({Place{TARGET(kBM), PRECISION(kFloat)},
Place{TARGET(kX86), PRECISION(kFloat)}});
......
......@@ -2,4 +2,5 @@ if(NOT LITE_WITH_APU)
return()
endif()
lite_cc_library(device_apu SRCS device.cc)
lite_cc_library(neuron_adapter SRCS neuron_adapter.cc)
lite_cc_library(device_apu SRCS device.cc DEPS neuron_adapter)
......@@ -20,48 +20,19 @@ namespace paddle {
namespace lite {
namespace apu {
inline void* LoadFunc(void* libHandle, const char* name) {
CHECK(libHandle != nullptr);
CHECK(name != nullptr);
void* fn = dlsym(libHandle, name);
if (fn == nullptr) {
LOG(WARNING) << "Unable to open Neuron Runtime function [" << name
<< "] Because " << dlerror();
}
return fn;
}
NeuronCompilation* Device::Build(void* libHandle, NeuronModel* model) {
typedef int (*NeuronCompilation_create)(NeuronModel * model,
NeuronCompilation * *compilation);
typedef void (*NeuronCompilation_free)(NeuronCompilation * compilation);
typedef int (*NeuronCompilation_finish)(NeuronCompilation * compilation);
#define LOAD_FUNCTIONS(libHandle, FUNC_NAME, VARIABLE_NAME) \
FUNC_NAME VARIABLE_NAME = \
reinterpret_cast<FUNC_NAME>(LoadFunc(libHandle, #FUNC_NAME));
LOAD_FUNCTIONS(libHandle, NeuronCompilation_create, neuron_compilation_create)
LOAD_FUNCTIONS(libHandle, NeuronCompilation_free, neuron_compilation_free)
LOAD_FUNCTIONS(libHandle, NeuronCompilation_finish, neuron_compilation_finish)
#undef LOAD_FUNCTIONS
int neuron_errCode = 0;
NeuronCompilation* compilation = NULL;
NeuronCompilation* Device::Build(NeuronModel* model) {
VLOG(3) << "[APU] Compile model";
neuron_errCode = (*neuron_compilation_create)(model, &compilation);
NeuronCompilation* compilation = NULL;
int neuron_errCode = NeuronCompilation_create(model, &compilation);
if (NEURON_NO_ERROR != neuron_errCode) {
LOG(WARNING) << "[APU] create compile failed! " << neuron_errCode;
return nullptr;
}
neuron_errCode = (*neuron_compilation_finish)(compilation);
neuron_errCode = NeuronCompilation_finish(compilation);
if (NEURON_NO_ERROR != neuron_errCode) {
LOG(WARNING) << "[APU] compile failed! " << neuron_errCode;
return nullptr;
}
VLOG(3) << "[APU] Build done";
return compilation;
}
......
......@@ -18,7 +18,7 @@
#include <string>
#include <unordered_map>
#include <vector>
#include "NeuronAdapter.h" // NOLINT
#include "lite/backends/apu/neuron_adapter.h"
namespace paddle {
namespace lite {
......@@ -32,7 +32,7 @@ class Device {
}
Device() {}
NeuronCompilation* Build(void* libHandle, NeuronModel* model);
NeuronCompilation* Build(NeuronModel* model);
};
} // namespace apu
......
/* Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "lite/backends/apu/neuron_adapter.h"
#include <dlfcn.h>
#include <string>
#include <vector>
namespace paddle {
namespace lite {
NeuronAdapter* NeuronAdapter::Global() {
static NeuronAdapter adapter;
return &adapter;
}
NeuronAdapter::NeuronAdapter() {
CHECK(InitHandle()) << "Failed to initialize the Neuron Adapter library!";
InitFunctions();
}
bool NeuronAdapter::InitHandle() {
const std::vector<std::string> paths = {
"libneuron_adapter.so",
#if defined(__aarch64__)
"/vendor/lib64/libneuron_adapter.so",
"/system/lib64/libneuron_adapter.so",
"/system/vendor/lib64/libneuron_adapter.so",
#else
"/vendor/lib/libneuron_adapter.so",
"/system/lib/libneuron_adapter.so",
"/system/vendor/lib/libneuron_adapter.so",
#endif
};
std::string target_lib = "Unknown";
for (auto path : paths) {
handle_ = dlopen(path.c_str(), RTLD_LAZY);
if (handle_ != nullptr) {
target_lib = path;
break;
}
}
VLOG(4) << "Load the Neuron Adapter library from " << target_lib;
if (handle_ != nullptr) {
return true;
} else {
return false;
}
}
void NeuronAdapter::InitFunctions() {
CHECK(handle_ != nullptr) << "The library handle can't be null!";
#define PADDLE_DLSYM(neuron_adapter_func) \
do { \
neuron_adapter_func##_ = \
(neuron_adapter_func##_Type)dlsym(handle_, #neuron_adapter_func); \
if (neuron_adapter_func##_ == nullptr) { \
LOG(FATAL) << "Cannot find the " << #neuron_adapter_func \
<< " symbol in libneuron_adapter.so!"; \
break; \
} \
VLOG(4) << "Loaded the " << #neuron_adapter_func \
<< " symbol successfully."; \
} while (false)
PADDLE_DLSYM(Neuron_getVersion);
PADDLE_DLSYM(NeuronModel_create);
PADDLE_DLSYM(NeuronModel_free);
PADDLE_DLSYM(NeuronModel_finish);
PADDLE_DLSYM(NeuronModel_addOperand);
PADDLE_DLSYM(NeuronModel_setOperandValue);
PADDLE_DLSYM(NeuronModel_setOperandSymmPerChannelQuantParams);
PADDLE_DLSYM(NeuronModel_addOperation);
PADDLE_DLSYM(NeuronModel_identifyInputsAndOutputs);
PADDLE_DLSYM(NeuronCompilation_create);
PADDLE_DLSYM(NeuronCompilation_free);
PADDLE_DLSYM(NeuronCompilation_finish);
PADDLE_DLSYM(NeuronExecution_create);
PADDLE_DLSYM(NeuronExecution_free);
PADDLE_DLSYM(NeuronExecution_setInput);
PADDLE_DLSYM(NeuronExecution_setOutput);
PADDLE_DLSYM(NeuronExecution_compute);
#undef PADDLE_DLSYM
}
} // namespace lite
} // namespace paddle
int Neuron_getVersion(uint32_t* version) {
return paddle::lite::NeuronAdapter::Global()->Neuron_getVersion()(version);
}
int NeuronModel_create(NeuronModel** model) {
return paddle::lite::NeuronAdapter::Global()->NeuronModel_create()(model);
}
void NeuronModel_free(NeuronModel* model) {
return paddle::lite::NeuronAdapter::Global()->NeuronModel_free()(model);
}
int NeuronModel_finish(NeuronModel* model) {
return paddle::lite::NeuronAdapter::Global()->NeuronModel_finish()(model);
}
int NeuronModel_addOperand(NeuronModel* model, const NeuronOperandType* type) {
return paddle::lite::NeuronAdapter::Global()->NeuronModel_addOperand()(model,
type);
}
int NeuronModel_setOperandValue(NeuronModel* model,
int32_t index,
const void* buffer,
size_t length) {
return paddle::lite::NeuronAdapter::Global()->NeuronModel_setOperandValue()(
model, index, buffer, length);
}
int NeuronModel_setOperandSymmPerChannelQuantParams(
NeuronModel* model,
int32_t index,
const NeuronSymmPerChannelQuantParams* channelQuant) {
return paddle::lite::NeuronAdapter::Global()
->NeuronModel_setOperandSymmPerChannelQuantParams()(
model, index, channelQuant);
}
int NeuronModel_addOperation(NeuronModel* model,
NeuronOperationType type,
uint32_t inputCount,
const uint32_t* inputs,
uint32_t outputCount,
const uint32_t* outputs) {
return paddle::lite::NeuronAdapter::Global()->NeuronModel_addOperation()(
model, type, inputCount, inputs, outputCount, outputs);
}
int NeuronModel_identifyInputsAndOutputs(NeuronModel* model,
uint32_t inputCount,
const uint32_t* inputs,
uint32_t outputCount,
const uint32_t* outputs) {
return paddle::lite::NeuronAdapter::Global()
->NeuronModel_identifyInputsAndOutputs()(
model, inputCount, inputs, outputCount, outputs);
}
int NeuronCompilation_create(NeuronModel* model,
NeuronCompilation** compilation) {
return paddle::lite::NeuronAdapter::Global()->NeuronCompilation_create()(
model, compilation);
}
void NeuronCompilation_free(NeuronCompilation* compilation) {
return paddle::lite::NeuronAdapter::Global()->NeuronCompilation_free()(
compilation);
}
int NeuronCompilation_finish(NeuronCompilation* compilation) {
return paddle::lite::NeuronAdapter::Global()->NeuronCompilation_finish()(
compilation);
}
int NeuronExecution_create(NeuronCompilation* compilation,
NeuronExecution** execution) {
return paddle::lite::NeuronAdapter::Global()->NeuronExecution_create()(
compilation, execution);
}
void NeuronExecution_free(NeuronExecution* execution) {
return paddle::lite::NeuronAdapter::Global()->NeuronExecution_free()(
execution);
}
int NeuronExecution_setInput(NeuronExecution* execution,
int32_t index,
const NeuronOperandType* type,
const void* buffer,
size_t length) {
return paddle::lite::NeuronAdapter::Global()->NeuronExecution_setInput()(
execution, index, type, buffer, length);
}
int NeuronExecution_setOutput(NeuronExecution* execution,
int32_t index,
const NeuronOperandType* type,
void* buffer,
size_t length) {
return paddle::lite::NeuronAdapter::Global()->NeuronExecution_setOutput()(
execution, index, type, buffer, length);
}
int NeuronExecution_compute(NeuronExecution* execution) {
return paddle::lite::NeuronAdapter::Global()->NeuronExecution_compute()(
execution);
}
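A hedged usage sketch (hypothetical helper, not part of the patch): code written against the plain Neuron C API keeps calling the global symbols defined above, and each call is routed through the dlsym-resolved pointer held by the NeuronAdapter singleton, so no link-time dependency on libneuron_adapter.so is needed.
void LogNeuronVersion() {
  uint32_t version = 0;
  if (Neuron_getVersion(&version) == NEURON_NO_ERROR) {
    VLOG(3) << "[APU] Neuron runtime version: " << version;
  }
}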
/* Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
#include "NeuronAdapter.h" // NOLINT
#include "lite/utils/cp_logging.h"
namespace paddle {
namespace lite {
class NeuronAdapter final {
public:
static NeuronAdapter *Global();
// Platform APIs
using Neuron_getVersion_Type = int (*)(uint32_t *);
using NeuronModel_create_Type = int (*)(NeuronModel **);
using NeuronModel_free_Type = void (*)(NeuronModel *);
using NeuronModel_finish_Type = int (*)(NeuronModel *);
using NeuronModel_addOperand_Type = int (*)(NeuronModel *,
const NeuronOperandType *);
using NeuronModel_setOperandValue_Type = int (*)(NeuronModel *,
int32_t,
const void *,
size_t);
using NeuronModel_setOperandSymmPerChannelQuantParams_Type =
int (*)(NeuronModel *, int32_t, const NeuronSymmPerChannelQuantParams *);
using NeuronModel_addOperation_Type = int (*)(NeuronModel *,
NeuronOperationType,
uint32_t,
const uint32_t *,
uint32_t,
const uint32_t *);
using NeuronModel_identifyInputsAndOutputs_Type = int (*)(
NeuronModel *, uint32_t, const uint32_t *, uint32_t, const uint32_t *);
using NeuronCompilation_create_Type = int (*)(NeuronModel *,
NeuronCompilation **);
using NeuronCompilation_free_Type = void (*)(NeuronCompilation *);
using NeuronCompilation_finish_Type = int (*)(NeuronCompilation *);
using NeuronExecution_create_Type = int (*)(NeuronCompilation *,
NeuronExecution **);
using NeuronExecution_free_Type = void (*)(NeuronExecution *);
using NeuronExecution_setInput_Type = int (*)(NeuronExecution *,
int32_t,
const NeuronOperandType *,
const void *,
size_t);
using NeuronExecution_setOutput_Type = int (*)(
NeuronExecution *, int32_t, const NeuronOperandType *, void *, size_t);
using NeuronExecution_compute_Type = int (*)(NeuronExecution *);
Neuron_getVersion_Type Neuron_getVersion() {
CHECK(Neuron_getVersion_ != nullptr) << "Cannot load Neuron_getVersion!";
return Neuron_getVersion_;
}
NeuronModel_create_Type NeuronModel_create() {
CHECK(NeuronModel_create_ != nullptr) << "Cannot load NeuronModel_create!";
return NeuronModel_create_;
}
NeuronModel_free_Type NeuronModel_free() {
CHECK(NeuronModel_free_ != nullptr) << "Cannot load NeuronModel_free!";
return NeuronModel_free_;
}
NeuronModel_finish_Type NeuronModel_finish() {
CHECK(NeuronModel_finish_ != nullptr) << "Cannot load NeuronModel_finish!";
return NeuronModel_finish_;
}
NeuronModel_addOperand_Type NeuronModel_addOperand() {
CHECK(NeuronModel_addOperand_ != nullptr)
<< "Cannot load NeuronModel_addOperand!";
return NeuronModel_addOperand_;
}
NeuronModel_setOperandValue_Type NeuronModel_setOperandValue() {
CHECK(NeuronModel_setOperandValue_ != nullptr)
<< "Cannot load NeuronModel_setOperandValue!";
return NeuronModel_setOperandValue_;
}
NeuronModel_setOperandSymmPerChannelQuantParams_Type
NeuronModel_setOperandSymmPerChannelQuantParams() {
CHECK(NeuronModel_setOperandSymmPerChannelQuantParams_ != nullptr)
<< "Cannot load NeuronModel_setOperandSymmPerChannelQuantParams!";
return NeuronModel_setOperandSymmPerChannelQuantParams_;
}
NeuronModel_addOperation_Type NeuronModel_addOperation() {
CHECK(NeuronModel_addOperation_ != nullptr)
<< "Cannot load NeuronModel_addOperation!";
return NeuronModel_addOperation_;
}
NeuronModel_identifyInputsAndOutputs_Type
NeuronModel_identifyInputsAndOutputs() {
CHECK(NeuronModel_identifyInputsAndOutputs_ != nullptr)
<< "Cannot load NeuronModel_identifyInputsAndOutputs!";
return NeuronModel_identifyInputsAndOutputs_;
}
NeuronCompilation_create_Type NeuronCompilation_create() {
CHECK(NeuronCompilation_create_ != nullptr)
<< "Cannot load NeuronCompilation_create!";
return NeuronCompilation_create_;
}
NeuronCompilation_free_Type NeuronCompilation_free() {
CHECK(NeuronCompilation_free_ != nullptr)
<< "Cannot load NeuronCompilation_free!";
return NeuronCompilation_free_;
}
NeuronCompilation_finish_Type NeuronCompilation_finish() {
CHECK(NeuronCompilation_finish_ != nullptr)
<< "Cannot load NeuronCompilation_finish!";
return NeuronCompilation_finish_;
}
NeuronExecution_create_Type NeuronExecution_create() {
CHECK(NeuronExecution_create_ != nullptr)
<< "Cannot load NeuronExecution_create!";
return NeuronExecution_create_;
}
NeuronExecution_free_Type NeuronExecution_free() {
CHECK(NeuronExecution_free_ != nullptr)
<< "Cannot load NeuronExecution_free!";
return NeuronExecution_free_;
}
NeuronExecution_setInput_Type NeuronExecution_setInput() {
CHECK(NeuronExecution_setInput_ != nullptr)
<< "Cannot loadcl NeuronExecution_setInput!";
return NeuronExecution_setInput_;
}
NeuronExecution_setOutput_Type NeuronExecution_setOutput() {
CHECK(NeuronExecution_setOutput_ != nullptr)
<< "Cannot load NeuronExecution_setOutput!";
return NeuronExecution_setOutput_;
}
NeuronExecution_compute_Type NeuronExecution_compute() {
CHECK(NeuronExecution_compute_ != nullptr)
<< "Cannot load NeuronExecution_compute!";
return NeuronExecution_compute_;
}
private:
NeuronAdapter();
NeuronAdapter(const NeuronAdapter &) = delete;
NeuronAdapter &operator=(const NeuronAdapter &) = delete;
bool InitHandle();
void InitFunctions();
void *handle_{nullptr};
Neuron_getVersion_Type Neuron_getVersion_{nullptr};
NeuronModel_create_Type NeuronModel_create_{nullptr};
NeuronModel_free_Type NeuronModel_free_{nullptr};
NeuronModel_finish_Type NeuronModel_finish_{nullptr};
NeuronModel_addOperand_Type NeuronModel_addOperand_{nullptr};
NeuronModel_setOperandValue_Type NeuronModel_setOperandValue_{nullptr};
NeuronModel_setOperandSymmPerChannelQuantParams_Type
NeuronModel_setOperandSymmPerChannelQuantParams_{nullptr};
NeuronModel_addOperation_Type NeuronModel_addOperation_{nullptr};
NeuronModel_identifyInputsAndOutputs_Type
NeuronModel_identifyInputsAndOutputs_{nullptr};
NeuronCompilation_create_Type NeuronCompilation_create_{nullptr};
NeuronCompilation_free_Type NeuronCompilation_free_{nullptr};
NeuronCompilation_finish_Type NeuronCompilation_finish_{nullptr};
NeuronExecution_create_Type NeuronExecution_create_{nullptr};
NeuronExecution_free_Type NeuronExecution_free_{nullptr};
NeuronExecution_setInput_Type NeuronExecution_setInput_{nullptr};
NeuronExecution_setOutput_Type NeuronExecution_setOutput_{nullptr};
NeuronExecution_compute_Type NeuronExecution_compute_{nullptr};
};
} // namespace lite
} // namespace paddle
......@@ -80,8 +80,10 @@ void conv_compute_6x6_3x3(const float* input,
const operators::ConvParam& param,
ARMContext* ctx) {
auto act_param = param.activation_param;
const int pad_h = (*param.paddings)[0];
const int pad_w = (*param.paddings)[2];
const int pad_h0 = (*param.paddings)[0];
const int pad_h1 = (*param.paddings)[1];
const int pad_w0 = (*param.paddings)[2];
const int pad_w1 = (*param.paddings)[3];
float* tmp_work_space =
ctx->workspace_data<float>() + ctx->llc_size() / sizeof(float);
......@@ -96,8 +98,8 @@ void conv_compute_6x6_3x3(const float* input,
int tile_h = (hout + 5) / 6;
int size_tile = tile_h * tile_w;
int w_pad = win + pad_w * 2;
int h_pad = hin + pad_h * 2;
int w_pad = win + pad_w0 + pad_w1;
int h_pad = hin + pad_h0 + pad_h1;
const int zero_len = w_pad;
float zero_ptr[zero_len]; // NOLINT
......@@ -127,10 +129,10 @@ void conv_compute_6x6_3x3(const float* input,
prepack_input_nxwc4_dw(input + ni * in_n_stride,
input_c4 + i * new_c_stride,
i * 4,
-pad_h,
hin + pad_h,
-pad_w,
win + pad_w,
-pad_h0,
hin + pad_h1,
-pad_w0,
win + pad_w1,
chin,
win,
hin,
......@@ -367,8 +369,10 @@ void conv_compute_2x2_3x3(const float* input,
const operators::ConvParam& param,
ARMContext* ctx) {
auto act_param = param.activation_param;
const int pad_h = (*param.paddings)[0];
const int pad_w = (*param.paddings)[2];
const int pad_h0 = (*param.paddings)[0];
const int pad_h1 = (*param.paddings)[1];
const int pad_w0 = (*param.paddings)[2];
const int pad_w1 = (*param.paddings)[3];
float* tmp_work_space =
ctx->workspace_data<float>() + ctx->llc_size() / sizeof(float);
......@@ -383,8 +387,8 @@ void conv_compute_2x2_3x3(const float* input,
int tile_h = (hout + 1) / 2;
int size_tile = tile_h * tile_w;
int w_pad = win + pad_w * 2;
int h_pad = hin + pad_h * 2;
int w_pad = win + pad_w0 + pad_w1;
int h_pad = hin + pad_h0 + pad_h1;
const int zero_len = w_pad;
float zero_ptr[zero_len]; // NOLINT
......@@ -414,10 +418,10 @@ void conv_compute_2x2_3x3(const float* input,
prepack_input_nxwc4_dw(input + ni * in_n_stride,
input_c4 + i * new_c_stride,
i * 4,
-pad_h,
hin + pad_h,
-pad_w,
win + pad_w,
-pad_h0,
hin + pad_h1,
-pad_w0,
win + pad_w1,
chin,
win,
hin,
......@@ -628,8 +632,10 @@ void conv_compute_2x2_3x3_small(const float* input,
const operators::ConvParam& param,
ARMContext* ctx) {
auto act_param = param.activation_param;
const int pad_h = (*param.paddings)[0];
const int pad_w = (*param.paddings)[2];
const int pad_h0 = (*param.paddings)[0];
const int pad_h1 = (*param.paddings)[1];
const int pad_w0 = (*param.paddings)[2];
const int pad_w1 = (*param.paddings)[3];
float* tmp_work_space =
ctx->workspace_data<float>() + ctx->llc_size() / sizeof(float);
......@@ -644,8 +650,8 @@ void conv_compute_2x2_3x3_small(const float* input,
int tile_h = (hout + 1) / 2;
int size_tile = tile_h * tile_w;
int w_pad = win + pad_w * 2;
int h_pad = hin + pad_h * 2;
int w_pad = win + pad_w0 + pad_w1;
int h_pad = hin + pad_h0 + pad_h1;
const int zero_len = w_pad;
float zero_ptr[zero_len]; // NOLINT
......@@ -676,10 +682,10 @@ void conv_compute_2x2_3x3_small(const float* input,
prepack_input_nxwc4_dw(input + ni * in_n_stride,
input_c4 + i * new_c_stride,
i * 4,
-pad_h,
hin + pad_h,
-pad_w,
win + pad_w,
-pad_h0,
hin + pad_h1,
-pad_w0,
win + pad_w1,
chin,
win,
hin,
......
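Note on the padding change in the three conv_compute_*_3x3 kernels above: the paddings vector presumably holds {pad_h0, pad_h1, pad_w0, pad_w1} (top, bottom, left, right), so the padded extents become h_pad = hin + pad_h0 + pad_h1 and w_pad = win + pad_w0 + pad_w1. For example, win = 5 with pad_w0 = 1 and pad_w1 = 2 gives w_pad = 8, which the old symmetric form win + 2 * pad_w could not express.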
......@@ -33,6 +33,7 @@ void add_bias_rowwise(Tensor* input,
for (int w = start_w; w < w_adds; ++w) {
i_data[w] += b_data[w];
}
i_data += width;
}
}
void vector_dot(
......@@ -67,15 +68,8 @@ void vector_dot(
for (int i = 0; i < remain; ++i) {
if (!v2) {
out_ptr[i] = in_ptr[i] * v1_ptr[i];
++out_ptr;
++in_ptr;
++v1_ptr;
} else {
out_ptr[i] = in_ptr[i] + v1_ptr[i] * v2_ptr[i];
++out_ptr;
++in_ptr;
++v1_ptr;
++v2_ptr;
}
}
}
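Note on the vector_dot change above: the remainder loop already indexes with i, so the removed pointer increments were also advancing the base pointers, making every iteration after the first address the wrong element; without them each tail element is touched exactly once at out_ptr[i], in_ptr[i] and v1_ptr[i] (and v2_ptr[i] in the fused case).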
......
......@@ -21,6 +21,17 @@ namespace paddle {
namespace lite {
namespace arm {
namespace math {
int AdaptStartIndex(int ph, int input_size, int output_size) {
return static_cast<int>(
floor(static_cast<double>(ph * input_size) / output_size));
}
int AdaptEndIndex(int ph, int input_size, int output_size) {
return static_cast<int>(
ceil(static_cast<double>((ph + 1) * input_size) / output_size));
}
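A small standalone sketch (hypothetical, not part of the patch) of the window mapping these helpers produce; for input_size = 5 and output_size = 3 the three output positions read from input ranges [0, 2), [1, 4) and [3, 5).
#include <cmath>
#include <cstdio>
int main() {
  const int input_size = 5, output_size = 3;
  for (int ph = 0; ph < output_size; ++ph) {
    int start = static_cast<int>(
        std::floor(static_cast<double>(ph * input_size) / output_size));
    int end = static_cast<int>(
        std::ceil(static_cast<double>((ph + 1) * input_size) / output_size));
    std::printf("output %d <- input [%d, %d)\n", ph, start, end);
  }
  return 0;
}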
void pooling_basic(const float* din,
float* dout,
int num,
......@@ -88,15 +99,27 @@ void pooling_basic(const float* din,
#pragma omp parallel for
for (int ind_c = 0; ind_c < chin; ++ind_c) {
for (int ind_h = 0; ind_h < hout; ++ind_h) {
int sh = ind_h * stride_h;
int eh = sh + kernel_h;
sh = (sh - pad_h) < 0 ? 0 : sh - pad_h;
eh = (eh - pad_h) > hin ? hin : eh - pad_h;
int sh, eh;
if (adaptive) {
sh = AdaptStartIndex(ind_h, hin, hout);
eh = AdaptEndIndex(ind_h, hin, hout);
} else {
sh = ind_h * stride_h;
eh = sh + kernel_h;
sh = (sh - pad_h) < 0 ? 0 : sh - pad_h;
eh = (eh - pad_h) > hin ? hin : eh - pad_h;
}
for (int ind_w = 0; ind_w < wout; ++ind_w) {
int sw = ind_w * stride_w;
int ew = sw + kernel_w;
sw = (sw - pad_w) < 0 ? 0 : sw - pad_w;
ew = (ew - pad_w) > win ? win : ew - pad_w;
int sw, ew;
if (adaptive) {
sw = AdaptStartIndex(ind_w, win, wout);
ew = AdaptEndIndex(ind_w, win, wout);
} else {
sw = ind_w * stride_w;
ew = sw + kernel_w;
sw = (sw - pad_w) < 0 ? 0 : sw - pad_w;
ew = (ew - pad_w) > win ? win : ew - pad_w;
}
float result = static_cast<float>(0);
int dst_ind = (ind_n * chout + ind_c) * size_channel_out +
ind_h * wout + ind_w;
......
...... (diff for this file is collapsed)
......@@ -40,6 +40,15 @@ void scale_compute_basic(const operators::ScaleParam& param) {
template <typename T>
void scale(const T* din, T* dout, int num, T scale, T bias);
template <typename T>
void scale_relu(const T* din, T* dout, int num, T scale, T bias);
template <typename T>
void scale_relu6(const T* din, T* dout, int num, T scale, T bias, T alpha);
template <typename T>
void scale_leaky_relu(const T* din, T* dout, int num, T scale, T bias, T alpha);
template <typename T>
void scale(const T* din,
T* dout,
......
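A hedged reference sketch of the fused semantics the new declarations above suggest (assumption: scale_relu computes dout[i] = max(din[i] * scale + bias, 0); the optimized ARM implementations live elsewhere and may differ in detail):
template <typename T>
void scale_relu_naive(const T* din, T* dout, int num, T scale, T bias) {
  for (int i = 0; i < num; ++i) {
    T v = din[i] * scale + bias;
    dout[i] = v > static_cast<T>(0) ? v : static_cast<T>(0);
  }
}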
......@@ -24,16 +24,17 @@ std::map<int, void*> TargetWrapperBM::bm_hds_;
size_t TargetWrapperBM::num_devices() {
int count = 0;
bm_dev_getcount(&count);
bm_status_t ret = bm_dev_getcount(&count);
CHECK_EQ(ret, BM_SUCCESS) << "Failed with error code: "
<< static_cast<int>(ret);
return count;
}
int TargetWrapperBM::GetDevice() { return device_id_; }
void TargetWrapperBM::SetDevice(int id) {
/*
if (id < 0 || (size_t)id >= num_devices()) {
LOG(FATAL) << "Failed with invalid device id " << id;
}
*/
if (id < 0 || (size_t)id >= num_devices()) {
LOG(FATAL) << "Failed with invalid device id " << id;
}
device_id_ = id;
if (bm_hds_.find(id) == bm_hds_.end()) {
bm_handle_t bm_handle;
......
......@@ -31,6 +31,7 @@ class TargetWrapper<TARGET(kBM)> {
static size_t maximum_stream() { return 0; }
static void SetDevice(int id);
static int GetDevice();
static void CreateStream(stream_t* stream) {}
static void DestroyStream(const stream_t& stream) {}
......
......@@ -28,6 +28,7 @@ namespace lite {
class CLContext {
public:
~CLContext() {
GetCommandQueue().finish();
for (size_t kidx = 0; kidx < kernels_.size(); ++kidx) {
// Note(ysh329): Don't need `clReleaseKernel`
kernels_[kidx].reset();
......
......@@ -100,16 +100,18 @@ TEST(cl_test, kernel_test) {
size_t width = in_image.ImageWidth();
size_t height = in_image.ImageHeight();
auto global_work_size = cl::NDRange{width, height};
cl::Event event;
status = context->GetCommandQueue().enqueueNDRangeKernel(
kernel, cl::NullRange, global_work_size, cl::NullRange, nullptr, &event);
kernel, cl::NullRange, global_work_size, cl::NullRange, nullptr, nullptr);
CL_CHECK_FATAL(status);
status = context->GetCommandQueue().finish();
CL_CHECK_FATAL(status);
#if 0
double start_nanos = event.getProfilingInfo<CL_PROFILING_COMMAND_START>();
double stop_nanos = event.getProfilingInfo<CL_PROFILING_COMMAND_END>();
double elapsed_micros = (stop_nanos - start_nanos) / 1000.0;
LOG(INFO) << "Kernel Run Cost Time: " << elapsed_micros << " us.";
#endif
LOG(INFO) << out_image;
}
......
......@@ -73,7 +73,7 @@ void CLImageConverterDefault::NCHWToImage(float *nchw,
i2 += 4;
p++;
} else {
image[i2] = 0.0;
image[i2] = Float2Half(0.f);
i2 += 4;
}
}
......@@ -261,7 +261,7 @@ void CLImageConverterNWBlock::NCHWToImage(float *tensor,
image[index] = Float2Half(*p);
p++;
} else {
image[index] = 0.0;
image[index] = Float2Half(0.f);
}
if (index >= (width * height * 4)) {
LOG(INFO) << " index out of range ";
......
......@@ -11,7 +11,6 @@ distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
/////////////////////////////////
......@@ -108,7 +107,8 @@ inline CL_DTYPE4 activation_type4(CL_DTYPE4 in
#endif
#ifdef RELU6
output = clamp(in, (CL_DTYPE4)0, (CL_DTYPE4)6);
in = fmax((CL_DTYPE4)(0.0f, 0.0f, 0.0f, 0.0f), in);
output = fmin((CL_DTYPE4)(6.0f, 6.0f, 6.0f, 6.0f), in);
#endif
return output;
}
......@@ -14,36 +14,30 @@ limitations under the License. */
#include <cl_common.h>
__kernel void relu(__read_only image2d_t input,
__write_only image2d_t output,
__private const float threshold,
__private const float scale) {
const int x = get_global_id(0); // image_width
const int y = get_global_id(1); // image_height
const int x = get_global_id(0); // image_width
const int y = get_global_id(1); // image_height
const sampler_t sampler = CLK_NORMALIZED_COORDS_TRUE |
CLK_ADDRESS_CLAMP |
CLK_FILTER_NEAREST;
const sampler_t sampler =
CLK_NORMALIZED_COORDS_TRUE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
CL_DTYPE4 in = READ_IMG_TYPE(CL_DTYPE_CHAR, input, sampler, (int2)(x, y));
in = max((CL_DTYPE4)(0.0f), in);
WRITE_IMG_TYPE(CL_DTYPE_CHAR, output, (int2)(x, y), in);
}
__kernel void relu6(__read_only image2d_t input,
__write_only image2d_t output,
__private const float threshold,
__private const float scale){
__private const float scale) {
const int x = get_global_id(0);
const int y = get_global_id(1);
const sampler_t sampler = CLK_NORMALIZED_COORDS_TRUE |
CLK_ADDRESS_CLAMP |
CLK_FILTER_NEAREST;
const sampler_t sampler =
CLK_NORMALIZED_COORDS_TRUE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
CL_DTYPE4 in = READ_IMG_TYPE(CL_DTYPE_CHAR, input, sampler, (int2)(x, y));
in = max((CL_DTYPE4)(0.0f, 0.0f, 0.0f, 0.0f), in);
......@@ -51,7 +45,6 @@ __kernel void relu6(__read_only image2d_t input,
WRITE_IMG_TYPE(CL_DTYPE_CHAR, output, (int2)(x, y), in);
}
__kernel void sigmoid(__read_only image2d_t input,
__write_only image2d_t output,
__private const float threshold,
......@@ -64,70 +57,66 @@ __kernel void sigmoid(__read_only image2d_t input,
CL_DTYPE4 in = READ_IMG_TYPE(CL_DTYPE_CHAR, input, sampler, (int2)(x, y));
CL_DTYPE4 out;
out.x = 1.0 / (1.0 + pow(2.71828182, -1.0 * (float)(in.x)));
out.y = 1.0 / (1.0 + pow(2.71828182, -1.0 * (float)(in.y)));
out.z = 1.0 / (1.0 + pow(2.71828182, -1.0 * (float)(in.z)));
out.w = 1.0 / (1.0 + pow(2.71828182, -1.0 * (float)(in.w)));
out.x = (CL_DTYPE)(1.0f / (1.0f + pow(2.71828182f, -1.0f * (float)(in.x))));
out.y = (CL_DTYPE)(1.0f / (1.0f + pow(2.71828182f, -1.0f * (float)(in.y))));
out.z = (CL_DTYPE)(1.0f / (1.0f + pow(2.71828182f, -1.0f * (float)(in.z))));
out.w = (CL_DTYPE)(1.0f / (1.0f + pow(2.71828182f, -1.0f * (float)(in.w))));
WRITE_IMG_TYPE(CL_DTYPE_CHAR, output, (int2)(x, y), out);
}
__kernel void leaky_relu(__read_only image2d_t input,
__write_only image2d_t output,
__private const float threshold,
__private const float scale) {
__write_only image2d_t output,
__private const float threshold,
__private const float scale) {
const int x = get_global_id(0);
const int y = get_global_id(1);
const sampler_t sampler = CLK_NORMALIZED_COORDS_TRUE |
CLK_ADDRESS_CLAMP |
CLK_FILTER_NEAREST;
const sampler_t sampler =
CLK_NORMALIZED_COORDS_TRUE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
CL_DTYPE4 in = READ_IMG_TYPE(CL_DTYPE_CHAR, input, sampler, (int2)(x, y));
CL_DTYPE4 s_val = CONVERT_TYPE_TO(scale, CL_DTYPE) * in;
if (in.x < 0.0f){
if (in.x < 0.0f) {
in.x = s_val.x;
}
if (in.y < 0.0f){
if (in.y < 0.0f) {
in.y = s_val.y;
}
if (in.z < 0.0f){
if (in.z < 0.0f) {
in.z = s_val.z;
}
if (in.w < 0.0f){
if (in.w < 0.0f) {
in.w = s_val.w;
}
WRITE_IMG_TYPE(CL_DTYPE_CHAR, output, (int2)(x, y), in);
}
__kernel void tanh_act(__read_only image2d_t input,
__write_only image2d_t output,
__private const float threshold,
__private const float scale) {
const int x = get_global_id(0); // image_width
const int y = get_global_id(1); // image_height
__write_only image2d_t output,
__private const float threshold,
__private const float scale) {
const int x = get_global_id(0); // image_width
const int y = get_global_id(1); // image_height
const sampler_t sampler = CLK_NORMALIZED_COORDS_TRUE |
CLK_ADDRESS_CLAMP |
CLK_FILTER_NEAREST;
const sampler_t sampler =
CLK_NORMALIZED_COORDS_TRUE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
CL_DTYPE4 in = READ_IMG_TYPE(CL_DTYPE_CHAR, input, sampler, (int2)(x, y));
CL_DTYPE4 out= (exp(in) - exp(-in))/ (exp(in) + exp(-in));
CL_DTYPE4 out = (exp(in) - exp(-in)) / (exp(in) + exp(-in));
WRITE_IMG_TYPE(CL_DTYPE_CHAR, output, (int2)(x, y), out);
}
__kernel void exp_act(__read_only image2d_t input,
__write_only image2d_t output,
__private const float threshold,
__private const float scale) {
const int x = get_global_id(0); // image_width
const int y = get_global_id(1); // image_height
__private const float scale) {
const int x = get_global_id(0); // image_width
const int y = get_global_id(1); // image_height
const sampler_t sampler = CLK_NORMALIZED_COORDS_TRUE |
CLK_ADDRESS_CLAMP |
CLK_FILTER_NEAREST;
const sampler_t sampler =
CLK_NORMALIZED_COORDS_TRUE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
CL_DTYPE4 in = READ_IMG_TYPE(CL_DTYPE_CHAR, input, sampler, (int2)(x, y));
CL_DTYPE4 out = exp(in);
......@@ -135,19 +124,16 @@ __kernel void exp_act(__read_only image2d_t input,
}
__kernel void swish(__read_only image2d_t input,
__write_only image2d_t output,
__private const float threshold,
__private const float scale) {
const int x = get_global_id(0); // image_width
const int y = get_global_id(1); // image_height
__write_only image2d_t output,
__private const float threshold,
__private const float scale) {
const int x = get_global_id(0); // image_width
const int y = get_global_id(1); // image_height
const sampler_t sampler = CLK_NORMALIZED_COORDS_TRUE |
CLK_ADDRESS_CLAMP |
CLK_FILTER_NEAREST;
const sampler_t sampler =
CLK_NORMALIZED_COORDS_TRUE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
CL_DTYPE4 in = READ_IMG_TYPE(CL_DTYPE_CHAR, input, sampler, (int2)(x, y));
CL_DTYPE4 out = in / (1 + exp(-(CL_DTYPE)scale * in));
WRITE_IMG_TYPE(CL_DTYPE_CHAR, output, (int2)(x, y), out);
}
#include <cl_common.h>
__kernel void conv2d_1x1_opt(__private const int global_size_dim0,
__private const int global_size_dim1,
__private const int global_size_dim2,
__read_only image2d_t input_image,
__read_only image2d_t filter,
__kernel void conv2d_1x1_opt(
__private const int global_size_dim0,
__private const int global_size_dim1,
__private const int global_size_dim2,
__read_only image2d_t input_image,
__read_only image2d_t filter,
#if defined(BIASE_CH) || defined(BIASE_ELE)
__read_only image2d_t bias,
#endif
#ifdef BATCH_NORM
__read_only image2d_t new_scale,
__read_only image2d_t new_biase,
__read_only image2d_t new_scale,
__read_only image2d_t new_biase,
#endif
__write_only image2d_t output_image,
__private const int stride,
__private const int offset,
__private const int input_c_block,
__private const int input_c_origin,
__private const int dilation,
__private const int input_width, /* of one block */
__private const int input_height, /* of one block */
__private const int output_width,
__private const int output_height,
__private const int old_w) {
__write_only image2d_t output_image,
__private const int stride,
__private const int offset,
__private const int input_c_block,
__private const int input_c_origin,
__private const int dilation,
__private const int input_width, /* of one block */
__private const int input_height, /* of one block */
__private const int output_width,
__private const int output_height,
__private const int old_w) {
const int out_c = get_global_id(0);
const int out_w = get_global_id(1);
......@@ -287,7 +288,7 @@ __kernel void conv2d_1x1_simple(
__read_only image2d_t bias,
#endif
#ifdef BATCH_NORM
__read_only image2d_t new_scale,
__read_only image2d_t new_scale,
__read_only image2d_t new_biase,
#endif
__write_only image2d_t output_image,
......
......@@ -18,7 +18,7 @@ limitations under the License. */
////////////////////////////////////////////////////////
// buffer -> image2d
////////////////////////////////////////////////////////
__kernel void buffer_to_image2d(__global CL_DTYPE *in,
__kernel void buffer_to_image2d(__global CL_DTYPE* in,
__write_only image2d_t output_image,
__private const int out_H,
__private const int out_W,
......@@ -26,7 +26,6 @@ __kernel void buffer_to_image2d(__global CL_DTYPE *in,
__private const int Stride0,
__private const int Stride1,
__private const int Stride2) {
const int out_c = get_global_id(0);
const int out_w = get_global_id(1);
const int out_nh = get_global_id(2);
......@@ -66,16 +65,25 @@ __kernel void buffer_to_image2d(__global CL_DTYPE *in,
#ifdef DEBUG
if (out_w > 2045) {
printf("out_w:%d, out_C - 4 * out_c:%d, input[pos0~pos3]:%.2f %.2f %.2f %.2f\n",
out_w,
out_C - 4 * out_c,
(float)(in[input_pos0]),
(float)(in[input_pos1]),
(float)(in[input_pos2]),
(float)(in[input_pos3]));
printf("buffer2image ===> %d,%d,%d, out(%d,%d): %.2f %.2f %.2f %.2f \n", out_c, out_w, out_nh,
output_pos.x, output_pos.y,
(float)(output.x), (float)(output.y), (float)(output.z), (float)(output.w));
printf(
"out_w:%d, out_C - 4 * out_c:%d, input[pos0~pos3]:%.2f %.2f %.2f "
"%.2f\n",
out_w,
out_C - 4 * out_c,
(float)(in[input_pos0]),
(float)(in[input_pos1]),
(float)(in[input_pos2]),
(float)(in[input_pos3]));
printf("buffer2image ===> %d,%d,%d, out(%d,%d): %.2f %.2f %.2f %.2f \n",
out_c,
out_w,
out_nh,
output_pos.x,
output_pos.y,
(float)(output.x),
(float)(output.y),
(float)(output.z),
(float)(output.w));
}
#endif
......@@ -101,34 +109,42 @@ __kernel void image2d_to_buffer(__read_only image2d_t input,
const int in_h = in_nh % in_height;
const sampler_t sampler =
CLK_NORMALIZED_COORDS_TRUE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
CLK_NORMALIZED_COORDS_TRUE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
const int pos_x = mad24(in_c, in_width, in_w);
CL_COMPUTE_DTYPE4 in = READ_IMG_TYPE(CL_COMPUTE_DTYPE_CHAR, input, sampler, (int2)(pos_x, in_nh));
CL_COMPUTE_DTYPE4 in = READ_IMG_TYPE(
CL_COMPUTE_DTYPE_CHAR, input, sampler, (int2)(pos_x, in_nh));
#ifdef DEBUG
if (in_w > 2045) {
printf("image2buffer ===> %d,%d,%d, in(%d,%d): %.2f %.2f %.2f %.2f \n", in_c, in_w, in_nh,
pos_x, in_nh,
(float)(in.x), (float)(in.y), (float)(in.z), (float)(in.w));
printf("image2buffer ===> %d,%d,%d, in(%d,%d): %.2f %.2f %.2f %.2f \n",
in_c,
in_w,
in_nh,
pos_x,
in_nh,
(float)(in.x),
(float)(in.y),
(float)(in.z),
(float)(in.w));
}
#endif
const int index = in_n * size_batch + in_c * size_block + in_h * in_width + in_w;
const int index =
in_n * size_batch + in_c * size_block + in_h * in_width + in_w;
out[index] = CONVERT_TYPE_TO(in.x, CL_DTYPE);
if (C - 4 * in_c >= 2) {
out[index + size_ch] = CONVERT_TYPE_TO(in.y, CL_DTYPE);
}
if(C - 4 * in_c >= 3) {
if (C - 4 * in_c >= 3) {
out[index + size_ch * 2] = CONVERT_TYPE_TO(in.z, CL_DTYPE);
}
if(C - 4 * in_c >= 4) {
if (C - 4 * in_c >= 4) {
out[index + size_ch * 3] = CONVERT_TYPE_TO(in.w, CL_DTYPE);
}
}
#if 0 // NOTE(ysh329): keep, un-used from paddle-mobile
#if 0 // NOTE(ysh329): keep, un-used from paddle-mobile
////////////////////////////////////////////////////////
// buffer -> image2d_nw
////////////////////////////////////////////////////////
......@@ -182,8 +198,7 @@ __kernel void buffer_to_image2d_nw(__global CL_DTYPE* in,
}
#endif
#if 0 // NOTE(ysh329): keep, un-used from paddle-mobile
#if 0 // NOTE(ysh329): keep, un-used from paddle-mobile
// image2d -> buffer
__kernel void image2d_to_buffer_2d(__private const int in_height,
__private const int in_width,
......@@ -208,15 +223,14 @@ __kernel void image2d_to_buffer_2d(__private const int in_height,
////////////////////////////////////////////////////////
// buffer -> image2d (divide by 255 to normalize)
////////////////////////////////////////////////////////
__kernel void buffer_to_image2d_with_pre255(__global uchar *in,
__kernel void buffer_to_image2d_with_pre255(__global uchar* in,
__write_only image2d_t output_image,
__private const int out_H,
__private const int out_W,
__private const int out_C,
__private const int Stride0,
__private const int Stride1,
__private const int Stride2){
__private const int Stride2) {
const int out_c = get_global_id(0);
const int out_w = get_global_id(1);
const int out_nh = get_global_id(2);
......@@ -231,7 +245,6 @@ __kernel void buffer_to_image2d_with_pre255(__global uchar *in,
const int in_h = out_h;
const int in_w = out_w;
int input_pos0 = in_n * Stride2 + in_c0 * Stride1 + in_h * Stride0 + in_w;
int input_pos1 = in_n * Stride2 + in_c1 * Stride1 + in_h * Stride0 + in_w;
int input_pos2 = in_n * Stride2 + in_c2 * Stride1 + in_h * Stride0 + in_w;
......@@ -243,30 +256,29 @@ __kernel void buffer_to_image2d_with_pre255(__global uchar *in,
CL_COMPUTE_DTYPE4 output = (CL_COMPUTE_DTYPE4)0.0f;
output.x = CONVERT_TYPE_TO(in[input_pos0], CL_COMPUTE_DTYPE) / 255;
if(out_C - 4 * out_c>=2){
output.y = CONVERT_TYPE_TO(in[input_pos1], CL_COMPUTE_DTYPE) / 255;
if (out_C - 4 * out_c >= 2) {
output.y = CONVERT_TYPE_TO(in[input_pos1], CL_COMPUTE_DTYPE) / 255;
}
if(out_C - 4 * out_c>=3){
output.z = CONVERT_TYPE_TO(in[input_pos2], CL_COMPUTE_DTYPE) / 255;
if (out_C - 4 * out_c >= 3) {
output.z = CONVERT_TYPE_TO(in[input_pos2], CL_COMPUTE_DTYPE) / 255;
}
if(out_C - 4 * out_c>=4){
output.w = CONVERT_TYPE_TO(in[input_pos3], CL_COMPUTE_DTYPE) / 255;
if (out_C - 4 * out_c >= 4) {
output.w = CONVERT_TYPE_TO(in[input_pos3], CL_COMPUTE_DTYPE) / 255;
}
WRITE_IMG_TYPE(CL_COMPUTE_DTYPE_CHAR, output_image, output_pos, output);
}
////////////////////////////////////////////////////////
// image2d -> buffer (multiply by 255 to de-normalize)
////////////////////////////////////////////////////////
__kernel void image2d_to_buffer_with_post255(__read_only image2d_t input,
__private const int in_width,
__private const int in_height,
__global uchar* out,
__private const int size_ch,
__private const int size_block,
__private const int size_batch,
__private const int C) {
__private const int in_width,
__private const int in_height,
__global uchar* out,
__private const int size_ch,
__private const int size_block,
__private const int size_batch,
__private const int C) {
const int in_c = get_global_id(0);
const int in_w = get_global_id(1);
const int in_nh = get_global_id(2);
......@@ -277,22 +289,34 @@ __kernel void image2d_to_buffer_with_post255(__read_only image2d_t input,
CLK_NORMALIZED_COORDS_TRUE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
const int pos_x = mad24(in_c, in_width, in_w);
CL_COMPUTE_DTYPE4 in = READ_IMG_TYPE(CL_COMPUTE_DTYPE_CHAR, input, sampler, (int2)(pos_x, in_nh)) * 255;
CL_COMPUTE_DTYPE4 in =
READ_IMG_TYPE(
CL_COMPUTE_DTYPE_CHAR, input, sampler, (int2)(pos_x, in_nh)) *
255;
#ifdef DEBUG
printf("in_c:%d, in_w:%d, in_nh:%d ===> in(%d,%d): %.2f %.2f %.2f %.2f\n",
in_c, in_w, in_nh, pos_x, in_nh, in.x, in.y, in.z, in.w);
in_c,
in_w,
in_nh,
pos_x,
in_nh,
in.x,
in.y,
in.z,
in.w);
#endif
const int index = in_n * size_batch + in_c * size_block + in_h * in_width + in_w;
const int index =
in_n * size_batch + in_c * size_block + in_h * in_width + in_w;
out[index] = convert_uchar_sat(in.x);
if(C - 4 * in_c>=2){
if (C - 4 * in_c >= 2) {
out[index + size_ch] = convert_uchar_sat(in.y);
}
if(C - 4 * in_c>=3){
if (C - 4 * in_c >= 3) {
out[index + size_ch * 2] = convert_uchar_sat(in.z);
}
if(C - 4 * in_c>=4){
if (C - 4 * in_c >= 4) {
out[index + size_ch * 3] = convert_uchar_sat(in.w);
}
}
......@@ -45,6 +45,9 @@ bool CLRuntime::Init() {
bool is_device_init = InitializeDevice();
is_init_success_ = is_platform_init && is_device_init;
initialized_ = true;
context_ = CreateContext();
command_queue_ = CreateCommandQueue(context());
return initialized_;
}
......@@ -55,7 +58,7 @@ cl::Platform& CLRuntime::platform() {
cl::Context& CLRuntime::context() {
if (context_ == nullptr) {
context_ = CreateContext();
LOG(FATAL) << "context_ create failed. ";
}
return *context_;
}
......@@ -67,7 +70,7 @@ cl::Device& CLRuntime::device() {
cl::CommandQueue& CLRuntime::command_queue() {
if (command_queue_ == nullptr) {
command_queue_ = CreateCommandQueue(context());
LOG(FATAL) << "command_queue_ create failed. ";
}
return *command_queue_;
}
......@@ -96,7 +99,7 @@ std::unique_ptr<cl::UserEvent> CLRuntime::CreateEvent(
bool CLRuntime::BuildProgram(cl::Program* program, const std::string& options) {
/* -I +CLRuntime::Global()->cl_path() + "/cl_kernel"*/
std::string build_option = options + " -cl-fast-relaxed-math ";
std::string build_option = options + " -cl-fast-relaxed-math -cl-mad-enable";
VLOG(4) << "OpenCL build_option: " << build_option;
status_ = program->build({*device_}, build_option.c_str());
CL_CHECK_ERROR(status_);
......@@ -126,6 +129,26 @@ bool CLRuntime::InitializePlatform() {
return true;
}
GpuType CLRuntime::ParseGpuTypeFromDeviceName(std::string device_name) {
const std::string kMALI_PATTERN_STR = "Mali";
const std::string kADRENO_PATTERN_STR = "QUALCOMM Adreno(TM)";
const std::string kPOWERVR_PATTERN_STR = "PowerVR";
if (device_name == kADRENO_PATTERN_STR) {
LOG(INFO) << "adreno gpu";
return GpuType::QUALCOMM_ADRENO;
} else if (device_name.find(kMALI_PATTERN_STR) != std::string::npos) {
LOG(INFO) << "mali gpu";
return GpuType::ARM_MALI;
} else if (device_name.find(kPOWERVR_PATTERN_STR) != std::string::npos) {
LOG(INFO) << "powerVR gpu";
return GpuType::IMAGINATION_POWERVR;
} else {
LOG(INFO) << "others gpu";
return GpuType::UNKNOWN;
}
}
bool CLRuntime::InitializeDevice() {
// ===================== BASIC =====================
// CL_DEVICE_TYPE_GPU
......@@ -145,6 +168,7 @@ bool CLRuntime::InitializeDevice() {
auto device_name = device_->getInfo<CL_DEVICE_NAME>();
LOG(INFO) << "Using device: " << device_name;
gpu_type_ = ParseGpuTypeFromDeviceName(device_name);
cl_device_type device_type = device_->getInfo<CL_DEVICE_TYPE>();
auto device_type_to_str = [](cl_device_type t) -> std::string {
......@@ -293,5 +317,53 @@ std::map<std::string, size_t>& CLRuntime::GetDeviceInfo() {
return device_info_;
}
void CLRuntime::GetAdrenoContextProperties(
std::vector<cl_context_properties>* properties,
GPUPerfMode gpu_perf_mode,
GPUPriorityLevel gpu_priority_level) {
CHECK(properties) << "cl_context_properties is nullptr";
properties->reserve(5);
switch (gpu_perf_mode) {
case GPUPerfMode::PERF_LOW:
LOG(INFO) << "GPUPerfMode::PERF_LOW";
properties->push_back(CL_CONTEXT_PERF_MODE_QCOM);
properties->push_back(CL_PERF_MODE_LOW_QCOM);
break;
case GPUPerfMode::PERF_NORMAL:
LOG(INFO) << "GPUPerfMode::PERF_NORMAL";
properties->push_back(CL_CONTEXT_PERF_MODE_QCOM);
properties->push_back(CL_PERF_MODE_NORMAL_QCOM);
break;
case GPUPerfMode::PERF_HIGH:
LOG(INFO) << "GPUPerfMode::PERF_HIGH";
properties->push_back(CL_CONTEXT_PERF_MODE_QCOM);
properties->push_back(CL_PERF_MODE_HIGH_QCOM);
break;
default:
break;
}
switch (gpu_priority_level) {
case GPUPriorityLevel::PRIORITY_LOW:
LOG(INFO) << "GPUPriorityLevel::PRIORITY_LOW";
properties->push_back(CL_CONTEXT_PRIORITY_LEVEL_QCOM);
properties->push_back(CL_PRIORITY_HINT_LOW_QCOM);
break;
case GPUPriorityLevel::PRIORITY_NORMAL:
LOG(INFO) << "GPUPriorityLevel::PRIORITY_NORMAL";
properties->push_back(CL_CONTEXT_PRIORITY_LEVEL_QCOM);
properties->push_back(CL_PRIORITY_HINT_NORMAL_QCOM);
break;
case GPUPriorityLevel::PRIORITY_HIGH:
LOG(INFO) << "GPUPriorityLevel::PRIORITY_HIGH";
properties->push_back(CL_CONTEXT_PRIORITY_LEVEL_QCOM);
properties->push_back(CL_PRIORITY_HINT_HIGH_QCOM);
break;
default:
break;
}
// The properties list should be terminated with 0
properties->push_back(0);
}
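A hedged sketch of how these hints might be consumed (assumption: CreateContext builds the property list and forwards it to the cl::Context constructor, which accepts a null-terminated cl_context_properties array; written as if inside a CLRuntime member function):
std::vector<cl_context_properties> properties;
GetAdrenoContextProperties(
    &properties, GPUPerfMode::PERF_HIGH, GPUPriorityLevel::PRIORITY_HIGH);
std::vector<cl::Device> devices{device()};
cl::Context context(devices, properties.data());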
} // namespace lite
} // namespace paddle
...... (diff for this file is collapsed)
......@@ -32,7 +32,7 @@ const char* opencl_error_to_str(cl_int error);
__FILE__, \
__LINE__); \
}
#ifndef LITE_SHUTDOWN_LOG
#ifdef LITE_WITH_LOG
#define CL_CHECK_FATAL(err_code__) \
if (err_code__ != CL_SUCCESS) { \
LOG(FATAL) << string_format( \
......
......@@ -66,7 +66,8 @@ void *TargetWrapperCL::MallocImage<float>(const size_t cl_image2d_width,
cl_int status;
cl::Image2D *cl_image =
new cl::Image2D(CLRuntime::Global()->context(),
CL_MEM_READ_WRITE | (host_ptr ? CL_MEM_COPY_HOST_PTR : 0),
CL_MEM_READ_WRITE | (host_ptr ? CL_MEM_COPY_HOST_PTR
: CL_MEM_ALLOC_HOST_PTR),
img_format,
cl_image2d_width,
cl_image2d_height,
......@@ -89,7 +90,8 @@ void *TargetWrapperCL::MallocImage<uint16_t>(const size_t cl_image2d_width,
cl_int status;
cl::Image2D *cl_image =
new cl::Image2D(CLRuntime::Global()->context(),
CL_MEM_READ_WRITE | (host_ptr ? CL_MEM_COPY_HOST_PTR : 0),
CL_MEM_READ_WRITE | (host_ptr ? CL_MEM_COPY_HOST_PTR
: CL_MEM_ALLOC_HOST_PTR),
img_format,
cl_image2d_width,
cl_image2d_height,
......@@ -112,7 +114,8 @@ void *TargetWrapperCL::MallocImage<int32_t>(const size_t cl_image2d_width,
cl_int status;
cl::Image2D *cl_image =
new cl::Image2D(CLRuntime::Global()->context(),
CL_MEM_READ_WRITE | (host_ptr ? CL_MEM_COPY_HOST_PTR : 0),
CL_MEM_READ_WRITE | (host_ptr ? CL_MEM_COPY_HOST_PTR
: CL_MEM_ALLOC_HOST_PTR),
img_format,
cl_image2d_width,
cl_image2d_height,
......@@ -192,7 +195,6 @@ void TargetWrapperCL::MemcpySync(void *dst,
size_t size,
IoDirection dir) {
cl_int status;
cl::Event event;
auto stream = CLRuntime::Global()->command_queue();
switch (dir) {
case IoDirection::DtoD:
......@@ -202,9 +204,9 @@ void TargetWrapperCL::MemcpySync(void *dst,
0,
size,
nullptr,
&event);
nullptr);
CL_CHECK_FATAL(status);
event.wait();
CLRuntime::Global()->command_queue().finish();
break;
case IoDirection::HtoD:
status = stream.enqueueWriteBuffer(*static_cast<cl::Buffer *>(dst),
......@@ -283,7 +285,6 @@ void TargetWrapperCL::ImgcpySync(void *dst,
cl::array<size_t, 3> origin = {0, 0, 0};
cl::array<size_t, 3> region = {cl_image2d_width, cl_image2d_height, 1};
cl_int status;
cl::Event event;
auto stream = CLRuntime::Global()->command_queue();
switch (dir) {
case IoDirection::DtoD:
......@@ -293,9 +294,9 @@ void TargetWrapperCL::ImgcpySync(void *dst,
origin,
region,
nullptr,
&event);
nullptr);
CL_CHECK_FATAL(status);
event.wait();
CLRuntime::Global()->command_queue().finish();
break;
case IoDirection::HtoD:
status = stream.enqueueWriteImage(*static_cast<cl::Image2D *>(dst),
......
......@@ -129,8 +129,7 @@ struct RowwiseAdd<lite::TargetType::kX86, T> {
T* output_data = output->template mutable_data<T>();
for (int64_t i = 0; i < in_dims[0]; ++i) {
for (int64_t j = 0; j < size; ++j) {
output_data[i * in_dims[0] + j] =
input_data[i * in_dims[0] + j] + vector_data[j];
output_data[i * size + j] = input_data[i * size + j] + vector_data[j];
}
}
}
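Note on the indexing fix above: it matters whenever the input is not square. For in_dims = {2, 3} (so size = 3), element (i, j) = (1, 2) of the row-major tensor lives at offset 1 * 3 + 2 = 5, while the old expression i * in_dims[0] + j evaluated to 1 * 2 + 2 = 4 and addressed the wrong element.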
......
...... (diffs for the remaining files are collapsed)