Inference Acceleration with Mobile GPU
Created by: hedaoyuan
Mobile GPU
Currently, most mainstream mobile phones ship with a GPU, and mobile GPU performance has improved greatly in recent years. As you can see from the data in these links, Adreno 540 vs Adreno 530 vs Adreno 430, Adreno 430 vs Adreno 420, the Adreno 430 has a 30% performance increase over the Adreno 420, the Adreno 530 has a 30%-40% performance increase over the Adreno 430, and the Adreno 540 (released in Q2 2017) has a 30%-40% performance increase over the Adreno 530. The same trend can be seen on the Adreno WIKI.
In addition, mobile GPUs have also improved greatly in the kind of computational performance that deep learning needs. As the example Matrix Multiply on Adreno GPUs shows, with OpenCL-based matrix multiplication optimization, a 1024x1024 matrix multiplication takes 44 ms, 38 ms, and 23 ms on the Adreno 420, Adreno 430, and Adreno 530, respectively. And with the Snapdragon NPE's GPU acceleration, some cases achieve 5x better performance on the Adreno GPU compared to a generic CPU implementation.
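For illustration, below is a minimal, unoptimized OpenCL matrix-multiplication kernel of the kind the Adreno example above starts from. The cited timings come from versions with further optimizations (local-memory tiling, vectorized loads), so this is only a baseline sketch; the kernel name `matmul_naive` is our own.

```c
// Naive OpenCL C kernel computing C = A * B for N x N row-major
// matrices, one work-item per output element. Illustrative baseline
// only; optimized Adreno kernels add tiling and vectorization.
__kernel void matmul_naive(const int N,
                           __global const float* A,
                           __global const float* B,
                           __global float* C) {
    int col = get_global_id(0);
    int row = get_global_id(1);
    if (row >= N || col >= N) return;

    float acc = 0.0f;
    for (int k = 0; k < N; ++k) {
        acc += A[row * N + k] * B[k * N + col];
    }
    C[row * N + col] = acc;
}
```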
Why OpenCL
We consider using OpenCL to support the GPU on Android, mainly for the following reasons (a host-side sketch follows the list):
- OpenCL is based on standard C/C++ and does not rely on a special compiler.
- All mainstream GPUs support development based on OpenCL, and OpenCL is a mature solution.
- A framework (wrapper) developed on top of OpenCL can also be used to support GPUs on the server (e.g., AMD GPUs) for model training acceleration.
- Using OpenCL allows direct interoperation with other OpenCL-based libraries (such as Eigen and the ARM Compute Library).
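The sketch below shows the host-side setup this implies: plain C against the standard OpenCL API, with the kernel compiled at run time by the vendor driver, so the same code path works on a mobile Adreno GPU or a server AMD GPU. This is a minimal illustration, not the framework's actual code; error handling is trimmed for brevity.

```c
/* Minimal OpenCL host-side setup in plain C. No special compiler is
 * needed: an ordinary C toolchain builds this file, and the kernel
 * source is compiled at run time by whichever vendor driver is found. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    /* Pick the first platform and the first GPU device on it. */
    err = clGetPlatformIDs(1, &platform, NULL);
    err |= clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL GPU device found\n");
        return 1;
    }

    char name[128];
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("using device: %s\n", name);  /* e.g. an Adreno or AMD GPU */

    /* Context and command queue: the same code works for any
     * OpenCL-capable GPU, which is the portability argument above. */
    cl_context context = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(context, device, 0, &err);

    /* Kernels are built from source at run time by the driver. */
    const char* src =
        "__kernel void scale(__global float* x, float a) {"
        "  x[get_global_id(0)] *= a;"
        "}";
    cl_program program =
        clCreateProgramWithSource(context, 1, &src, NULL, &err);
    err = clBuildProgram(program, 1, &device, NULL, NULL, NULL);

    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(context);
    return 0;
}
```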