AKG (Auto Kernel Generator) is an optimizer for operators in deep learning networks. It can automatically fuse ops that match specific patterns. AKG works with MindSpore GraphKernel to improve the performance of networks running on different hardware backends.
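As a schematic illustration of operator fusion (plain NumPy standing in for generated kernels, not AKG's actual API), a `mul -> add` chain that would otherwise run as two kernels and materialize an intermediate buffer collapses into a single pass over the data:

```python
import numpy as np

def scale_np(x, s):      # standalone op: writes an intermediate buffer
    return x * s

def add_np(x, y):        # standalone op: reads that buffer back
    return x + y

def fused_scale_add(x, y, s):
    # Fused form: one pass over the data, no intermediate buffer.
    # This is what a mul -> add pattern collapses into after fusion.
    out = np.empty_like(x)
    for i in range(x.size):
        out.flat[i] = x.flat[i] * s + y.flat[i]
    return out

x = np.ones((4, 4), dtype=np.float32)
y = np.ones((4, 4), dtype=np.float32)
assert np.allclose(fused_scale_add(x, y, 2.0), add_np(scale_np(x, 2.0), y))
```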
AKG consists of four basic optimization modules: normalization, auto schedule, instruction emit, and backend optimization.
- **normalization.** To work around the limited expressiveness of the polyhedral model (which can only handle static linear programs), the computation IR is normalized first. The main optimizations of the normalization module include auto-inline, loop partition, common subexpression elimination, and so on.
- **auto schedule.** Based on polyhedral technology, the auto schedule module mainly provides auto-vectorization, auto-tiling, dependency analysis, and memory promotion (a tiling sketch follows this list).
- **instruction emit.** The instruction emitting module covers loop normalization, auto pragma insertion, and instruction emission.
- **backend optimization.** The backend optimization module consists of double-buffer optimization, storage rewrite optimization, and sync injection optimization.
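As a minimal sketch of what auto-tiling means (plain Python standing in for generated code; the function name and tile size are illustrative, not AKG output), a matrix-multiply loop nest is split into tiles so that each tile's working set can live in fast local memory:

```python
import numpy as np

def matmul_tiled(a, b, tile=32):
    # Illustrative tiled loop nest: auto-tiling picks tile sizes so each
    # tile's working set fits the target's local memory, and memory
    # promotion then stages tiles in that faster buffer.
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    c = np.zeros((m, n), dtype=a.dtype)
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            for k0 in range(0, k, tile):
                # One tile-sized block of work; the innermost loops are
                # what auto-vectorization maps to vector instructions.
                c[i0:i0+tile, j0:j0+tile] += (
                    a[i0:i0+tile, k0:k0+tile] @ b[k0:k0+tile, j0:j0+tile]
                )
    return c

a = np.random.rand(64, 64).astype(np.float32)
b = np.random.rand(64, 64).astype(np.float32)
assert np.allclose(matmul_tiled(a, b), a @ b, atol=1e-4)
```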
...
...
See [MindSpore README.md](https://gitee.com/mindspore/mindspore/blob/master/README.md).
We suggest building and running AKG together with MindSpore. For convenience, we also provide a standalone mode for running test cases.
An Ascend platform is needed for this mode. Refer to [MindSpore Installation](https://www.mindspore.cn/install/en) for more information about compilation dependencies.
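When running together with MindSpore, graph-kernel fusion (which hands fused subgraphs to AKG for compilation) is switched on from the MindSpore side. A minimal sketch, assuming a MindSpore version in which `set_context` accepts the `enable_graph_kernel` flag:

```python
import numpy as np
from mindspore import context, nn, Tensor

# Enable graph-kernel fusion; fused patterns are compiled by AKG.
# (enable_graph_kernel is assumed to be supported by this MindSpore version.)
context.set_context(mode=context.GRAPH_MODE,
                    device_target="Ascend",
                    enable_graph_kernel=True)

class ScaleAdd(nn.Cell):
    """Elementwise mul + add: a typical fusible pattern."""
    def construct(self, x, y):
        return x * 0.5 + y

net = ScaleAdd()
x = Tensor(np.ones((16, 16), dtype=np.float32))
y = Tensor(np.ones((16, 16), dtype=np.float32))
print(net(x, y).shape)
```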