# MiAI Compute Engine
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![build status](http://v9.git.n.xiaomi.com/deep-computing/mace/badges/master/build.svg)](http://v9.git.n.xiaomi.com/deep-computing/mace/pipelines)

[Documentation](docs) |
[FAQ](docs/faq.md) |
[Release Notes](RELEASE.md) |
[MiAI Model Zoo](http://v9.git.n.xiaomi.com/deep-computing/mace-models) |
[Demo](mace/android)

**MiAI Compute Engine** is a deep learning inference framework optimized for
mobile heterogeneous computing platforms. The design is focused on the following
targets:
* Performance
  * The runtime is highly optimized with NEON, OpenCL and Hexagon. In addition to
    the inference speed, the initialization time is also intensively optimized.
* Power consumption
  * Chip-dependent power options are included as an advanced API.
* Memory usage and library footprint
  * Graph-level memory allocation optimization and buffer reuse are supported.
* Model protection
  * Model protection has been one of the highest-priority features from the
    beginning of the design. Various techniques are introduced, such as
    converting models to C++ code and literal obfuscation (see the usage
    sketch after this list).
* Platform coverage
  * Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based
    chips. The CPU runtime is also compatible with most POSIX systems and
    architectures, with limited performance.
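
The bullets above sketch how the engine is typically used from C++: the
converted model is compiled into the application and an inference engine is
created for one of the heterogeneous runtimes. The snippet below is only an
illustration of that flow: the embedded-model symbols are hypothetical
placeholders, and the `CreateMaceEngineFromProto` / `MaceEngine::Run` names and
argument lists are assumptions about the public API rather than its definitive
form; see the build documentation for the authoritative usage.

```cpp
// Illustrative sketch only: the generated symbols and the exact public API
// signatures below are assumptions, not verbatim MACE code.
#include <algorithm>
#include <cstddef>
#include <map>
#include <memory>
#include <string>
#include <vector>

#include "mace/public/mace.h"          // assumed public header location
#include "mace/public/mace_runtime.h"  // assumed public header location

// Model protection: the converter can emit the serialized graph (and weights)
// as C++ data, so no plain model file ships with the application. The two
// symbols below are hypothetical placeholders for that generated code.
extern const std::vector<unsigned char> kModelGraphProto;
extern const std::string kModelWeightsFile;

std::vector<float> RunOnce(const std::vector<float> &input_data) {
  const std::vector<std::string> input_names = {"input"};
  const std::vector<std::string> output_names = {"output"};

  // Heterogeneous runtime selection: CPU (NEON), GPU (OpenCL) or
  // HEXAGON (DSP), depending on the target SoC.
  std::shared_ptr<mace::MaceEngine> engine;
  mace::MaceStatus status = mace::CreateMaceEngineFromProto(
      kModelGraphProto, kModelWeightsFile, input_names, output_names,
      mace::DeviceType::GPU, &engine);
  if (status != mace::MaceStatus::MACE_SUCCESS) {
    return {};
  }

  // Wrap input/output buffers; a MaceTensor holds a shape and a float buffer.
  const size_t output_size = 1000;  // hypothetical output size
  auto in_buf = std::shared_ptr<float>(new float[input_data.size()],
                                       std::default_delete<float[]>());
  std::copy(input_data.begin(), input_data.end(), in_buf.get());
  auto out_buf = std::shared_ptr<float>(new float[output_size],
                                        std::default_delete<float[]>());

  std::map<std::string, mace::MaceTensor> inputs;
  std::map<std::string, mace::MaceTensor> outputs;
  inputs["input"] = mace::MaceTensor({1, 224, 224, 3}, in_buf);
  outputs["output"] = mace::MaceTensor({1, 1000}, out_buf);

  engine->Run(inputs, &outputs);
  return std::vector<float>(out_buf.get(), out_buf.get() + output_size);
}
```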

## Getting Started
* [Introduction](docs/getting_started/introduction.rst)
* [How to build](docs/getting_started/how_to_build.rst)
* [Create a model deployment file](docs/getting_started/create_a_model_deployment.rst)

## Performance
[MiAI Compute Engine Model Zoo](http://v9.git.n.xiaomi.com/deep-computing/mace-models) contains
several common neural network models and is built daily against a list of mobile
phones. The benchmark results can be found in the CI result page.

## Communication
* GitHub issues: bug reports, usage issues, feature requests
* Gitter:
* QQ Group: 756046893

## Contributing
Any kind of contribution is welcome. For bug reports and feature requests,
please feel free to open an issue. For code contributions, it is strongly
suggested to open an issue for discussion first. For more details,
please refer to [the contribution guide](docs/development/contributing.md).

## License
[Apache License 2.0](LICENSE).

## Acknowledgement
MiAI Compute Engine depends on several open source projects located in the
[third_party](mace/third_party) directory. In particular, we learned a lot from
the following projects during the development:
* [Qualcomm Hexagon NN Offload Framework](https://source.codeaurora.org/quic/hexagon_nn/nnlib): the Hexagon DSP runtime
  depends on this library.
* [TensorFlow](https://github.com/tensorflow/tensorflow),
  [Caffe](https://github.com/BVLC/caffe),
  [SNPE](https://developer.qualcomm.com/software/snapdragon-neural-processing-engine-ai),
  [ARM ComputeLibrary](https://github.com/ARM-software/ComputeLibrary),
  [ncnn](https://github.com/Tencent/ncnn) and many others: we learned many best
  practices from these projects.

Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams for
their help.