# Mobile AI Compute Engine
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![build status](http://v9.git.n.xiaomi.com/deep-computing/mace/badges/master/build.svg)](http://v9.git.n.xiaomi.com/deep-computing/mace/pipelines)

[Documentation](docs) |
[FAQ](docs/faq.md) |
[Release Notes](RELEASE.md) |
[MACE Model Zoo](https://github.com/XiaoMi/mace-models) |
[Demo](mace/examples/android) |
[中文](README_zh.md)

**Mobile AI Compute Engine** (or **MACE** for short) is a deep learning inference framework optimized for
mobile heterogeneous computing platforms. The design focuses on the following
targets:
* Performance
  * The runtime is highly optimized with NEON, OpenCL and Hexagon, and the
    [Winograd algorithm](https://arxiv.org/abs/1509.09308) is introduced to
    speed up convolution operations (a worked F(2,3) example follows this list).
    Besides fast inference, the initialization phase is also heavily optimized
    to be faster.
* Power consumption
L
Liangliang He 已提交
21
  * Chip-dependent power options such as big.LITTLE scheduling and Adreno GPU hints are
    included as advanced APIs.
* Responsiveness
  * Guaranteeing UI responsiveness is sometimes mandatory while a model is running.
    Mechanisms such as automatically breaking an OpenCL kernel into small units
    are introduced to allow better preemption for the UI rendering task (see the
    OpenCL sketch after this list).
* Memory usage and library footprint
  * Graph-level memory allocation optimization and buffer reuse are supported.
    The core library keeps external dependencies to a minimum to keep the
    library footprint small.
* Model protection
  * Model protection has been a top-priority feature from the beginning of
    the design. Various techniques are introduced, such as converting models to
    C++ code and literal obfuscation (a small C++ sketch of this idea follows
    the list).
* Platform coverage
  * Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based
    chips. The CPU runtime is also compatible with most POSIX systems and
    architectures, with limited performance.
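
To give a flavor of the Winograd trick mentioned above, here is a minimal,
self-contained sketch of the 1-D F(2,3) case from the cited paper: two
convolution outputs of a 3-tap filter are computed with 4 multiplications
instead of 6. This is only an illustration of the algorithm, not MACE's
implementation; real implementations work on 2-D tiles and precompute the
filter-side factors once per filter.

```cpp
// Illustration of Winograd F(2,3) (Lavin & Gray), not MACE's code:
// two outputs of a 3-tap 1-D convolution with 4 multiplications
// instead of the naive 6.
#include <array>

// d: 4 consecutive inputs, g: 3 filter taps; returns {y0, y1} where
// y0 = d0*g0 + d1*g1 + d2*g2 and y1 = d1*g0 + d2*g1 + d3*g2.
inline std::array<float, 2> WinogradF23(const float d[4], const float g[3]) {
  // In practice the filter-side factors below are precomputed per filter.
  const float m1 = (d[0] - d[2]) * g[0];
  const float m2 = (d[1] + d[2]) * 0.5f * (g[0] + g[1] + g[2]);
  const float m3 = (d[2] - d[1]) * 0.5f * (g[0] - g[1] + g[2]);
  const float m4 = (d[1] - d[3]) * g[2];
  return {m1 + m2 + m3, m2 - m3 - m4};
}
```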
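
The kernel-splitting idea behind the responsiveness target can be sketched with
plain OpenCL host code. This is a generic illustration rather than MACE's
implementation; the helper `EnqueueInTiles` and the tile size are made up, and
only standard OpenCL API calls are used. The point is that many short enqueues
give the driver frequent opportunities to schedule UI rendering work in between.

```cpp
// Illustrative sketch only (not MACE's code): split one large 2-D NDRange
// into small tiles so each enqueue finishes quickly and UI rendering work
// can be scheduled by the driver between tiles.
#include <CL/cl.h>

#include <algorithm>
#include <cstddef>

// Hypothetical helper: run `kernel` over a rows x cols range, tile by tile.
cl_int EnqueueInTiles(cl_command_queue queue, cl_kernel kernel,
                      size_t rows, size_t cols, size_t tile) {
  for (size_t r = 0; r < rows; r += tile) {
    for (size_t c = 0; c < cols; c += tile) {
      const size_t offset[2] = {r, c};
      const size_t size[2] = {std::min(tile, rows - r),
                              std::min(tile, cols - c)};
      // Each small enqueue is a natural preemption point for the GPU.
      cl_int err = clEnqueueNDRangeKernel(queue, kernel, 2, offset, size,
                                          nullptr, 0, nullptr, nullptr);
      if (err != CL_SUCCESS) return err;
      clFlush(queue);  // submit early so other work can interleave
    }
  }
  return CL_SUCCESS;
}
```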

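The model-protection techniques can likewise be sketched in a few lines. The
snippet below is a made-up illustration of the general idea of converting a
model into C++ code with obfuscated literals; the namespace, array names and
the XOR scheme are hypothetical and do not reflect MACE's converter output.

```cpp
// Illustrative sketch only (not MACE's generated code): weights are embedded
// as an obfuscated C++ byte array produced offline, so no plain model file
// ever ships with the app.
#include <cstddef>
#include <cstdint>
#include <vector>

namespace generated_model {  // hypothetical namespace

// Obfuscated weight bytes emitted by a hypothetical offline converter.
const uint8_t kObfuscatedWeights[] = {0x3a, 0x91, 0x7c, 0x05};  // truncated
const uint8_t kKey = 0x5c;  // hypothetical obfuscation key

// De-obfuscate in memory at load time.
inline std::vector<uint8_t> LoadWeights() {
  std::vector<uint8_t> plain(sizeof(kObfuscatedWeights));
  for (std::size_t i = 0; i < plain.size(); ++i) {
    plain[i] = static_cast<uint8_t>(kObfuscatedWeights[i] ^ kKey);
  }
  return plain;
}

}  // namespace generated_model
```
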
## Getting Started
* [Introduction](docs/getting_started/introduction.rst)
* [How to build](docs/getting_started/how_to_build.rst)
* [Create a model deployment file](docs/getting_started/create_a_model_deployment.rst)

## Performance
[MACE Model Zoo](https://github.com/XiaoMi/mace-models) contains
several common neural network models, which are built daily against a list of mobile
phones. The benchmark results can be found on the CI result page.

## Communication
* GitHub issues: bug reports, usage issues, feature requests
* Mailing list: [mace-users@googlegroups.com](mailto:mace-users@googlegroups.com)
* Google groups: https://groups.google.com/forum/#!forum/mace-users
* QQ group: 756046893

## Contributing
Any kind of contribution is welcome. For bug reports and feature requests,
please just open an issue without hesitation. For code contributions, it is
strongly suggested that you open an issue for discussion first. For more details,
please refer to [the contribution guide](docs/development/contributing.md).

## License
[Apache License 2.0](LICENSE).

## Acknowledgement
MACE depends on several open-source projects located in the
[third_party](third_party) directory. In particular, we learned a lot from
the following projects during the development:
* [Qualcomm Hexagon NN Offload Framework](https://source.codeaurora.org/quic/hexagon_nn/nnlib): the Hexagon DSP runtime
  depends on this library.
* [TensorFlow](https://github.com/tensorflow/tensorflow),
  [Caffe](https://github.com/BVLC/caffe),
  [SNPE](https://developer.qualcomm.com/software/snapdragon-neural-processing-engine-ai),
  [ARM ComputeLibrary](https://github.com/ARM-software/ComputeLibrary),
  [ncnn](https://github.com/Tencent/ncnn) and many others: we learned many best
  practices from these projects.

Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams for
their help.