<div align="center">
<img src="docs/mace-logo.png" width = "400" alt="MACE" />
</div>


[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![build status](http://v9.git.n.xiaomi.com/deep-computing/mace/badges/master/build.svg)](http://v9.git.n.xiaomi.com/deep-computing/mace/pipelines)

[Documentation](docs) |
[FAQ](docs/faq.md) |
[Release Notes](RELEASE.md) |
[Roadmap](ROADMAP.md) |
[MACE Model Zoo](https://github.com/XiaoMi/mace-models) |
[Demo](mace/examples/android) |
[中文](README_zh.md)

**Mobile AI Compute Engine** (or **MACE** for short) is a deep learning inference framework optimized for
mobile heterogeneous computing platforms. The design focuses on the following
targets:
* Performance
  * The runtime is highly optimized with NEON, OpenCL and Hexagon, and the
    [Winograd algorithm](https://arxiv.org/abs/1509.09308) is used to speed up
    convolution operations. Besides fast inference, the initialization stage
    is also heavily optimized.
* Power consumption
  * Chip-dependent power options such as big.LITTLE scheduling and Adreno GPU
    hints are exposed as advanced APIs.
* Responsiveness
  * Guaranteeing UI responsiveness is sometimes mandatory when running a model.
    Mechanisms such as automatically breaking OpenCL kernels into small units
    are introduced to allow better preemption for the UI rendering task (see
    the OpenCL tiling sketch after this list).
* Memory usage and library footprint
  * Graph-level memory allocation optimization and buffer reuse are supported
    (see the buffer-reuse sketch after this list). The core library keeps
    external dependencies to a minimum so that the library footprint stays
    small.
* Model protection
  * Model protection has been a top-priority feature since the beginning of
    the design. Various techniques are introduced, such as converting models
    to C++ code and literal obfuscation.
* Platform coverage
  * Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based
    chips. The CPU runtime is also compatible with most POSIX systems and
    architectures, with limited performance.
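
To make the responsiveness mechanism above concrete, below is a minimal,
hypothetical sketch (not MACE's actual code) of splitting a large 2-D OpenCL
NDRange into smaller tiles via the global work offset, giving the GPU driver
natural points to schedule UI rendering work in between. The function name,
tile sizes and error handling are illustrative; it assumes an OpenCL 1.1+
device and a `cl_command_queue`/`cl_kernel` created elsewhere.

```cpp
// Illustrative sketch only: enqueue a 2-D kernel tile by tile so that the
// command queue is never occupied by one long-running kernel instance.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>

#include <algorithm>

cl_int EnqueueInTiles(cl_command_queue queue, cl_kernel kernel,
                      size_t gws0, size_t gws1,      // full global work size
                      size_t tile0, size_t tile1) {  // tile size per enqueue
  for (size_t off0 = 0; off0 < gws0; off0 += tile0) {
    for (size_t off1 = 0; off1 < gws1; off1 += tile1) {
      const size_t offset[2] = {off0, off1};
      const size_t size[2] = {std::min(tile0, gws0 - off0),
                              std::min(tile1, gws1 - off1)};
      cl_int err = clEnqueueNDRangeKernel(queue, kernel,
                                          /*work_dim=*/2,
                                          offset,    // global_work_offset
                                          size,      // global_work_size
                                          nullptr,   // driver-chosen local size
                                          0, nullptr, nullptr);
      if (err != CL_SUCCESS) return err;
      clFlush(queue);  // submit this tile; small batches improve preemption
    }
  }
  return clFinish(queue);  // wait for the last tile to complete
}
```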
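
The graph-level buffer reuse mentioned above can likewise be illustrated with a
generic greedy planner. This is not MACE's implementation, just a sketch of the
idea that intermediate tensors whose live intervals (over the topologically
ordered ops) do not overlap can share one underlying buffer; all names here are
illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

struct Tensor {
  std::size_t bytes;  // required size
  int first_use;      // index of the op that produces it
  int last_use;       // index of the last op that reads it
};

struct Buffer {
  std::size_t bytes;  // allocated size (grown to its largest user)
  int free_after;     // op index after which the buffer may be reused
};

// Greedy assignment: walk tensors in execution order and reuse the first
// buffer whose previous user is already dead; otherwise allocate a new one.
std::vector<int> PlanBuffers(const std::vector<Tensor>& tensors,
                             std::vector<Buffer>* buffers) {
  std::vector<int> assignment(tensors.size());
  for (std::size_t i = 0; i < tensors.size(); ++i) {
    int chosen = -1;
    for (std::size_t b = 0; b < buffers->size(); ++b) {
      if ((*buffers)[b].free_after < tensors[i].first_use) {
        chosen = static_cast<int>(b);
        break;
      }
    }
    if (chosen < 0) {
      buffers->push_back({0, -1});
      chosen = static_cast<int>(buffers->size()) - 1;
    }
    Buffer& buf = (*buffers)[chosen];
    buf.bytes = std::max(buf.bytes, tensors[i].bytes);
    buf.free_after = tensors[i].last_use;
    assignment[i] = chosen;
  }
  return assignment;
}

int main() {
  // Three intermediate tensors of a toy graph: the first and third are never
  // alive at the same time, so they end up sharing one buffer.
  std::vector<Tensor> tensors = {{1 << 20, 0, 1}, {2 << 20, 1, 2}, {1 << 20, 2, 3}};
  std::vector<Buffer> buffers;
  std::vector<int> assignment = PlanBuffers(tensors, &buffers);
  for (std::size_t i = 0; i < tensors.size(); ++i)
    std::cout << "tensor " << i << " -> buffer " << assignment[i] << "\n";
  std::cout << "buffers allocated: " << buffers.size() << "\n";
  return 0;
}
```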

## Getting Started
* [Introduction](docs/getting_started/introduction.rst)
* [How to build](docs/getting_started/how_to_build.rst)
* [Create a model deployment file](docs/getting_started/create_a_model_deployment.rst)

## Performance
[MACE Model Zoo](https://github.com/XiaoMi/mace-models) contains
several common neural networks and models, which are built daily against a list
of mobile phones. The benchmark results can be found on the CI result page.

## Communication
* GitHub issues: bug reports, usage issues, feature requests
* Mailing list: [mace-users@googlegroups.com](mailto:mace-users@googlegroups.com)
* Google groups: https://groups.google.com/forum/#!forum/mace-users
* QQ Group: 756046893

## Contributing
Any kind of contribution is welcome. For bug reports and feature requests,
please just open an issue without hesitation. For code contributions, it is
strongly suggested to open an issue for discussion first. For more details,
please refer to [the contribution guide](docs/development/contributing.md).

## License
[Apache License 2.0](LICENSE).

## Acknowledgement
MACE depends on several open source projects located in the
[third_party](third_party) directory. In particular, we learned a lot from
the following projects during development:
* [Qualcomm Hexagon NN Offload Framework](https://source.codeaurora.org/quic/hexagon_nn/nnlib): the Hexagon DSP runtime
  depends on this library.
* [TensorFlow](https://github.com/tensorflow/tensorflow),
  [Caffe](https://github.com/BVLC/caffe),
  [SNPE](https://developer.qualcomm.com/software/snapdragon-neural-processing-engine-ai),
  [ARM ComputeLibrary](https://github.com/ARM-software/ComputeLibrary),
  [ncnn](https://github.com/Tencent/ncnn) and many others: we learned many best
  practices from these projects.

Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams for
their help.