<div align="center">
<img src="docs/mace-logo.png" width="400" alt="MACE" />
</div>


[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![pipeline status](https://gitlab.com/llhe/mace/badges/master/pipeline.svg)](https://gitlab.com/llhe/mace/pipelines)
[![doc build status](https://readthedocs.org/projects/mace/badge/?version=latest)](https://readthedocs.org/projects/mace/badge/?version=latest)

[Documentation](https://mace.readthedocs.io) |
[FAQ](https://mace.readthedocs.io/en/latest/faq.html) |
[Release Notes](RELEASE.md) |
[Roadmap](ROADMAP.md) |
[MACE Model Zoo](https://github.com/XiaoMi/mace-models) |
[Demo](mace/examples/android) |
[中文](README_zh.md)

**Mobile AI Compute Engine** (or **MACE** for short) is a deep learning inference framework optimized for
mobile heterogeneous computing platforms. The design focuses on the following
targets:
* Performance
  * The runtime is highly optimized with NEON, OpenCL and Hexagon, and the
    [Winograd algorithm](https://arxiv.org/abs/1509.09308) is used to speed up
    convolution operations (see the illustrative sketch after this list).
    Besides fast inference, initialization is also intensively optimized.
* Power consumption
  * Chip-dependent power options like big.LITTLE scheduling and Adreno GPU
    hints are included as advanced APIs.
* Responsiveness
  * Guaranteeing UI responsiveness is sometimes obligatory when running a
    model. Mechanisms such as automatically breaking an OpenCL kernel into
    small units are introduced to allow better preemption for the UI
    rendering task.
* Memory usage and library footprint
  * Graph-level memory allocation optimization and buffer reuse are supported.
    The core library keeps external dependencies to a minimum so that the
    library footprint stays small.
* Model protection
  * Model protection has been the highest-priority feature from the beginning
    of the design. Various techniques are introduced, such as converting
    models to C++ code and literal obfuscation.
* Platform coverage
  * Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based
    chips. The CPU runtime is also compatible with most POSIX systems and
    architectures, with limited performance.
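
The Winograd optimization mentioned under *Performance* can be illustrated with
a tiny, self-contained example. The C++ sketch below is an illustration only,
not MACE source code: it computes two outputs of a 1-D convolution with a 3-tap
filter via the Winograd F(2, 3) transform, using 4 multiplications where the
direct method needs 6.

```cpp
// Standalone illustration of Winograd F(2, 3); not MACE source code.
// Two outputs of a 1-D convolution with a 3-tap filter are computed with
// 4 multiplications, where the direct method needs 2 * 3 = 6.
#include <array>
#include <cstdio>

// d: four input samples, g: three filter taps; returns
// y[0] = d0*g0 + d1*g1 + d2*g2 and y[1] = d1*g0 + d2*g1 + d3*g2.
std::array<float, 2> WinogradF23(const std::array<float, 4> &d,
                                 const std::array<float, 3> &g) {
  const float m1 = (d[0] - d[2]) * g[0];
  const float m2 = (d[1] + d[2]) * 0.5f * (g[0] + g[1] + g[2]);
  const float m3 = (d[2] - d[1]) * 0.5f * (g[0] - g[1] + g[2]);
  const float m4 = (d[1] - d[3]) * g[2];
  return {m1 + m2 + m3, m2 - m3 - m4};
}

int main() {
  const auto y = WinogradF23({1.f, 2.f, 3.f, 4.f}, {1.f, 0.f, -1.f});
  std::printf("%.1f %.1f\n", y[0], y[1]);  // prints: -2.0 -2.0
  return 0;
}
```

The 2-D generalization of the same transform (e.g. F(2x2, 3x3)) reduces the
multiplications per output tile from 36 to 16, which is why it pays off for
3x3 convolution layers.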

## Getting Started
* [Introduction](https://mace.readthedocs.io/en/latest/getting_started/introduction.html)
* [Create a model deployment file](https://mace.readthedocs.io/en/latest/getting_started/create_a_model_deployment.html)
* [How to build](https://mace.readthedocs.io/en/latest/getting_started/how_to_build.html)
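
After a model is converted and built, the generated library is driven through
MACE's C++ API. The sketch below is only a rough outline of that flow; the
function names, signatures, header paths and node names shown here
(`CreateMaceEngineFromProto`, `MaceTensor`, `MaceEngine::Run`, etc.) are
assumptions that change between releases, so follow the
[How to build](https://mace.readthedocs.io/en/latest/getting_started/how_to_build.html)
guide for the authoritative, version-matched example.

```cpp
// Hedged sketch of the typical create-engine-and-run flow; all MACE names,
// argument orders and header locations below are assumptions and may differ
// between releases.
#include <cstdint>
#include <map>
#include <memory>
#include <string>
#include <vector>

#include "mace/public/mace.h"          // assumed public header location
#include "mace/public/mace_runtime.h"  // assumed public header location

int main() {
  // Model graph (protobuf bytes) and weights produced by the MACE converter;
  // loading them from disk or app assets is omitted here.
  std::vector<unsigned char> model_pb;        /* fill with converted graph */
  const std::string model_data_file = "...";  /* path to converted weights */

  // Placeholder node names and shapes; use the ones from your deployment file.
  const std::vector<std::string> input_names{"input_node"};
  const std::vector<std::string> output_names{"output_node"};
  const std::vector<int64_t> input_shape{1, 224, 224, 3};
  const std::vector<int64_t> output_shape{1, 1001};

  std::shared_ptr<mace::MaceEngine> engine;
  mace::MaceStatus status = mace::CreateMaceEngineFromProto(
      model_pb, model_data_file, input_names, output_names,
      mace::DeviceType::CPU, &engine);
  if (status != mace::MaceStatus::MACE_SUCCESS) return 1;

  // Wrap caller-owned float buffers as MaceTensor objects keyed by node name.
  auto input_data = std::shared_ptr<float>(new float[1 * 224 * 224 * 3],
                                           std::default_delete<float[]>());
  auto output_data = std::shared_ptr<float>(new float[1 * 1001],
                                            std::default_delete<float[]>());
  std::map<std::string, mace::MaceTensor> inputs{
      {input_names[0], mace::MaceTensor(input_shape, input_data)}};
  std::map<std::string, mace::MaceTensor> outputs{
      {output_names[0], mace::MaceTensor(output_shape, output_data)}};

  engine->Run(inputs, &outputs);
  return 0;
}
```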

## Performance
[MACE Model Zoo](https://github.com/XiaoMi/mace-models) contains
several common neural networks and models, which are built daily against a list of mobile
phones. The benchmark results can be found on [the CI result page](https://gitlab.com/llhe/mace-models/pipelines)
(choose the latest passed pipeline, click the *release* step, and you will see the benchmark results).

## Communication
* GitHub issues: bug reports, usage issues, feature requests
* Mailing list: [mace-users@googlegroups.com](mailto:mace-users@googlegroups.com)
* Google Groups: https://groups.google.com/forum/#!forum/mace-users
* QQ group: 756046893

## Contributing
Any kind of contribution is welcome. For bug reports and feature requests,
please just open an issue without hesitation. For code contributions, it is
strongly suggested to open an issue for discussion first. For more details,
please refer to [the contribution guide](https://mace.readthedocs.io/en/latest/development/contributing.html).

## License
[Apache License 2.0](LICENSE).

## Acknowledgement
MACE depends on several open source projects located in the
[third_party](third_party) directory. In particular, we learned a lot from
the following projects during development:
* [Qualcomm Hexagon NN Offload Framework](https://source.codeaurora.org/quic/hexagon_nn/nnlib): the Hexagon DSP runtime
  depends on this library.
* [TensorFlow](https://github.com/tensorflow/tensorflow),
  [Caffe](https://github.com/BVLC/caffe),
  [SNPE](https://developer.qualcomm.com/software/snapdragon-neural-processing-engine-ai),
  [ARM ComputeLibrary](https://github.com/ARM-software/ComputeLibrary),
  [ncnn](https://github.com/Tencent/ncnn) and many others: we learned many best
  practices from these projects.

Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams for
their help.