<div align="center">
<img src="docs/mace-logo.png" width="400" alt="MACE" />
</div>


[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![pipeline status](https://gitlab.com/llhe/mace/badges/master/pipeline.svg)](https://gitlab.com/llhe/mace/pipelines)
[![doc build status](https://readthedocs.org/projects/mace/badge/?version=latest)](https://mace.readthedocs.io/en/latest/)
[![Build Status](https://travis-ci.org/travis-ci/travis-web.svg?branch=master)](https://travis-ci.org/travis-ci/travis-web)

[Documentation](https://mace.readthedocs.io) |
[FAQ](https://mace.readthedocs.io/en/latest/faq.html) |
[Release Notes](RELEASE.md) |
[Roadmap](ROADMAP.md) |
[MACE Model Zoo](https://github.com/XiaoMi/mace-models) |
[Demo](mace/examples/android) |
[中文](README_zh.md)

**Mobile AI Compute Engine** (or **MACE** for short) is a deep learning inference framework optimized for
mobile heterogeneous computing platforms. The design focuses on the following
targets:
* Performance
  * The runtime is optimized with NEON, OpenCL and Hexagon, and the
    [Winograd algorithm](https://arxiv.org/abs/1509.09308) is introduced to
    speed up convolution operations. Initialization is also optimized to be faster.
* Power consumption
  * Chip-dependent power options such as big.LITTLE scheduling and Adreno GPU
    hints are exposed as advanced APIs.
* Responsiveness
  * Guaranteeing UI responsiveness is sometimes obligatory when running a model.
    Mechanisms such as automatically breaking OpenCL kernels into small units are
    introduced to allow better preemption for the UI rendering task.
* Memory usage and library footprint
  * Graph-level memory allocation optimization and buffer reuse are supported
    (see the sketch after this list). The core library keeps external
    dependencies to a minimum so that the library footprint stays small.
* Model protection
  * Model protection has been the highest priority since the beginning of
    the design. Various techniques are introduced, such as converting models
    to C++ code and literal obfuscation.
* Platform coverage
  * Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based
    chips. The CPU runtime is also compatible with most POSIX systems and
    architectures, albeit with limited performance.
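
To make the memory point above a bit more concrete, here is a minimal, self-contained C++ sketch of the general idea behind graph-level buffer reuse: intermediate tensors whose lifetimes do not overlap can share one physical buffer. The types and the greedy strategy below are illustrative assumptions for this README, not MACE's actual memory optimizer.

```cpp
// Illustrative greedy buffer-reuse planner (NOT MACE's actual allocator):
// intermediate tensors whose lifetimes do not overlap share one physical buffer.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

struct TensorLife {
  std::string name;
  int first_use;  // index of the op that produces this tensor
  int last_use;   // index of the last op that reads it
  size_t bytes;   // size the tensor needs
};

struct Buffer {
  size_t bytes = 0;     // grows to the largest tensor ever assigned to it
  int free_after = -1;  // op index after which the buffer can be reused
};

// Tensors must be ordered by first_use. Each tensor is assigned to the first
// buffer that is already free; otherwise a new buffer is created.
std::vector<int> PlanBuffers(const std::vector<TensorLife> &tensors,
                             std::vector<Buffer> *buffers) {
  std::vector<int> assignment(tensors.size(), -1);
  for (size_t i = 0; i < tensors.size(); ++i) {
    const TensorLife &t = tensors[i];
    int chosen = -1;
    for (size_t b = 0; b < buffers->size(); ++b) {
      if ((*buffers)[b].free_after < t.first_use) {
        chosen = static_cast<int>(b);
        break;
      }
    }
    if (chosen < 0) {
      buffers->push_back(Buffer());
      chosen = static_cast<int>(buffers->size()) - 1;
    }
    Buffer &buf = (*buffers)[chosen];
    buf.bytes = std::max(buf.bytes, t.bytes);
    buf.free_after = t.last_use;
    assignment[i] = chosen;
  }
  return assignment;
}

int main() {
  // Intermediate tensors of a toy conv -> relu -> conv chain (4 ops total).
  std::vector<TensorLife> tensors = {
      {"conv1_out", 0, 1, 1u << 20},
      {"relu1_out", 1, 2, 1u << 20},
      {"conv2_out", 2, 3, 2u << 20},
  };
  std::vector<Buffer> buffers;
  std::vector<int> assignment = PlanBuffers(tensors, &buffers);
  for (size_t i = 0; i < tensors.size(); ++i) {
    std::printf("%s -> buffer %d\n", tensors[i].name.c_str(), assignment[i]);
  }
  std::printf("buffers needed: %zu\n", buffers.size());  // 2 instead of 3
  return 0;
}
```

Real planners typically also handle alignment, GPU image versus buffer memory, and in-place operators, but the lifetime-overlap test above is the core of the optimization.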

## Getting Started
* [Introduction](https://mace.readthedocs.io/en/latest/getting_started/introduction.html)
* [Create a model deployment file](https://mace.readthedocs.io/en/latest/getting_started/create_a_model_deployment.html)
* [How to build](https://mace.readthedocs.io/en/latest/getting_started/how_to_build.html)

## Performance
[MACE Model Zoo](https://github.com/XiaoMi/mace-models) contains
several common neural networks and models which are built daily against a list of mobile
phones. The benchmark results can be found on [the CI result page](https://gitlab.com/llhe/mace-models/pipelines)
(choose the latest passed pipeline and click the *release* step to see the benchmark results).

## Communication
* GitHub issues: bug reports, usage issues, feature requests
* Slack: [mace-users.slack.com](https://join.slack.com/t/mace-users/shared_invite/enQtMzkzNjM3MzMxODYwLTAyZTAzMzQyNjc0ZGI5YjU3MjI1N2Q2OWI1ODgwZjAwOWVlNzFlMjFmMTgwYzhjNzU4MDMwZWQ1MjhiM2Y4OTE)
* QQ group: 756046893

## Contributing
Any kind of contribution is welcome. For bug reports and feature requests,
please just open an issue without hesitation. For code contributions, it is
strongly suggested to open an issue for discussion first. For more details,
please refer to [the contribution guide](https://mace.readthedocs.io/en/latest/development/contributing.html).

## License
[Apache License 2.0](LICENSE).

## Acknowledgement
MACE depends on several open source projects located in the
[third_party](third_party) directory. In particular, we learned a lot from
the following projects during development:
* [Qualcomm Hexagon NN Offload Framework](https://source.codeaurora.org/quic/hexagon_nn/nnlib): the Hexagon DSP runtime
  depends on this library.
* [TensorFlow](https://github.com/tensorflow/tensorflow),
  [Caffe](https://github.com/BVLC/caffe),
  [SNPE](https://developer.qualcomm.com/software/snapdragon-neural-processing-engine-ai),
  [ARM ComputeLibrary](https://github.com/ARM-software/ComputeLibrary),
  [ncnn](https://github.com/Tencent/ncnn) and many others: we learned many best
  practices from these projects.

Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams for
their help.