# MiAI Compute Engine
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![build status](http://v9.git.n.xiaomi.com/deep-computing/mace/badges/master/build.svg)](http://v9.git.n.xiaomi.com/deep-computing/mace/pipelines)

[Documentation](docs) |
[FAQ](docs/faq.md) |
[Release Notes](RELEASE.md) |
[MiAI Model Zoo](http://v9.git.n.xiaomi.com/deep-computing/mace-models) |
[Demo](mace/android)

**MiAI Compute Engine** is a deep learning inference framework optimized for
mobile heterogeneous computing platforms. The design is focused on the following
targets:
* Performance
  * The runtime is highly optimized with NEON, OpenCL and Hexagon, and the
    [Winograd algorithm](https://arxiv.org/abs/1509.09308) is used to
    speed up the convolution operations (a sketch follows this list). Besides
    inference speed, initialization speed is also intensively optimized.
* Power consumption
  * Chip-dependent power options like big.LITTLE scheduling and Adreno GPU
    hints are included as advanced APIs.
* Memory usage and library footprint
  * Graph level memory allocation optimization and buffer reuse are supported.
    The core library keeps external dependencies to a minimum to keep the
    library footprint small.
* Model protection
  * Model protection has been one of the highest priority features from the
    beginning of the design. Various techniques are introduced, such as
    converting models to C++ code and literal obfuscation (see the second
    sketch after this list).
* Platform coverage
  * Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based
    chips. The CPU runtime is also compatible with most POSIX systems and
    architectures, with limited performance.
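
To make the Winograd claim above concrete, here is a minimal, self-contained
sketch of the 1-D Winograd F(2,3) transform from the paper linked in the
Performance bullet: it computes two outputs of a 3-tap convolution with 4
multiplications instead of the 6 a direct computation needs. This is an
illustration of the algorithm only, not MACE's actual NEON/OpenCL kernel,
which uses the tiled 2-D variant.

```cpp
#include <array>
#include <cstdio>

// Winograd F(2,3): two outputs of a 3-tap convolution using
// 4 multiplications (m1..m4) instead of 6. Illustrative only.
std::array<float, 2> WinogradF23(const std::array<float, 4>& d,
                                 const std::array<float, 3>& g) {
  const float m1 = (d[0] - d[2]) * g[0];
  const float m2 = (d[1] + d[2]) * 0.5f * (g[0] + g[1] + g[2]);
  const float m3 = (d[2] - d[1]) * 0.5f * (g[0] - g[1] + g[2]);
  const float m4 = (d[1] - d[3]) * g[2];
  return {m1 + m2 + m3, m2 - m3 - m4};
}

int main() {
  const std::array<float, 4> d{1.f, 2.f, 3.f, 4.f};
  const std::array<float, 3> g{0.5f, -1.f, 2.f};
  const auto y = WinogradF23(d, g);
  // Direct 3-tap convolution for comparison (6 multiplications).
  const float y0 = d[0] * g[0] + d[1] * g[1] + d[2] * g[2];
  const float y1 = d[1] * g[0] + d[2] * g[1] + d[3] * g[2];
  std::printf("winograd: %g %g  direct: %g %g\n", y[0], y[1], y0, y1);
  return 0;
}
```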
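
Similarly, here is a hypothetical sketch of the model protection idea:
"converting models to C++ code with literal obfuscation" means the weights
can be baked into the binary as a masked byte array instead of shipping a
loadable model file. The names, data and XOR mask below are invented for
illustration and are not the output of MACE's actual code generator.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical generated source: weights embedded as an XOR-masked
// array so they do not appear verbatim in the shipped binary.
// Data, key and names are illustrative, not MACE's generator output.
namespace generated_model {
const uint8_t kMaskedWeights[] = {0xA7, 0x12, 0x5C, 0xEE, 0x39, 0x80};
const uint8_t kMask = 0x5A;  // illustrative obfuscation key

std::vector<uint8_t> UnmaskWeights() {
  std::vector<uint8_t> weights(sizeof(kMaskedWeights));
  for (size_t i = 0; i < sizeof(kMaskedWeights); ++i) {
    weights[i] = kMaskedWeights[i] ^ kMask;  // recover original bytes
  }
  return weights;
}
}  // namespace generated_model
```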

## Getting Started
* [Introduction](docs/getting_started/introduction.rst)
* [How to build](docs/getting_started/how_to_build.rst)
* [Create a model deployment file](docs/getting_started/create_a_model_deployment.rst)

## Performance
[MiAI Compute Engine Model Zoo](http://v9.git.n.xiaomi.com/deep-computing/mace-models) contains
several common neural network models and is built daily against a list of mobile
phones. The benchmark results can be found on the CI result page.

## Communication
* GitHub issues: bug reports, usage issues, feature requests
* Gitter:
* QQ group: 756046893

## Contributing
Any kind of contribution is welcome. For bug reports and feature requests,
please just open an issue without hesitation. For code contributions, it is
strongly suggested to open an issue for discussion first. For more details,
please refer to [the contribution guide](docs/development/contributing.md).

## License
[Apache License 2.0](LICENSE).

## Acknowledgement
MiAI Compute Engine depends on several open source projects located in the
[third_party](mace/third_party) directory. In particular, we learned a lot from
the following projects during development:
* [Qualcomm Hexagon NN Offload Framework](https://source.codeaurora.org/quic/hexagon_nn/nnlib): the Hexagon DSP runtime
  depends on this library.
* [TensorFlow](https://github.com/tensorflow/tensorflow),
  [Caffe](https://github.com/BVLC/caffe),
  [SNPE](https://developer.qualcomm.com/software/snapdragon-neural-processing-engine-ai),
  [ARM ComputeLibrary](https://github.com/ARM-software/ComputeLibrary),
  [ncnn](https://github.com/Tencent/ncnn) and many others: we learned many best
  practices from these projects.

Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams
for their help.