# MiAI Compute Engine
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![build status](http://v9.git.n.xiaomi.com/deep-computing/mace/badges/master/build.svg)](http://v9.git.n.xiaomi.com/deep-computing/mace/pipelines)

[Documentation](docs) |
[FAQ](docs/faq.md) |
[Release Notes](RELEASE.md) |
[MiAI Model Zoo](http://v9.git.n.xiaomi.com/deep-computing/mace-models) |
[Demo](mace/android) |
[中文](README_zh.md)

**MiAI Compute Engine** is a deep learning inference framework optimized for
mobile heterogeneous computing platforms. The design focuses on the following
targets:
* Performance
  * The runtime is highly optimized with NEON, OpenCL and Hexagon, and the
    [Winograd algorithm](https://arxiv.org/abs/1509.09308) is used to speed up
    convolution operations (see the Winograd sketch after this list). Besides
    inference speed, initialization speed is also heavily optimized.
* Power consumption
  * Chip-dependent power options such as big.LITTLE scheduling and Adreno GPU
    hints are exposed as an advanced API (see the thread-affinity sketch after
    this list).
* Memory usage and library footprint
  * Graph-level memory allocation optimization and buffer reuse are supported
    (see the buffer-reuse sketch after this list). The core library keeps
    external dependencies to a minimum so that the library footprint stays
    small.
* Model protection
  * Model protection has been one of the highest-priority features since the
    beginning of the design. Various techniques are employed, such as
    converting models to C++ code and literal obfuscation (see the obfuscation
    sketch after this list).
* Platform coverage
  * Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based
    chips. The CPU runtime is also compatible with most POSIX systems and
    architectures, with limited performance.
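
As an illustration of the Winograd speedup mentioned under *Performance*, here
is the classic F(2,3) case from the linked paper (Lavin & Gray): two outputs of
a 3-tap 1D convolution computed with 4 multiplications instead of 6. In 2D, the
analogous F(2×2, 3×3) needs 16 multiplications per tile instead of 36, a 2.25×
reduction for 3×3 convolutions.

```latex
% Winograd minimal filtering F(2,3): inputs d_0..d_3, filter g_0..g_2.
% Four multiplications m_1..m_4 replace the six of the direct method.
m_1 = (d_0 - d_2)\,g_0, \quad
m_2 = (d_1 + d_2)\,\tfrac{g_0 + g_1 + g_2}{2}, \quad
m_3 = (d_2 - d_1)\,\tfrac{g_0 - g_1 + g_2}{2}, \quad
m_4 = (d_1 - d_3)\,g_2
% Recover y_0 = d_0 g_0 + d_1 g_1 + d_2 g_2 and y_1 = d_1 g_0 + d_2 g_1 + d_3 g_2:
y_0 = m_1 + m_2 + m_3, \qquad y_1 = m_2 - m_3 - m_4
```

The filter-dependent halves can be precomputed once per model load, which is
why the transform pays off at inference time.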
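The big.LITTLE scheduling option under *Power consumption* ultimately comes
down to pinning compute threads to a chosen CPU cluster. Below is a minimal
sketch of that mechanism using plain Linux APIs, not the engine's own API; the
core IDs are hypothetical and differ per SoC.

```cpp
// Minimal sketch: pin the calling thread to a set of "big" cores on Linux.
// Core IDs are hypothetical; real code must discover the big cluster at
// runtime (e.g. by comparing per-core max cpufreq values in sysfs).
#include <sched.h>
#include <cstdio>
#include <vector>

bool PinToCores(const std::vector<int>& core_ids) {
  cpu_set_t mask;
  CPU_ZERO(&mask);
  for (int id : core_ids) CPU_SET(id, &mask);
  // pid 0 applies the mask to the calling thread.
  return sched_setaffinity(0, sizeof(mask), &mask) == 0;
}

int main() {
  if (!PinToCores({4, 5, 6, 7})) {  // assume cores 4-7 are the big cluster
    std::perror("sched_setaffinity");
    return 1;
  }
  // ... run latency-critical inference on the big cores ...
  return 0;
}
```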
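The graph-level memory optimization under *Memory usage* is essentially
lifetime-based buffer sharing: once the last consumer of an activation tensor
has executed, its buffer can be handed to a tensor produced later. A toy greedy
version of the idea is sketched below; the names and the first-fit policy are
illustrative, not the engine's actual allocator.

```cpp
// Toy greedy allocator: tensors declare [first_use, last_use) op intervals;
// two tensors may share a buffer if their intervals do not overlap.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Tensor { int first_use, last_use; size_t bytes; int buffer = -1; };

// Assigns each tensor a buffer id, reusing buffers whose previous tenants
// are dead by the time the new tensor is first written. Returns peak bytes.
size_t AssignBuffers(std::vector<Tensor>& tensors) {
  std::vector<int> buffer_free_at;   // op index at which each buffer frees up
  std::vector<size_t> buffer_size;   // high-water size of each buffer
  for (auto& t : tensors) {
    for (size_t b = 0; b < buffer_free_at.size(); ++b) {
      if (buffer_free_at[b] <= t.first_use) {        // first fit
        t.buffer = static_cast<int>(b);
        buffer_free_at[b] = t.last_use;
        if (t.bytes > buffer_size[b]) buffer_size[b] = t.bytes;
        break;
      }
    }
    if (t.buffer < 0) {                              // no reusable buffer
      t.buffer = static_cast<int>(buffer_free_at.size());
      buffer_free_at.push_back(t.last_use);
      buffer_size.push_back(t.bytes);
    }
  }
  size_t total = 0;
  for (size_t s : buffer_size) total += s;
  return total;
}

int main() {
  std::vector<Tensor> ts = {{0, 2, 1024}, {1, 3, 2048}, {2, 4, 512}};
  std::printf("peak bytes: %zu\n", AssignBuffers(ts));  // 3072: t2 reuses t0
}
```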
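For the model-protection techniques listed above, converting a model to C++
code means the graph and weights are compiled into the application binary
rather than shipped as a loose file, and literal obfuscation means those
embedded byte arrays and names are not stored in plain form. The sketch below
shows the general shape such generated code could take; the key, names, and
scheme are hypothetical, not the engine's actual output.

```cpp
// Hypothetical sketch of literal obfuscation in generated model code:
// weight bytes are stored XOR-ed with a key and decoded once at load time.
#include <cstddef>
#include <cstdint>
#include <vector>

namespace generated {  // what a model-to-C++ converter might emit

constexpr uint8_t kKey = 0x5A;  // made-up key, baked in at generation time

// "conv1" filter bytes, XOR-ed with kKey by the generator (truncated).
constexpr uint8_t kConv1WeightsObf[] = {0x17, 0xA3, 0x4E, 0xD8 /* ... */};

inline std::vector<uint8_t> Decode(const uint8_t* data, size_t n) {
  std::vector<uint8_t> out(n);
  for (size_t i = 0; i < n; ++i) out[i] = data[i] ^ kKey;
  return out;
}

}  // namespace generated
```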

## Getting Started
* [Introduction](docs/getting_started/introduction.rst)
* [How to build](docs/getting_started/how_to_build.rst)
* [Create a model deployment file](docs/getting_started/create_a_model_deployment.rst)

## Performance
[MiAI Compute Engine Model Zoo](http://v9.git.n.xiaomi.com/deep-computing/mace-models) contains
several common neural network models, which are built daily against a list of
mobile phones. The benchmark results can be found on the CI result page.

## Communication
* GitHub issues: bug reports, usage issues, feature requests
* Gitter:
* QQ group: 756046893

## Contributing
Any kind of contribution is welcome. For bug reports and feature requests,
please just open an issue without hesitation. For code contributions, it is
strongly suggested to open an issue for discussion first. For more details,
please refer to [the contribution guide](docs/development/contributing.md).

## License
[Apache License 2.0](LICENSE).

## Acknowledgement
MiAI Compute Engine depends on several open source projects located in the
[third_party](mace/third_party) directory. In particular, we learned a lot from
the following projects during development:
* [Qualcomm Hexagon NN Offload Framework](https://source.codeaurora.org/quic/hexagon_nn/nnlib): the Hexagon DSP runtime
  depends on this library.
* [TensorFlow](https://github.com/tensorflow/tensorflow),
  [Caffe](https://github.com/BVLC/caffe),
  [SNPE](https://developer.qualcomm.com/software/snapdragon-neural-processing-engine-ai),
  [ARM ComputeLibrary](https://github.com/ARM-software/ComputeLibrary),
  [ncnn](https://github.com/Tencent/ncnn) and many others: we learned many best
  practices from these projects.

Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams
for their help.