diff --git a/README.md b/README.md
index ce6a329b9805a6f2e933c96d9f88217aa78f540a..21d1b123abf1577f7a6be9fd59ab119d199a9247 100644
--- a/README.md
+++ b/README.md
@@ -1,15 +1,15 @@
-# MiAI Compute Engine
+# Mobile AI Compute Engine

 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
 [![build status](http://v9.git.n.xiaomi.com/deep-computing/mace/badges/master/build.svg)](http://v9.git.n.xiaomi.com/deep-computing/mace/pipelines)

 [Documentation](docs) |
 [FAQ](docs/faq.md) |
 [Release Notes](RELEASE.md) |
-[MiAI Model Zoo](http://v9.git.n.xiaomi.com/deep-computing/mace-models) |
-[Demo](mace/android) |
+[MACE Model Zoo](https://github.com/XiaoMi/mace-models) |
+[Demo](mace/examples/android) |
 [中文](README_zh.md)

-**MiAI Compute Engine** (or **MACE** for short) is a deep learning inference framework optimized for
+**Mobile AI Compute Engine** (or **MACE** for short) is a deep learning inference framework optimized for
 mobile heterogeneous computing platforms. The design is focused on the following
 targets:
 * Performance
@@ -43,7 +43,7 @@ targets:
 * [Create a model deployment file](docs/getting_started/create_a_model_deployment.rst)

 ## Performance
-[MiAI Compute Engine Model Zoo](http://v9.git.n.xiaomi.com/deep-computing/mace-models) contains
+[MACE Model Zoo](https://github.com/XiaoMi/mace-models) contains
 several common neural networks models and built daily against a list of
 mobile phones. The benchmark result can be found in the CI result page.
@@ -63,7 +63,7 @@ please refer to [the contribution guide](docs/development/contributing.md).
 [Apache License 2.0](LICENSE).

 ## Acknowledgement
-MiAI Compute Engine depends on several open source projects located in
+MACE depends on several open source projects located in
 [third_party](third_party) directory.
 Particularly, we learned a lot from the following projects during the
 development:
 * [Qualcomm Hexagon NN Offload Framework](https://source.codeaurora.org/quic/hexagon_nn/nnlib): the Hexagon DSP runtime
diff --git a/README_zh.md b/README_zh.md
index 6096cb81ad4da9373e2e0dc7248afde055571d41..42613c1f36a962c3c3c8fd001b0aa9f317592034 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -1,15 +1,15 @@
-# MiAI计算引擎
+# MACE - 移动人工智能计算引擎

 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
 [![build status](http://v9.git.n.xiaomi.com/deep-computing/mace/badges/master/build.svg)](http://v9.git.n.xiaomi.com/deep-computing/mace/pipelines)

 [文档](docs) |
 [FAQ](docs/faq.md) |
 [发布记录](RELEASE.md) |
-[MiAI Model Zoo](http://v9.git.n.xiaomi.com/deep-computing/mace-models) |
-[Demo](mace/android) |
+[MACE Model Zoo](https://github.com/XiaoMi/mace-models) |
+[Demo](mace/examples/android) |
 [English](README.md)

-**MiAI Compute Engine** 是一个专为移动端异构计算平台优化的神经网络计算框架。
+**Mobile AI Compute Engine (MACE)** 是一个专为移动端异构计算平台优化的神经网络计算框架。
 主要从以下的角度做了专门的优化:
 * 性能
   * 代码经过NEON指令,OpenCL以及Hexagon HVX专门优化,并且采用
@@ -35,7 +35,7 @@
 * [如何构建](docs/getting_started/how_to_build.rst)

 ## 性能评测
-[MiAI Model Zoo](http://v9.git.n.xiaomi.com/deep-computing/mace-models)
+[MACE Model Zoo](https://github.com/XiaoMi/mace-models)
 包含若干常用模型,并且会对一组手机进行每日构建。最新的性能评测结果可以从项目的持续集成页面获取。

 ## 交流与反馈
diff --git a/docs/conf.py b/docs/conf.py
index f5c8c019890dd2f42677177562c1bf511c7fa174..d69fded5d15632627e3621c483fce170000c8ad5 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -6,7 +6,7 @@ import recommonmark.parser

 import sphinx_rtd_theme

-project = u'MiAI Compute Engine'
+project = u'Mobile AI Compute Engine (MACE)'
 author = u'%s Developers' % project
 copyright = u'2018, %s' % author
diff --git a/docs/getting_started/how_to_build.rst b/docs/getting_started/how_to_build.rst
index c3e5772def63c022a7ababa5b026a9043a36bfe8..22b99f5a74700c1ff16d41e60ac8855784d4c582 100644
--- a/docs/getting_started/how_to_build.rst
+++ b/docs/getting_started/how_to_build.rst
@@ -19,7 +19,7 @@ Supported Platforms
 Environment Requirement
 -------------------------

-MiAI Compute Engine requires the following dependencies:
+MACE requires the following dependencies:

 .. list-table::
     :widths: auto
@@ -67,7 +67,7 @@ MiAI Compute Engine requires the following dependencies:
       ``export ANDROID_NDK_HOME=/path/to/ndk`` to specify ANDROID_NDK_HOME

-MiAI Compute Engine provides Dockerfile with these dependencies installed,
+MACE provides a Dockerfile with these dependencies installed,
 you can build the image from the Dockerfile,

 .. code:: sh
@@ -95,7 +95,7 @@ Usage
 --------

 =======================================
-1. Pull MiAI Compute Engine source code
+1. Pull MACE source code
 =======================================

 .. code:: sh
@@ -166,7 +166,7 @@ optimizations for different runtimes,

 - Caffe

-MiAI Compute Engine converter only supports Caffe 1.0+, you need to upgrade
+The MACE converter only supports Caffe 1.0+; you need to upgrade
 your models with Caffe built-in tool when necessary,

 .. code:: bash
@@ -184,7 +184,7 @@ your models with Caffe built-in tool when necessary,
 -----------------
 3.1 Overview
 -----------------
-MiAI Compute Engine can build either static or shared library (which is
+MACE can build either a static or shared library (which is
 specified by ``linkshared`` in YAML model deployment file).
 The followings are two use cases.
@@ -208,7 +208,7 @@ The followings are two use cases.
       There will be around of 1 ~ 10% performance drop for GPU runtime
       compared to the well tuned library.

-MiAI Compute Engine provide command line tool (``tools/converter.py``) for
+MACE provides a command line tool (``tools/converter.py``) for
 model conversion, compiling, test run, benchmark and correctness validation.

 .. note::
diff --git a/docs/getting_started/introduction.rst b/docs/getting_started/introduction.rst
index ad114fbfa6eecbb26cb11e9eb6891557d838031e..874eabd5da0cdd1ad2148d486e8f871e81fc9767 100644
--- a/docs/getting_started/introduction.rst
+++ b/docs/getting_started/introduction.rst
@@ -1,7 +1,7 @@
 Introduction
 ============

-MiAI Compute Engine is a deep learning inference framework optimized for
+Mobile AI Compute Engine (MACE) is a deep learning inference framework optimized for
 mobile heterogeneous computing platforms. The following figure shows the
 overall architecture.

@@ -12,8 +12,8 @@ overall architecture.
 Model format
 ------------

-MiAI Compute Engine defines a customized model format which is similar to
-Caffe2. The MiAI model can be converted from exported models by TensorFlow
+MACE defines a customized model format which is similar to
+Caffe2. The MACE model can be converted from models exported by TensorFlow
 and Caffe. A YAML file is used to describe the model deployment details. In
 the next chapter, there is a detailed guide showing how to create this YAML
 file.
@@ -26,7 +26,7 @@ more frameworks will be supported in the future.
 Model loading
 -------------

-The MiAI model format contains two parts: the model graph definition and
+The MACE model format contains two parts: the model graph definition and
 the model parameter tensors. The graph part utilizes Protocol Buffers
 for serialization. All the model parameter tensors are concatenated
 together into a continuous byte array, and we call this array tensor data in
diff --git a/docs/index.rst b/docs/index.rst
index a42b1655448dad6c622882f7359918b851a41576..5dd297637b2375df6087d5d51f8cc609111fa622 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -1,6 +1,6 @@
-MiAI Compute Engine Documentation
-=================================
-Welcome to MiAI Compute Engine documentation.
+Mobile AI Compute Engine Documentation
+======================================
+Welcome to the Mobile AI Compute Engine documentation.
 The main documentation is organized into the following sections:
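
The introduction.rst hunk in the patch describes the MACE model format: a Protocol Buffers graph definition plus all parameter tensors concatenated into one continuous byte array ("tensor data"). A minimal Python sketch of that layout idea follows; the function names and the (offset, count) index are illustrative assumptions, not MACE's actual serialization code.

```python
import struct

def pack_tensors(tensors):
    """Concatenate named float32 tensors into one continuous byte array.

    Returns the byte array plus a hypothetical index mapping each tensor
    name to its (byte offset, element count) — in MACE this addressing
    information lives in the graph definition, not in a separate dict.
    """
    data = bytearray()
    index = {}
    for name, values in tensors.items():
        index[name] = (len(data), len(values))          # record offset/count
        data.extend(struct.pack("<%df" % len(values), *values))  # little-endian f32
    return bytes(data), index

def read_tensor(data, index, name):
    """Slice one tensor back out of the continuous byte array."""
    offset, count = index[name]
    return list(struct.unpack_from("<%df" % count, data, offset))

data, index = pack_tensors({"conv1/w": [1.0, 2.0], "conv1/b": [0.5]})
print(read_tensor(data, index, "conv1/w"))  # round-trips the first tensor
```

Keeping all tensors in one contiguous buffer means the whole parameter blob can be loaded (or memory-mapped) in a single operation, with individual tensors addressed by offset rather than parsed separately.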