Commit bf462779 authored by liuqi

Fix document typo.

Parent ebf7a2e1
How to run tests
================

To run tests, you need to first cross compile the code, push the binary
to the device and then execute the binary. To automate this process,
MACE provides the `tools/bazel_adb_run.py` tool.
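
As a hedged example of that workflow (the flags below are assumptions based on
typical usage of the tool, not taken from this page):

.. code:: sh

    # Cross compile the ops test, push it to a connected device and run it.
    # --target names the bazel target; --run_target pushes and executes the binary.
    python tools/bazel_adb_run.py --target="//mace/ops:ops_test" --run_target=True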
......
@@ -17,7 +17,7 @@ Necessary Dependencies:

     - `bazel installation guide <https://docs.bazel.build/versions/master/install.html>`__
   * - android-ndk
     - r15c/r16b
     - `NDK installation guide <https://developer.android.com/ndk/guides/setup#install>`__
   * - adb
     - >= 1.0.32
     - apt-get install android-tools-adb
@@ -42,9 +42,6 @@ Necessary Dependencies:

   * - filelock
     - >= 3.0.0
     - pip install -I filelock==3.0.0
.. note::
@@ -62,3 +59,6 @@ Optional Dependencies:

   * - tensorflow
     - >= 1.6.0
     - pip install -I tensorflow==1.6.0 (if you use tensorflow model)
   * - docker (for caffe)
     - >= 17.09.0-ce
     - `docker installation guide <https://docs.docker.com/install/linux/docker-ce/ubuntu/#set-up-the-repository>`__
@@ -65,4 +65,4 @@ Install Optional Dependencies

.. code:: sh

    pip install -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com tensorflow==1.6.0
@@ -3,7 +3,7 @@ Introduction

Mobile AI Compute Engine (MACE) is a deep learning inference framework optimized for
mobile heterogeneous computing platforms. MACE covers common mobile computing devices (CPU, GPU and DSP),
and provides tools and documentation to help users deploy neural network models to mobile devices.
MACE has been widely used in Xiaomi and has proven industry-leading performance and stability.
Framework

@@ -26,7 +26,7 @@ and Caffe.

MACE Interpreter
================

Mace Interpreter mainly parses the NN graph and manages the tensors in the graph.
=======
Runtime
......
==============
Advanced usage
==============

This part contains the full usage of MACE.
@@ -41,7 +42,7 @@ in one deployment file.

     - Library name.
   * - target_abis
     - The target ABI(s) to build, could be 'host', 'armeabi-v7a' or 'arm64-v8a'.
       If more than one ABI is used, separate them with commas.
   * - target_socs
     - [optional] Build for specific SoCs.
   * - model_graph_format
@@ -49,12 +50,12 @@ in one deployment file.

   * - model_data_format
     - Model data format, either 'file' or 'code': 'file' converts the model weights to a data file (.data), while 'code' converts them to C++ code.
   * - model_name
     - Model name, which should be unique if there is more than one model.
       **LIMIT: if build_type is code, model_name will be used in C++ code, so it must be a valid C++ identifier.**
   * - platform
     - The source framework, tensorflow or caffe.
   * - model_file_path
     - The path of your model file; can be a local path or a remote URL.
   * - model_sha256_checksum
     - The SHA256 checksum of the model file.
   * - weight_file_path
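
For orientation, here is a minimal sketch of a deployment file wired from the
fields above; every name and value is a placeholder of mine, not taken from this diff:

.. code:: sh

    # Hypothetical deployment file; all names and values are placeholders.
    library_name: my_lib
    target_abis: [armeabi-v7a, arm64-v8a]
    model_graph_format: code
    model_data_format: file
    models:
      my_model:        # must be a valid C++ identifier when built as code
        platform: tensorflow
        model_file_path: https://example.com/my_model.pb
        model_sha256_checksum: <sha256 of the model file>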
@@ -108,7 +109,7 @@ in one deployment file.

Advanced Usage
==============

There are two common advanced use cases: 1. converting a model to C++ code; 2. tuning for a specific SoC when using the GPU.

* **Convert model(s) to C++ code**
@@ -120,14 +121,14 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning

If you want to protect your model, you can convert the model to C++ code. There are two cases:

* Convert the model graph to code and the model weights to a file, with the model configuration below.

  .. code:: sh

      model_graph_format: code
      model_data_format: file
* Convert both the model graph and the model weights to code, with the model configuration below.

  .. code:: sh
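
      # Per the case above, both graph and weights are embedded in code
      # (a sketch; this diff collapses the original snippet).
      model_graph_format: code
      model_data_format: code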
@@ -145,7 +146,7 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning

    python tools/converter.py convert --config=/path/to/model_deployment_file.yml

The command will generate **${library_name}.a** in the **builds/${library_name}/model** directory and
**\*.h** headers in **builds/${library_name}/include**, like the directory tree below.

.. code::
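
    # An illustrative sketch of the generated layout (the actual tree is
    # collapsed in this diff); only the paths named above are shown.
    builds
    └── ${library_name}
        ├── include
        │   └── *.h
        └── model
            └── ${library_name}.a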
@@ -230,7 +231,7 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning

    python tools/converter.py run --config=/path/to/model_deployment_file.yml --validate

The command will generate two files in `builds/${library_name}/opencl`, like below.

.. code::
@@ -252,8 +253,8 @@ There are two common advanced use cases: 1. convert model to CPP code. 2. tuning

  for the SOC.

* **4. Deployment**

  * Rename the files generated above to avoid name collisions, and push them to **your own device's directory**.
  * Usage is similar to the previous procedure; the key differing steps are listed below.

  .. code:: cpp
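
      // Hedged sketch of the differing steps; these API names are an
      // assumption based on MACE runtime headers of this era, not taken
      // from this page (the diff collapses the real snippet).
      // Point the engine at the OpenCL binary and tuned parameters
      // pushed to the device.
      mace::SetOpenCLBinaryPaths({"/path/to/compiled_opencl_kernel.bin"});
      mace::SetOpenCLParameterPath("/path/to/tuned_opencl_parameter.bin");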
......
@@ -185,7 +185,7 @@ The above command will generate dynamic library ``builds/lib/${ABI}/libmace.so``

.. warning::

    Please verify that the target_abis param in the above command and your deployment file are the same.

==================
......