Commit 0bbb37f6 authored by liuqi

Fix some typos and grammar errors in documents.

Parent bf462779
......@@ -5,8 +5,7 @@ To run tests, you need to first cross compile the code, push the binary
into the device and then execute the binary. To automate this process,
MACE provides the `tools/bazel_adb_run.py` tool.
You need to make sure your device is connected to you dev machine before
running tests.
You need to make sure your device is connected to your dev PC before running tests.
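
For example, running the ops unit tests on a connected device looks roughly like the following (a sketch only; the test target name is just an example, and the exact flags of ``tools/bazel_adb_run.py`` may differ between versions, so check ``python tools/bazel_adb_run.py --help`` first):

.. code:: sh

    # Cross compile the test target, push it to the device and run it
    python tools/bazel_adb_run.py --target="//mace/ops:ops_test" \
                                  --run_target=True
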
Run unit tests
---------------
......
......@@ -2,7 +2,7 @@ Introduction
============
Mobile AI Compute Engine (MACE) is a deep learning inference framework optimized for
mobile heterogeneous computing platforms. MACE cover common mobile computing devices(CPU, GPU and DSP),
mobile heterogeneous computing platforms. MACE covers common mobile computing devices (CPU, GPU and DSP),
and supplies tools and documents to help users deploy neural network models to mobile devices.
MACE has been widely used in Xiaomi and has proven industry-leading performance and stability.
......@@ -34,8 +34,8 @@ Runtime
The CPU/GPU/DSP runtimes correspond to the Ops for the different devices.
Work Flow
---------
Workflow
--------
The following figure shows the basic workflow of MACE.
.. image:: mace-work-flow.png
......@@ -56,7 +56,7 @@ Build MACE dynamic or static libraries.
==================
3. Convert model
==================
Convert Tensorflow or Caffe model to MACE model.
Convert TensorFlow or Caffe model to MACE model.
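
For example, the conversion is driven by ``tools/converter.py`` (the deployment file path below is only a placeholder; see the usage documentation for details):

.. code:: sh

    # Convert the model(s) described in a deployment file to the MACE format
    python tools/converter.py convert --config=/path/to/model_deployment_file.yml
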
===========
4.1. Deploy
......@@ -132,7 +132,7 @@ CPU/GPU/DSP Runtime corresponds to the operator implementations for each computing device.
==================
3. Convert model
==================
Convert a Tensorflow or Caffe model to a MACE model.
Convert a TensorFlow or Caffe model to a MACE model.
==================================
4.1. Deploy
......
......@@ -89,7 +89,7 @@ in one deployment file.
* - obfuscate
- [optional] Whether to obfuscate the model operator names, defaults to 0.
* - winograd
- [optional] Whether to enable Winograd convolution, **will increase memory consumption**.
- [optional] Which Winograd type to use, could be [0, 2, 4]. 0 disables Winograd, 2 and 4 enable it; 4 may be faster than 2 but may consume more memory.
.. note::
......@@ -117,7 +117,7 @@ There are two common advanced use cases: 1. convert a model to CPP code. 2. tuni
If you want to use this case, you can just use the static MACE library.
* **1. Change the model configuration file(.yml)**
* **1. Change the model deployment file(.yml)**
If you want to protect your model, you can convert the model to CPP code. There are also two cases:
......@@ -137,7 +137,7 @@ There are two common advanced use cases: 1. convert a model to CPP code. 2. tuni
.. note::
Another model protection method is using ``obfuscate`` to obfuscate the model operator name.
Another model protection method is using ``obfuscate`` to obfuscate the names of the model's operators.
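
As a rough illustration (the values below are placeholders, not recommendations), the ``obfuscate`` and ``winograd`` options appear in a model deployment file like this:

.. code:: yaml

    # Fragment of a model deployment file (.yml); other required fields are omitted
    obfuscate: 1   # obfuscate the model operator names
    winograd: 4    # 0: disable Winograd; 2 or 4: enable it (4 may be faster but use more memory)
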
* **2. Convert model(s) to code**
......@@ -146,19 +146,22 @@ There are two common advanced use cases: 1. convert a model to CPP code. 2. tuni
python tools/converter.py convert --config=/path/to/model_deployment_file.yml
The command will generate **${library_name}.a** in the **builds/${library_name}/model** directory and
** *.h ** in **builds/${library_name}/include** like below dir-tree.
** *.h ** in **builds/${library_name}/include** like the following dir-tree.
.. code::
builds
├── include
│   └── mace
│   └── public
│   ├── mace_engine_factory.h
│   └── mobilenet_v1.h
└── model
   ├── mobilenet-v1.a
   └── mobilenet_v1.data
# model_graph_format: code
# model_data_format: file
builds
├── include
│   └── mace
│   └── public
│   ├── mace_engine_factory.h
│   └── mobilenet_v1.h
└── model
   ├── mobilenet-v1.a
   └── mobilenet_v1.data
* **3. Deployment**
......@@ -196,12 +199,12 @@ There are two common advanced use cases: 1. convert a model to CPP code. 2. tuni
* **Tuning for a specific SOC's GPU**
If you want to use GPU of a specific device, you could specify ``target_socs`` and
tuning for the specific SOC. It may get 1~10% performance improvement.
If you want to use the GPU of a specific device, you can just specify the ``target_socs`` in your YAML file and
then tune the MACE lib for it, which may yield a 1~10% performance improvement.
* **1. Change the model configuration file(.yml)**
* **1. Change the model deployment file(.yml)**
Specific ``target_socs`` in your model configuration file(.yml):
Specify ``target_socs`` in your model deployment file(.yml):
.. code:: sh
......@@ -231,7 +234,7 @@ There are two common advanced use cases: 1. convert a model to CPP code. 2. tuni
python tools/converter.py run --config=/path/to/model_deployment_file.yml --validate
The command will generate two files in `builds/${library_name}/opencl`, like below.
The command will generate two files in `builds/${library_name}/opencl`, like the following dir-tree.
.. code::
......@@ -285,7 +288,7 @@ Useful Commands
# original model and framework, measured with cosine distance for similarity.
python tools/converter.py run --config=/path/to/model_deployment_file.yml --validate
# Check the memory usage of the model(**Just keep only one model in configuration file**)
# Check the memory usage of the model (**keep only one model in the deployment file**)
python tools/converter.py run --config=/path/to/model_deployment_file.yml --round=10000 &
sleep 5
adb shell dumpsys meminfo | grep mace_run
......@@ -294,9 +297,9 @@ Useful Commands
.. warning::
``run`` rely on ``convert`` command, you should ``run`` after ``convert``.
``run`` relies on the ``convert`` command, so you should ``convert`` before you ``run``.
* **benchmark and profiling model**
* **benchmark and profile model**
.. code:: sh
......
......@@ -63,6 +63,9 @@ Here we use the mobilenet-v2 model as an example.
.. code:: sh
# Run example
python tools/converter.py run --config=/path/to/mace-models/mobilenet-v2/mobilenet-v2.yml --example
# Test model run time
python tools/converter.py run --config=/path/to/mace-models/mobilenet-v2/mobilenet-v2.yml --round=100
......@@ -80,11 +83,11 @@ This part will show you how to use your pre-trained model in MACE.
1. Prepare your model
======================
Mace now supports models from Tensorflow and Caffe (more frameworks will be supported).
MACE now supports models from TensorFlow and Caffe (more frameworks will be supported).
- TensorFlow
Prepare your pre-trained Tensorflow model.pb file.
Prepare your pre-trained TensorFlow model.pb file.
Use `Graph Transform Tool <https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md>`__
to optimize your model for inference.
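
A typical invocation looks roughly like the following (a sketch only: the graph file names and the input/output node names are placeholders, and the list of transforms should be adapted to your own model):

.. code:: sh

    # Build and run TensorFlow's Graph Transform Tool on the frozen graph
    bazel build tensorflow/tools/graph_transforms:transform_graph
    bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
        --in_graph=model.pb \
        --out_graph=model_opt.pb \
        --inputs='input_node' \
        --outputs='output_node' \
        --transforms='strip_unused_nodes
                      fold_constants(ignore_errors=true)
                      fold_batch_norms
                      fold_old_batch_norms
                      sort_by_execution_order'
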
......@@ -135,10 +138,10 @@ When converting a model or building a library, MACE needs to read a YAML file wh
A model deployment file contains all the information about your model(s) and the build options. There are several example
deployment files in the *MACE Model Zoo* project.
The following shows two basic usage of deployment files for Tensorflow and Caffe models.
The following shows two basic examples of deployment files for TensorFlow and Caffe models.
Modify one of them and use it for your own case.
- Tensorflow
- TensorFlow
.. literalinclude:: models/demo_app_models_tf.yml
:language: yaml
......
......@@ -191,7 +191,7 @@ def parse_args():
"""Parses command line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
"--platform", type=str, default="", help="Tensorflow or Caffe.")
"--platform", type=str, default="", help="TensorFlow or Caffe.")
parser.add_argument(
"--model_file",
type=str,
......