Commit 6a231fdb authored by liyin, committed by 叶剑武

Add CMake and ZH docs

Parent 021e1e25
Chinese Documentation
=====================
Introduction
------------
Mobile AI Compute Engine (MACE) is a deep learning inference framework optimized for heterogeneous computing devices on mobile platforms.
MACE covers common mobile computing devices (CPU, GPU, Hexagon DSP, Hexagon HTA, MTK APU) and provides a complete toolchain and documentation, so users can easily
deploy deep learning models on mobile devices. MACE has been widely used inside Xiaomi and has been thoroughly validated to deliver industry-leading performance and stability.
Framework
---------
The figure below shows the basic architecture of MACE.
.. image:: mace-arch.png
:scale: 40 %
:align: center
.. toctree::
:maxdepth: 1
:caption: Installation
:name: sec-install-ch
zh/installation/env_requirement
.. toctree::
:maxdepth: 1
:caption: Basic usage
:name: sec-user-ch
zh/user_guide/basic_usage
\ No newline at end of file
......@@ -34,13 +34,7 @@ It is usually used to measure classification accuracy. The higher the better.
where :math:`X` is the expected output (from the training platform) and :math:`X'` is the actual output (from MACE).
You can validate it by specifying `--validate` while running the model.
.. code:: sh
# Validate the correctness by comparing the results against the
# original model and framework
python tools/converter.py run --config=/path/to/your/model_deployment_file.yml --validate
You can validate it by specifying the `--validate` option while running the model.
MACE automatically validates these metrics by running models with synthetic inputs.
If you want to specify the input data to use, you can add an option in the yaml config under 'subgraphs', e.g.,
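For example, a hedged sketch (the ``.npy`` path below is a placeholder) that points validation at your own input via ``validation_inputs_data``:
.. code:: yaml

    subgraphs:
      - input_tensors:
          - input
        input_shapes:
          - 1,224,224,3
        validation_inputs_data:
          - /path/to/your/input.npy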
......@@ -84,6 +78,10 @@ When MACE crashes, a complete stacktrace is useful in debugging. But because of
the symbol table is not loaded in memory, which leads to no symbols in the stack trace.
To circumvent this problem, you can rebuild `mace_run` with the `--debug_mode` option to preserve debug symbols, e.g.,
For CMake users, set ``-DCMAKE_BUILD_TYPE=Debug`` in the cmake configuration located in ``tools/cmake/``, and rebuild the engine.
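A sketch of the CMake flow (the exact cmake invocation lives inside the per-ABI build script and may differ between versions):
.. code:: sh

    # In tools/cmake/cmake-build-armeabi-v7a.sh, append the flag to the cmake call:
    #   cmake ... -DCMAKE_BUILD_TYPE=Debug ...
    # then rebuild the engine:
    RUNTIME=GPU bash tools/cmake/cmake-build-armeabi-v7a.sh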
For Bazel users,
.. code:: sh
python tools/converter.py run --config=/path/to/config.yml --debug_mode
......@@ -131,11 +129,7 @@ The threshold can be configured through environment variable, e.g. ``MACE_CPP_MI
With VLOG, the lower the verbose level, the more likely messages are to be logged. For example, when the threshold is set
to 2, both ``VLOG(1)`` and ``VLOG(2)`` log messages will be printed, but ``VLOG(3)`` and higher won't.
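For example, for a run on the host (on Android the variable has to reach the target process, e.g. via the ``mace_run`` option shown below):
.. code:: sh

    # Print VLOG(1) and VLOG(2) messages; VLOG(3) and higher stay silent
    export MACE_CPP_MIN_VLOG_LEVEL=2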
By using ``mace_run`` tool, VLOG level can be easily set by option, e.g.,
.. code:: sh
python tools/converter.py run --config /path/to/model.yml --vlog_level=2
By using ``mace_run`` tool, VLOG level can be easily set by option, e.g., ``--vlog_level=2``
If models are run on android, you might need to use ``adb logcat`` to view logs.
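For example (the filter pattern is only a guess at the log tag; adjust it to your build):
.. code:: sh

    adb logcat | grep -i mace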
......
......@@ -14,6 +14,15 @@ MACE use [gtest](https://github.com/google/googletest) for unit tests.
* Run all unit tests defined in a Bazel target, for example, run `mace_cc_test`:
For CMake users:
```sh
python tools/python/run_target.py \
--target_abi=armeabi-v7a --target_socs=all --target_name=mace_cc_test
```
For Bazel users:
```sh
python tools/bazel_adb_run.py --target="//test/ccunit:mace_cc_test" \
--run_target=True
......@@ -22,6 +31,16 @@ MACE use [gtest](https://github.com/google/googletest) for unit tests.
* Run unit tests with [gtest](https://github.com/google/googletest) filter,
for example, run `Conv2dOpTest` unit tests:
For CMake users:
```sh
python tools/python/run_target.py \
--target_abi=armeabi-v7a --target_socs=all --target_name=mace_cc_test \
--gtest_filter=Conv2dOpTest*
```
For Bazel users:
```sh
python tools/bazel_adb_run.py --target="//test/ccunit:mace_cc_test" \
--run_target=True \
......@@ -36,6 +55,15 @@ MACE provides a micro benchmark framework for performance tuning.
* Run all micro benchmarks defined in a Bazel target, for example, run all
`mace_cc_benchmark` micro benchmarks:
For CMake users:
```sh
python tools/python/run_target.py \
--target_abi=armeabi-v7a --target_socs=all --target_name=mace_cc_benchmark
```
For Bazel users:
```sh
python tools/bazel_adb_run.py --target="//test/ccbenchmark:mace_cc_benchmark" \
--run_target=True
......@@ -44,8 +72,18 @@ MACE provides a micro benchmark framework for performance tuning.
* Run micro benchmarks with regex filter, for example, run all `CONV_2D` GPU
micro benchmarks:
For CMake users:
```sh
python tools/python/run_target.py \
--target_abi=armeabi-v7a --target_socs=all --target_name=mace_cc_benchmark \
--filter=MACE_BM_CONV_2D_.*_GPU
```
For Bazel users:
```sh
python tools/bazel_adb_run.py --target="//test/ccbenchmark:mace_cc_benchmark" \
--run_target=True \
--args="--filter=MACE_BM_CONV_2D_.*_GPU"
```
```
\ No newline at end of file
......@@ -25,7 +25,9 @@ The main documentation is organized into the following sections:
:caption: User guide
:name: sec-user
user_guide/basic_usage_cmake
user_guide/basic_usage
user_guide/advanced_usage_cmake
user_guide/advanced_usage
user_guide/benchmark
user_guide/op_lists
......@@ -50,3 +52,13 @@ The main documentation is organized into the following sections:
:name: sec-faq
faq
.. toctree::
:maxdepth: 1
:caption: Chinese
:name: sec-ch
chinese
......@@ -15,9 +15,6 @@ Required dependencies
* - Python
-
- 2.7 or 3.6
* - Bazel
- `bazel installation guide <https://docs.bazel.build/versions/master/install.html>`__
- 0.13.0
* - CMake
- Linux:``apt-get install cmake`` Mac:``brew install cmake``
- >= 3.11.3
......@@ -57,9 +54,9 @@ Optional dependencies
* - Android NDK
- `NDK installation guide <https://developer.android.com/ndk/guides/setup#install>`__
- Required by Android build, r15b, r15c, r16b, r17b
* - CMake
- apt-get install cmake
- >= 3.11.3
* - Bazel
- `bazel installation guide <https://docs.bazel.build/versions/master/install.html>`__
- 0.13.0
* - ADB
- Linux:``apt-get install android-tools-adb`` Mac:``brew cask install android-platform-tools``
- Required by Android run, >= 1.0.32
......
......@@ -63,71 +63,3 @@ and validate model correctness against original TensorFlow or Caffe results.
4.3. Benchmark
~~~~~~~~~~~~~~~~~~
MACE provides benchmark tool to get the Op level profiling result of the model.
Introduction
------------
Mobile AI Compute Engine (MACE) is a deep learning inference framework optimized for heterogeneous computing devices on mobile platforms.
MACE covers common mobile computing devices (CPU, GPU and DSP) and provides a complete toolchain and documentation, so users can easily
deploy deep learning models on mobile devices. MACE has been widely used inside Xiaomi and has been thoroughly validated to deliver industry-leading performance and stability.
Framework
---------
The figure below shows the basic architecture of MACE.
.. image:: mace-arch.png
:scale: 40 %
:align: center
MACE Model
~~~~~~~~~~~~~~~~~~
MACE defines its own model format (similar to Caffe2); the tools provided by MACE can convert Caffe/TensorFlow/ONNX models
into MACE models.
MACE Interpreter
~~~~~~~~~~~~~~~~~~
The MACE Interpreter is mainly responsible for parsing and running the neural network graph (DAG) and managing the tensors in the network.
Runtime
~~~~~~~~~~~~~~~~~~
The CPU/GPU/DSP runtimes correspond to the operator implementations for each computing device.
Workflow
------------
The figure below shows the basic workflow of MACE.
.. image:: mace-work-flow-zh.png
:scale: 60 %
:align: center
1. Configure the model deployment file (.yml)
~~~~~~~~~~~~~~~~~~~~~~~~~~
The model deployment file describes in detail the models to be deployed and the libraries to be generated; MACE uses this file to produce the corresponding library files.
2. Build the MACE library
~~~~~~~~~~~~~~~~~~
Build MACE static or shared libraries.
3. Convert the model
~~~~~~~~~~~~~~~~~~
Convert a TensorFlow, Caffe or ONNX model into a MACE model.
4.1. Deploy
~~~~~~~~~~~~~~~~~~
Integrate the library files generated in the Build stage according to your use case, then call the corresponding MACE APIs to run the model.
4.2. Run from the command line
~~~~~~~~~~~~~~~~~~
MACE provides command-line tools to run models from the command line, which can be used to measure model run time, memory usage and correctness.
4.3. Benchmark
~~~~~~~~~~~~~~~~~~
MACE provides a command-line benchmark tool that shows a fine-grained breakdown of the run time of every operator in the model.
Advanced usage
===============
Advanced usage for Bazel users
================================
This part contains the full usage of MACE.
......
Advanced usage for CMake users
===============================
This part contains the full usage of MACE.
Deployment file
---------------
There are many advanced options supported.
* **Example**
Here is an example deployment file with two models.
.. literalinclude:: models/demo_models_cmake.yml
:language: yaml
* **Configurations**
.. list-table::
:header-rows: 1
* - Options
- Usage
* - model_name
- model name should be unique if there is more than one model.
**LIMIT: if build_type is code, model_name will be used in c++ code so that model_name must comply with c++ name specification.**
* - platform
- The source framework, tensorflow or caffe.
* - model_file_path
- The path of your model file which can be local path or remote URL.
* - model_sha256_checksum
- The SHA256 checksum of the model file.
* - weight_file_path
- [optional] The path of Caffe model weights file.
* - weight_sha256_checksum
- [optional] The SHA256 checksum of Caffe model weights file.
* - subgraphs
- subgraphs key. **DO NOT EDIT**
* - input_tensors
- The input tensor name(s) (tensorflow) or top name(s) of inputs' layer (caffe).
If there is more than one tensor, use one line per tensor.
* - output_tensors
- The output tensor name(s) (tensorflow) or top name(s) of outputs' layer (caffe).
If there is more than one tensor, use one line per tensor.
* - input_shapes
- The shapes of the input tensors, default is NHWC order.
* - output_shapes
- The shapes of the output tensors, default is NHWC order.
* - input_ranges
- The numerical range of the input tensors' data, default [-1, 1]. It is only for test.
* - validation_inputs_data
- [optional] Specify Numpy validation inputs. When not provided, [-1, 1] random values will be used.
* - accuracy_validation_script
- [optional] Specify the accuracy validation script as a plugin to test accuracy, see `doc <#validate-accuracy-of-mace-model>`__.
* - validation_threshold
- [optional] Specify the similarity threshold for validation. A dict with key in 'CPU', 'GPU' and/or 'HEXAGON' and value <= 1.0.
* - backend
- The onnx backend framework for validation, could be [tensorflow, caffe2, pytorch], default is tensorflow.
* - runtime
- The running device, one of [cpu, gpu, dsp, cpu+gpu]. cpu+gpu contains CPU and GPU model definition so you can run the model on both CPU and GPU.
* - data_type
- [optional] The data type used for specified runtime. [fp16_fp32, fp32_fp32] for GPU, default is fp16_fp32, [fp32] for CPU and [uint8] for DSP.
* - input_data_types
- [optional] The input data type for specific op(eg. gather), which can be [int32, float32], default to float32.
* - input_data_formats
- [optional] The format of the input tensors, one of [NONE, NHWC, NCHW]. If the input has no data format, use NONE. If only a single format is specified, all inputs will use that format; default is NHWC order.
* - output_data_formats
- [optional] The format of the output tensors, one of [NONE, NHWC, NCHW]. If the output has no data format, use NONE. If only a single format is specified, all outputs will use that format; default is NHWC order.
* - limit_opencl_kernel_time
- [optional] Whether to split the OpenCL kernel within 1 ms to keep the UI responsive, default is 0.
* - opencl_queue_window_size
- [optional] Limit the max commands in OpenCL command queue to keep UI responsiveness, default is 0.
* - obfuscate
- [optional] Whether to obfuscate the model operator name, default to 0.
* - winograd
- [optional] Which type of winograd to use, could be [0, 2, 4]. 0 disables winograd, 2 and 4 enable it; 4 may be faster than 2 but may use more memory.
.. note::
Some command tools:
.. code:: bash
# Get device's soc info.
adb shell getprop | grep platform
# command for generating sha256_sum
sha256sum /path/to/your/file
Advanced usage
--------------
There are three common advanced use cases:
- running your model on an embedded device (ARM Linux)
- converting a model to C++ code
- tuning GPU kernels for a specific SoC
Run your model on the embedded device (ARM Linux)
-------------------------------------------------
The way to run your model on ARM Linux is nearly the same as on Android, except that you need to specify a device config file.
.. code:: bash
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --validate --device_yml=/path/to/devices.yml
There are two steps to do before running:
1. configure passwordless login
MACE uses ssh to connect to the embedded device; you should copy your public key to the device with the command below.
.. code:: bash
cat ~/.ssh/id_rsa.pub | ssh -q {user}@{ip} "cat >> ~/.ssh/authorized_keys"
2. write your own device yaml configuration file.
* **Example**
Here is a device yaml config demo.
.. literalinclude:: devices/demo_device_nanopi.yml
:language: yaml
* **Configuration**
The detailed explanation is listed in the table below.
.. list-table::
:header-rows: 1
* - Options
- Usage
* - target_abis
- The abis supported by the device; you can get them via the ``dpkg --print-architecture`` and
``dpkg --print-foreign-architectures`` commands. If more than one abi is supported,
separate them by commas.
* - target_socs
- The device SoC; you can get it from the device manual (we haven't found a way to get it from the shell).
* - models
- The device model's full name; you can get it via the ``lshw`` command (a third-party package, install it via your package manager)
and check its product value.
* - address
- Since we use ssh to connect to the device, the IP address is required.
* - username
- login username, required.
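For illustration, a hypothetical ``devices.yml`` is sketched below; the nesting and every value are placeholders, and the authoritative example is ``devices/demo_device_nanopi.yml`` referenced above.
.. code:: yaml

    # Hypothetical device config; replace all values with your own device's info.
    devices:
      nanopi:
        target_abis: [arm64, armhf]      # from dpkg --print-architecture / --print-foreign-architectures
        target_socs: RK3399              # from the device manual
        models: FriendlyElec NanoPC-T4   # product value reported by lshw
        address: 192.168.1.100           # reachable over ssh
        username: pi                     # login username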
Model Protection
--------------------------------
Model can be encrypted by obfuscation.
.. code:: bash
python tools/python/encrypt.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml
It will override ``mobilenet_v1.pb`` and ``mobilenet_v1.data``.
If you want to compile the model into a library, you should use the options ``--gencode_model --gencode_param`` to generate model code, i.e.,
.. code:: bash
python tools/python/encrypt.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --gencode_model --gencode_param
It will generate model code into ``mace/codegen/models`` and also generate a helper function ``CreateMaceEngineFromCode`` in ``mace/codegen/engine/mace_engine_factory.h`` by which you can create an engine with models built in it.
After that you can rebuild the engine.
.. code:: bash
RUNTIME=GPU RUNMODE=code bash tools/cmake/cmake-build-armeabi-v7a.sh
``RUNMODE=code`` means you compile and link model library with MACE engine.
When you test the model in code format, you should specify it in the script as follows.
.. code:: bash
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --gencode_model --gencode_param
Of course you can generate model code only and use the parameter file.
When you need to integrate the libraries into your applications, you can link `libmace_static.a` and `libmodel.a` to your target. These are under the directory
``build/cmake-build/armeabi-v7a/install/lib/``; the header files you need are under ``build/cmake-build/armeabi-v7a/install/include``.
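A minimal linking sketch, assuming ``$CXX`` points to a cross-compiler configured for your target ABI (extra system libraries such as ``-llog`` or an OpenCL stub may be required depending on the build):
.. code:: sh

    $CXX your_app.cc \
        -I build/cmake-build/armeabi-v7a/install/include \
        build/cmake-build/armeabi-v7a/install/lib/libmodel.a \
        build/cmake-build/armeabi-v7a/install/lib/libmace_static.a \
        -o your_app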
Refer to \ ``mace/tools/mace_run.cc``\ for full usage. The following lists the key steps.
.. code:: cpp
// Include the headers
#include "mace/public/mace.h"
// If the model_graph_format is code
#include "mace/public/${model_name}.h"
#include "mace/public/mace_engine_factory.h"
// ... Same with the code in basic usage
// 4. Create MaceEngine instance
std::shared_ptr<mace::MaceEngine> engine;
MaceStatus create_engine_status;
// Create Engine from compiled code
create_engine_status =
CreateMaceEngineFromCode(model_name.c_str(),
model_data_ptr, // nullptr if model_data_format is code
model_data_size, // 0 if model_data_format is code
input_names,
output_names,
device_type,
&engine);
if (create_engine_status != MaceStatus::MACE_SUCCESS) {
// Report error or fallback
}
// ... Same with the code in basic usage
Tuning for specific SoC's GPU
---------------------------------
If you want to use the GPU of a specific device, you can tune the performance for particular devices, which may bring a 1~10% performance improvement.
You can specify the `--tune` option when you want to run and tune the performance at the same time.
.. code:: bash
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --tune
It will generate a tuned OpenCL parameter binary file in the `build/mobilenet_v1/opencl` directory.
.. code:: bash
└── mobilenet_v1_tuned_opencl_parameter.MIX2S.sdm845.bin
It specifies your test platform model and SoC. You can use it in production to reduce latency on GPU.
To deploy it, rename the files generated above to avoid collisions and push them to **your own device's directory**.
Usage follows the previous procedure; the key differences are listed below.
.. code:: cpp
// Include the headers
#include "mace/public/mace.h"
// 0. Declare the device type (must be same with ``runtime`` in configuration file)
DeviceType device_type = DeviceType::GPU;
// 1. configuration
MaceStatus status;
MaceEngineConfig config(device_type);
std::shared_ptr<GPUContext> gpu_context;
const std::string storage_path ="path/to/storage";
gpu_context = GPUContextBuilder()
.SetStoragePath(storage_path)
.SetOpenCLBinaryPaths(path/to/opencl_binary_paths)
.SetOpenCLParameterPath(path/to/opencl_parameter_file)
.Finalize();
config.SetGPUContext(gpu_context);
config.SetGPUHints(
static_cast<GPUPerfHint>(GPUPerfHint::PERF_NORMAL),
static_cast<GPUPriorityHint>(GPUPriorityHint::PRIORITY_LOW));
// ... Same with the code in basic usage.
Multi Model Support (optional)
--------------------------------
If multiple models are configured in the config file, testing will generate more than one tuned parameter file.
Then you need to merge them together.
.. code:: bash
python tools/python/gen_opencl.py
After that, it will generate one set of files into `build/opencl` directory.
.. code:: bash
├── compiled_opencl_kernel.bin
└── tuned_opencl_parameter.bin
You can also generate code into the engine by specifying ``--gencode``, after which you should rebuild the engine.
Validate accuracy of MACE model
-------------------------------
MACE supports a **python validation script** as a plugin to test accuracy; the plugin script can be used for the two purposes below.
1. Test the **accuracy (like Top-1)** of a MACE model (specifically a quantized model) converted from another framework (like tensorflow).
2. Show some real output if you want to see it.
The script defines some interfaces like `preprocess` and `postprocess` to deal with input/output and calculate the accuracy;
you can refer to the `sample code <https://github.com/XiaoMi/mace/tree/master/tools/accuracy_validator.py>`__ for details.
The sample code shows how to calculate the Top-1 accuracy with the ImageNet validation dataset.
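A rough sketch of such a plugin is shown below; the class name, method signatures and the ``load_image`` helper are assumptions made for illustration only, and the linked sample code remains the authoritative interface.
.. code:: python

    # Hypothetical accuracy-validation plugin; see tools/accuracy_validator.py
    # for the interface MACE actually expects.
    import numpy as np

    class AccuracyValidator(object):
        def __init__(self):
            self.correct = 0
            self.total = 0

        def preprocess(self, image_path, input_shape):
            # Decode one sample into the model's NHWC input layout and
            # scale pixels to [-1, 1].
            img = load_image(image_path, input_shape)  # hypothetical helper
            return (img / 127.5) - 1.0

        def postprocess(self, label, output):
            # Count a hit when the arg-max class matches the ground truth.
            self.total += 1
            if int(np.argmax(output)) == label:
                self.correct += 1

        def result(self):
            # Top-1 accuracy over all processed samples.
            return float(self.correct) / max(self.total, 1)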
Reduce Library Size
-------------------
Remove the registration of the ops unused by your models in ``mace/ops/ops_register.cc``,
which will reduce the library size significantly. The final binary only links the registered ops' code.
.. code:: cpp
#include "mace/ops/ops_register.h"
namespace mace {
namespace ops {
// Just leave the ops used in your models
...
} // namespace ops
OpRegistry::OpRegistry() : OpRegistryBase() {
// Just leave the ops used in your models
...
ops::RegisterMyCustomOp(this);
...
}
} // namespace mace
Reduce Model Size
-------------------
Model file size can be a bottleneck for the deployment of neural networks on mobile devices,
so MACE provides several ways to reduce the model size with no or little performance or accuracy degradation.
**1. Save model weights in half-precision floating point format**
The default data type of a regular model is float (32bit). To reduce the model weights size,
half (16bit) can be used to reduce it by half with negligible accuracy degradation.
For CPU, ``data_type`` can be specified as ``fp16_fp32`` in the deployment file to save the weights in half and actual inference in float.
For GPU, ``fp16_fp32`` is the default. GPU ops take half as inputs and outputs, while kernel execution is in float.
**2. Save model weights in quantized fixed point format**
Weights of convolutional (excluding depthwise) and fully connected layers take up a major part of model size.
These weights can be quantized to 8bit to reduce the size to a quarter, whereas the accuracy usually decreases only by 1%-3%.
For example, the top-1 accuracy of MobileNetV1 after quantization of weights is 68.2% on the ImageNet validation set.
``quantize_large_weights`` can be specified as 1 in the deployment file to save these weights in 8bit and actual inference in float.
It can be used for both CPU and GPU.
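For example, a sketch of the corresponding deployment-file entries (one or both options, depending on your needs; the placement mirrors the other per-model options documented above):
.. code:: yaml

    models:
      mobilenet_v1:
        platform: tensorflow
        # ... model_file_path, subgraphs, etc. as in the basic example
        runtime: cpu
        data_type: fp16_fp32         # store weights in half precision
        quantize_large_weights: 1    # 8-bit weights for conv / fully-connected layers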
Basic usage
============
Basic usage for Bazel users
============================
Build and run an example model
......
Basic usage for CMake users
=============================
First of all, make sure the environment has been set up correctly (refer to :doc:`../installation/env_requirement`).
Clear Workspace
-------------------------------
Before you do anything, clear the workspace used by build and test process.
.. code:: sh
tools/clear_workspace.sh
Build Engine
-------------------------------
Please make sure you have CMake installed.
.. code:: sh
RUNTIME=GPU bash tools/cmake/cmake-build-armeabi-v7a.sh
which generates libraries in ``build/cmake-build/armeabi-v7a``; you can use either the static libraries or the ``libmace.so`` shared library.
You can also build for other target abis: ``arm64-v8a``, ``arm-linux-gnueabihf``, ``aarch64-linux-gnu``, ``host``;
and runtimes: ``GPU``, ``HEXAGON``, ``HTA``, ``APU``.
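For example (assuming the per-ABI helper scripts in ``tools/cmake/`` follow the same naming pattern; check that directory for the exact file names):
.. code:: sh

    # GPU-enabled engine for 64-bit Android; omit RUNTIME for a CPU-only build
    RUNTIME=GPU bash tools/cmake/cmake-build-arm64-v8a.sh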
Model Conversion
-------------------------------
When you have prepared your model, the first thing to do is write a model config in YAML format.
.. code:: yaml
models:
mobilenet_v1:
platform: tensorflow
model_file_path: https://cnbj1.fds.api.xiaomi.com/mace/miai-models/mobilenet-v1/mobilenet-v1-1.0.pb
model_sha256_checksum: 71b10f540ece33c49a7b51f5d4095fc9bd78ce46ebf0300487b2ee23d71294e6
subgraphs:
- input_tensors:
- input
input_shapes:
- 1,224,224,3
output_tensors:
- MobilenetV1/Predictions/Reshape_1
output_shapes:
- 1,1001
runtime: gpu
The following steps generate output to the ``build`` directory, which is the default build and test workspace.
Suppose you have the model config in ``../mace-models/mobilenet-v1/mobilenet-v1.yml``. Then run
.. code:: sh
python tools/python/convert.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml
which generates 4 files in ``build/mobilenet_v1/model/``:
.. code:: sh
├── mobilenet_v1.pb (model file)
├── mobilenet_v1.data (param file)
├── mobilenet_v1_index.html (visualization page, you can open it in browser)
└── mobilenet_v1.pb_txt (model text file, which can be for debug use)
MACE also supports other platforms: ``caffe``, ``onnx``.
Beyond GPU, users can specify ``cpu``, ``dsp`` to run on other target devices.
Model Test and Benchmark
-------------------------------
We provide simple tools to test and benchmark your model.
After the model is converted, simply run
.. code:: sh
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --validate
Or benchmark the model
.. code:: sh
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --benchmark
It will test your model on the device configured in the model config (``runtime``).
You can also test on other devices by specifying ``--runtime=cpu (dsp/hta/apu)`` when you run the test, if you previously built the engine for that device.
The log will be shown if ``--vlog_level=2`` is specified.
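For example, a sketch combining the options above (adjust the config path to your own model):
.. code:: sh

    python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml \
        --validate --runtime=cpu --vlog_level=2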
Deploy your model into applications
--------------------------------------
Please refer to \ ``mace/tools/mace_run.cc``\ for full usage. The following lists the key steps.
.. code:: cpp
// Include the headers
#include "mace/public/mace.h"
// 0. Declare the device type (must be same with ``runtime`` in configuration file)
DeviceType device_type = DeviceType::GPU;
// 1. configuration
MaceStatus status;
MaceEngineConfig config(device_type);
std::shared_ptr<GPUContext> gpu_context;
// Set the path to store compiled OpenCL kernel binaries.
// Please make sure your application has read/write rights to the directory.
// This is used to reduce the initialization time, since compiling is slow.
// It's suggested to set this even when a pre-compiled OpenCL program file is provided,
// because an OpenCL version upgrade may also lead to kernel recompilation.
const std::string storage_path ="path/to/storage";
gpu_context = GPUContextBuilder()
.SetStoragePath(storage_path)
.Finalize();
config.SetGPUContext(gpu_context);
config.SetGPUHints(
static_cast<GPUPerfHint>(GPUPerfHint::PERF_NORMAL),
static_cast<GPUPriorityHint>(GPUPriorityHint::PRIORITY_LOW));
// 2. Define the input and output tensor names.
std::vector<std::string> input_names = {...};
std::vector<std::string> output_names = {...};
// 3. Create MaceEngine instance
std::shared_ptr<mace::MaceEngine> engine;
MaceStatus create_engine_status;
// Create Engine from model file
create_engine_status =
CreateMaceEngineFromProto(model_graph_proto,
model_graph_proto_size,
model_weights_data,
model_weights_data_size,
input_names,
output_names,
device_type,
&engine);
if (create_engine_status != MaceStatus::MACE_SUCCESS) {
// fall back to other strategy.
}
// 4. Create Input and Output tensor buffers
std::map<std::string, mace::MaceTensor> inputs;
std::map<std::string, mace::MaceTensor> outputs;
for (size_t i = 0; i < input_count; ++i) {
// Allocate input and output
int64_t input_size =
std::accumulate(input_shapes[i].begin(), input_shapes[i].end(), 1,
std::multiplies<int64_t>());
auto buffer_in = std::shared_ptr<float>(new float[input_size],
std::default_delete<float[]>());
// Load input here
// ...
inputs[input_names[i]] = mace::MaceTensor(input_shapes[i], buffer_in);
}
for (size_t i = 0; i < output_count; ++i) {
int64_t output_size =
std::accumulate(output_shapes[i].begin(), output_shapes[i].end(), 1,
std::multiplies<int64_t>());
auto buffer_out = std::shared_ptr<float>(new float[output_size],
std::default_delete<float[]>());
outputs[output_names[i]] = mace::MaceTensor(output_shapes[i], buffer_out);
}
// 5. Run the model
status = engine->Run(inputs, &outputs);
More details are in :doc:`advanced_usage`.
......@@ -18,9 +18,20 @@ Operator Benchmark is used for test and optimize the performance of specific ope
Usage
=====
For CMake users:
.. code:: bash
python tools/python/run_target.py \
--target_abi=armeabi-v7a --target_socs=all --target_name=mace_cc_benchmark \
--filter=.*BM_CONV.*
or for Bazel users:
.. code:: bash
python tools/bazel_adb_run.py --target="//test/ccbenchmark:mace_cc_benchmark" --run_target=True --args="--filter=.*BM_CONV.*"
python tools/bazel_adb_run.py --target="//test/ccbenchmark:mace_cc_benchmark" \
--run_target=True --args="--filter=.*BM_CONV.*"
======
Output
......@@ -66,9 +77,18 @@ This tool could record the running time of the model and the detailed running in
Usage
=====
For CMake users:
.. code:: bash
python tools/python/run_model.py --config=/path/to/your/model_deployment.yml --benchmark
or for Bazel users:
.. code:: bash
python tools/converter.py run --config=/path/to/your/model_deployment.yml --benchmark
python tools/python/converter.py run --config=/path/to/your/model_deployment.yml --benchmark
======
Output
......
models:
mobilenet_v1:
platform: tensorflow
model_file_path: https://cnbj1.fds.api.xiaomi.com/mace/miai-models/mobilenet-v1/mobilenet-v1-1.0.pb
model_sha256_checksum: 71b10f540ece33c49a7b51f5d4095fc9bd78ce46ebf0300487b2ee23d71294e6
subgraphs:
- input_tensors:
- input
input_shapes:
- 1,224,224,3
output_tensors:
- MobilenetV1/Predictions/Reshape_1
output_shapes:
- 1,1001
validation_inputs_data:
- https://cnbj1.fds.api.xiaomi.com/mace/inputs/dog.npy
runtime: cpu+gpu
limit_opencl_kernel_time: 0
obfuscate: 0
winograd: 0
squeezenet_v11:
platform: caffe
model_file_path: http://cnbj1-inner-fds.api.xiaomi.net/mace/mace-models/squeezenet/SqueezeNet_v1.1/model.prototxt
weight_file_path: http://cnbj1-inner-fds.api.xiaomi.net/mace/mace-models/squeezenet/SqueezeNet_v1.1/weight.caffemodel
model_sha256_checksum: 625c952063da1569e22d2f499dc454952244d42cd8feca61f05502566e70ae1c
weight_sha256_checksum: 72b912ace512e8621f8ff168a7d72af55910d3c7c9445af8dfbff4c2ee960142
subgraphs:
- input_tensors:
- data
input_shapes:
- 1,227,227,3
output_tensors:
- prob
output_shapes:
- 1,1,1,1000
accuracy_validation_script:
- path/to/your/script
runtime: cpu+gpu
limit_opencl_kernel_time: 0
obfuscate: 0
winograd: 0
......@@ -30,12 +30,7 @@ This method requires developer to calculate tensor range of each activation laye
MACE provides tools to do statistics with following steps:
1. Convert original model to run on CPU host without obfuscation (by setting `target_abis` to `host`, `runtime` to `cpu`,
and `obfuscate` to `0`, appending `:0` to `output_tensors` if missing in yaml config). E.g.,
.. code:: sh
python tools/converter.py convert --config ../mace-models/inception-v3/inception-v3.yml
and `obfuscate` to `0`, appending `:0` to `output_tensors` if missing in yaml config).
2. Log tensor range of each activation layer by inferring several samples on CPU host. Sample inputs should be
representative to calculate the ranges of each layer properly.
......@@ -52,6 +47,11 @@ MACE provides tools to do statistics with following steps:
rename 's/^/input/' *
# Run with input tensors
# For CMake users:
python tools/python/run_model.py --config ../mace-models/inception-v3/inception-v3.yml
--quantize_stat --input_dir /path/to/directory/of/input/tensors > range_log
# For Bazel users:
python tools/converter.py run --config ../mace-models/inception-v3/inception-v3.yml
--quantize_stat --input_dir /path/to/directory/of/input/tensors > range_log
......
Environment requirements
========================
MACE requires the following dependencies:
Required dependencies
---------------------
.. list-table::
:header-rows: 1
* - Software
- Installation command
- Tested version
* - Python
-
- 2.7 or 3.6
* - CMake
- Linux:``apt-get install cmake`` Mac:``brew install cmake``
- >= 3.11.3
* - Jinja2
- pip install jinja2==2.10
- 2.10
* - PyYaml
- pip install pyyaml==3.12
- 3.12.0
* - sh
- pip install sh==1.12.14
- 1.12.14
* - Numpy
- pip install numpy==1.14.0
- Required only for testing
Optional dependencies
---------------------
.. list-table::
:header-rows: 1
* - Software
- Installation command
- Version and remarks
* - Android NDK
- `NDK installation guide <https://developer.android.com/ndk/guides/setup#install>`__
- Required by Android build, r15b, r15c, r16b, r17b
* - CMake
- apt-get install cmake
- >= 3.11.3
* - ADB
- Linux:``apt-get install android-tools-adb`` Mac:``brew cask install android-platform-tools``
- Required by Android run, >= 1.0.32
* - TensorFlow
- pip install tensorflow==1.8.0
- Required by TensorFlow model conversion
* - Docker
- `docker installation guide <https://docs.docker.com/install/linux/docker-ce/ubuntu/#set-up-the-repository>`__
- Required by docker mode users
* - Scipy
- pip install scipy==1.0.0
- Required by model validation
* - FileLock
- pip install filelock==3.0.0
- Required by Android run
* - ONNX
- pip install onnx==1.5.0
- Required by ONNX model conversion
For the Python dependencies, you can simply run:
.. code:: sh
pip install -U --user -r setup/optionals.txt
.. note::
- For Android development, the environment variable `ANDROID_NDK_HOME` must be set, e.g. ``export ANDROID_NDK_HOME=/path/to/ndk``
- Mac users should install Homebrew first, set `ANDROID_NDK_HOME` in ``/etc/bashrc``, and then run ``source /etc/bashrc``.
Basic usage
=============================
Make sure the required environment has been set up first (refer to :doc:`../installation/env_requirement`).
Clear workspace
-------------------------------
Before building, clear the workspace used by the build and test process:
.. code:: sh
tools/clear_workspace.sh
Build engine
-------------------------------
Make sure CMake is installed, then run:
.. code:: sh
RUNTIME=GPU bash tools/cmake/cmake-build-armeabi-v7a.sh
The build output is installed to ``build/cmake-build/armeabi-v7a``; you can use either the libmace static library or the shared library.
Besides armeabi-v7a, other supported ABIs are: ``arm64-v8a``, ``arm-linux-gnueabihf``, ``aarch64-linux-gnu``, ``host``;
supported target runtimes (RUNTIME) are: ``GPU``, ``HEXAGON``, ``HTA``, ``APU``.
Model conversion
-------------------------------
Write a YAML config file for your model:
.. code:: yaml
models:
mobilenet_v1:
platform: tensorflow
model_file_path: https://cnbj1.fds.api.xiaomi.com/mace/miai-models/mobilenet-v1/mobilenet-v1-1.0.pb
model_sha256_checksum: 71b10f540ece33c49a7b51f5d4095fc9bd78ce46ebf0300487b2ee23d71294e6
subgraphs:
- input_tensors:
- input
input_shapes:
- 1,224,224,3
output_tensors:
- MobilenetV1/Predictions/Reshape_1
output_shapes:
- 1,1001
runtime: gpu
Suppose the model config file is at ``../mace-models/mobilenet-v1/mobilenet-v1.yml``. Then run:
.. code:: sh
python tools/python/convert.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml
which generates 4 files in ``build/mobilenet_v1/model/``:
.. code:: sh
├── mobilenet_v1.pb (model file)
├── mobilenet_v1.data (param file)
├── mobilenet_v1_index.html (visualization page, you can open it in a browser)
└── mobilenet_v1.pb_txt (model text file, useful for debugging)
Besides tensorflow, other supported platforms are: ``caffe``, ``onnx``;
besides gpu, the runtime can also be set to ``cpu`` or ``dsp``.
Model test and benchmark
-------------------------------
We provide tools for model testing and benchmarking.
After the model is converted, run the following command to test it:
.. code:: sh
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --validate
or run the following command to benchmark it:
.. code:: sh
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --benchmark
These commands automatically test the model on the target device; if you test on a mobile device, make sure it is connected.
For more detailed logs, raise the log level, e.g. by specifying ``--vlog_level=2``.
Deploy the model into your application
--------------------------------------
Refer to \ ``mace/tools/mace_run.cc``\ for more details. The key steps are outlined below:
.. code:: cpp
// Include the headers
#include "mace/public/mace.h"
// 0. Declare the device type (must match ``runtime`` in the configuration file)
DeviceType device_type = DeviceType::GPU;
// 1. Configuration
MaceStatus status;
MaceEngineConfig config(device_type);
std::shared_ptr<GPUContext> gpu_context;
// Set the path to store compiled OpenCL kernel binaries.
// Please make sure your application has read/write rights to the directory.
// This is used to reduce the initialization time, since compiling is slow.
// It's suggested to set this even when a pre-compiled OpenCL program file is provided,
// because an OpenCL version upgrade may also lead to kernel recompilation.
const std::string storage_path ="path/to/storage";
gpu_context = GPUContextBuilder()
.SetStoragePath(storage_path)
.Finalize();
config.SetGPUContext(gpu_context);
config.SetGPUHints(
static_cast<GPUPerfHint>(GPUPerfHint::PERF_NORMAL),
static_cast<GPUPriorityHint>(GPUPriorityHint::PRIORITY_LOW));
// 2. Define the input and output tensor names
std::vector<std::string> input_names = {...};
std::vector<std::string> output_names = {...};
// 3. Create MaceEngine instance
std::shared_ptr<mace::MaceEngine> engine;
MaceStatus create_engine_status;
create_engine_status =
CreateMaceEngineFromProto(model_graph_proto,
model_graph_proto_size,
model_weights_data,
model_weights_data_size,
input_names,
output_names,
device_type,
&engine);
if (create_engine_status != MaceStatus::MACE_SUCCESS) {
// fall back to other strategy.
}
// 4. Create input and output tensor buffers
std::map<std::string, mace::MaceTensor> inputs;
std::map<std::string, mace::MaceTensor> outputs;
for (size_t i = 0; i < input_count; ++i) {
// Allocate input and output
int64_t input_size =
std::accumulate(input_shapes[i].begin(), input_shapes[i].end(), 1,
std::multiplies<int64_t>());
auto buffer_in = std::shared_ptr<float>(new float[input_size],
std::default_delete<float[]>());
// Load input data here
// ...
inputs[input_names[i]] = mace::MaceTensor(input_shapes[i], buffer_in);
}
for (size_t i = 0; i < output_count; ++i) {
int64_t output_size =
std::accumulate(output_shapes[i].begin(), output_shapes[i].end(), 1,
std::multiplies<int64_t>());
auto buffer_out = std::shared_ptr<float>(new float[output_size],
std::default_delete<float[]>());
outputs[output_names[i]] = mace::MaceTensor(output_shapes[i], buffer_out);
}
// 5. Run the model
status = engine->Run(inputs, &outputs);
More details are in :doc:`../../user_guide/advanced_usage_cmake`.
# MACE Build and Test Tools
## Clear Workspace
Before you do anything, clear the workspace used by build and test process.
```bash
tools/clear_workspace.sh
```
## Build Engine
Please make sure you have CMake installed.
```bash
RUNTIME=GPU bash tools/cmake/cmake-build-armeabi-v7a.sh
```
which generates libraries in `build/cmake-build/armeabi-v7a`; you can use either the static libraries or the `libmace.so` shared library.
You can also build for other target ABIs.
The default build command builds an engine that runs on CPU; you can modify the cmake files to support other hardware, or just set an environment variable before building:
```bash
RUNTIME=GPU   # or HEXAGON, HTA, APU
```
## Model Conversion
When you have prepared your model, the first thing to do is write a model config.
```yaml
models:
mobilenet_v1:
platform: tensorflow
model_file_path: https://cnbj1.fds.api.xiaomi.com/mace/miai-models/mobilenet-v1/mobilenet-v1-1.0.pb
model_sha256_checksum: 71b10f540ece33c49a7b51f5d4095fc9bd78ce46ebf0300487b2ee23d71294e6
subgraphs:
- input_tensors:
- input
input_shapes:
- 1,224,224,3
output_tensors:
- MobilenetV1/Predictions/Reshape_1
output_shapes:
- 1,1001
runtime: gpu
```
The following steps generate output to the `build` directory, which is the default build and test workspace.
Suppose you have the model config in `../mace-models/mobilenet-v1/mobilenet-v1.yml`. Then run
```bash
python tools/python/convert.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml
```
which generates 4 files in `build/mobilenet_v1/model/`:
```
├── mobilenet_v1.pb (model file)
├── mobilenet_v1.data (param file)
├── mobilenet_v1_index.html (visualization page, you can open it in browser)
└── mobilenet_v1.pb_txt (model text file, which can be for debug use)
```
## Model Test and Benchmark
After the model is converted, simply run
```bash
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --validate
```
Or benchmark the model
```bash
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --benchmark
```
It will test your model on the device configured in the model config (`runtime`).
You can also test on other devices by specifying `--runtime=cpu (dsp/hta/apu)` if you previously built the engine for that device.
The log will be shown if `--vlog_level=2` is specified.
## Encrypt Model (optional)
Model can be encrypted by obfuscation.
```bash
python tools/python/encrypt.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml
```
It will override `mobilenet_v1.pb` and `mobilenet_v1.data`.
If you want to compile the model into a library, you should use the options `--gencode_model --gencode_param` to generate model code, i.e.,
```bash
python tools/python/encrypt.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --gencode_model --gencode_param
```
It will generate model code into `mace/codegen/models` and also generate a helper function `CreateMaceEngineFromCode` in `mace/codegen/engine/mace_engine_factory.h` by which you can create an engine with models built in it.
After that you can rebuild the engine.
```bash
RUNTIME=GPU RUNMODE=code bash tools/cmake/cmake-build-armeabi-v7a.sh
```
`RUNMODE=code` means you compile and link model library with MACE engine.
When you test the model in code format, you should specify it in the script as follows.
```bash
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --gencode_model --gencode_param
```
Of course you can generate model code only and use the parameter file.
## Precompile OpenCL (optional)
After you test the model on GPU, a compiled OpenCL binary file will be generated automatically in the `build/mobilenet_v1/opencl` directory.
```bash
└── mobilenet_v1_compiled_opencl_kernel.MIX2S.sdm845.bin
```
It specifies your test platform model and SoC. You can use it in production to accelerate the initialization.
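To use it at runtime, you can point the GPU context at the pushed file. The snippet below is a sketch based on the `GPUContextBuilder` API from the user guide; the on-device paths are placeholders.
```cpp
// Sketch: load the precompiled OpenCL binary pushed to the device.
std::shared_ptr<GPUContext> gpu_context = GPUContextBuilder()
    .SetStoragePath("/data/local/tmp/mace_storage")  // placeholder path
    .SetOpenCLBinaryPaths({"/data/local/tmp/mobilenet_v1_compiled_opencl_kernel.MIX2S.sdm845.bin"})
    .Finalize();
MaceEngineConfig config(DeviceType::GPU);
config.SetGPUContext(gpu_context);
```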
## Auto Tune OpenCL kernels (optional)
MACE can auto-tune the OpenCL kernels used by models. You can specify the `--tune` option.
```bash
python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --tune
```
It will generate a tuned OpenCL parameter binary file in the `build/mobilenet_v1/opencl` directory.
```bash
└── mobilenet_v1_tuned_opencl_parameter.MIX2S.sdm845.bin
```
It specifies your test platform model and SoC. You can use it in production to reduce latency on GPU.
## Multi Model Support (optional)
If multiple models are configured in the config file, testing will generate more than one tuned parameter file.
Then you need to merge them together.
```bash
python tools/python/gen_opencl.py
```
After that, it will generate one set of files into `build/opencl` directory.
```bash
├── compiled_opencl_kernel.bin
└── tuned_opencl_parameter.bin
```
You can also generate code into the engine by specifying `--gencode`, after which you should rebuild the engine.
......@@ -18,7 +18,7 @@ Internal tool for mace_cc_benchmark, mace_cc_test:
python tools/python/run_target.py \
--target_abi=armeabi-v7a --target_socs=all --target_name=mace_cc_test \
--gtest_filter=EnvTest.* --envs="MACE_CPP_MIN_VLOG_LEVEL=5"
--gtest_filter=EnvTest.* --envs="MACE_CPP_MIN_VLOG_LEVEL=5
"""
from __future__ import absolute_import
......