Here is an example deployment file with two models.
.. literalinclude:: models/demo_models_cmake.yml
    :language: yaml
* **Configurations**

.. list-table::
    :header-rows: 1

    * - Options
      - Usage
    * - model_name
      - The model name should be unique if there is more than one model.
        **LIMIT: if build_type is code, model_name will be used in C++ source code, so it must be a valid C++ identifier.**
    * - platform
      - The source framework, tensorflow or caffe.
    * - model_file_path
      - The path of your model file; it can be a local path or a remote URL.
    * - model_sha256_checksum
      - The SHA256 checksum of the model file.
    * - weight_file_path
      - [optional] The path of the Caffe model weights file.
    * - weight_sha256_checksum
      - [optional] The SHA256 checksum of the Caffe model weights file.
    * - subgraphs
      - subgraphs key. **DO NOT EDIT**
    * - input_tensors
      - The input tensor name(s) (tensorflow) or top name(s) of the input layers (caffe).
        If there is more than one tensor, put each tensor on its own line.
    * - output_tensors
      - The output tensor name(s) (tensorflow) or top name(s) of the output layers (caffe).
        If there is more than one tensor, put each tensor on its own line.
    * - input_shapes
      - The shapes of the input tensors, in NHWC order by default.
    * - output_shapes
      - The shapes of the output tensors, in NHWC order by default.
    * - input_ranges
      - The numerical range of the input tensors' data, default [-1, 1]. Only used for testing.
    * - validation_inputs_data
      - [optional] Specify Numpy validation inputs. When not provided, random values in [-1, 1] will be used.
    * - accuracy_validation_script
      - [optional] Specify an accuracy validation script as a plugin to test accuracy, see `doc <#validate-accuracy-of-mace-model>`__.
    * - validation_threshold
      - [optional] Specify the similarity threshold for validation: a dict whose keys are 'CPU', 'GPU' and/or 'HEXAGON' and whose values are <= 1.0.
    * - backend
      - The onnx backend framework for validation, one of [tensorflow, caffe2, pytorch]; default is tensorflow.
    * - runtime
      - The running device, one of [cpu, gpu, dsp, cpu+gpu]. cpu+gpu bundles both the CPU and GPU model definitions, so you can run the model on either CPU or GPU.
    * - data_type
      - [optional] The data type used by the specified runtime: [fp16_fp32, fp32_fp32] for GPU (default is fp16_fp32), [fp32] for CPU and [uint8] for DSP.
    * - input_data_types
      - [optional] The input data type for specific ops (e.g. gather); one of [int32, float32], default is float32.
    * - input_data_formats
      - [optional] The format of the input tensors, one of [NONE, NHWC, NCHW]. If an input has no data format, use NONE. If only a single format is specified, all inputs use that format; default is NHWC.
    * - output_data_formats
      - [optional] The format of the output tensors, one of [NONE, NHWC, NCHW]. If an output has no data format, use NONE. If only a single format is specified, all outputs use that format; default is NHWC.
    * - limit_opencl_kernel_time
      - [optional] Whether to split OpenCL kernels so that each runs within 1 ms, to keep the UI responsive; default is 0.
    * - opencl_queue_window_size
      - [optional] Limit the maximum number of commands in the OpenCL command queue, to keep the UI responsive; default is 0.
    * - obfuscate
      - [optional] Whether to obfuscate the model operator names; default is 0.
    * - winograd
      - [optional] Which Winograd variant to use, one of [0, 2, 4]: 0 disables Winograd; 2 and 4 enable it, where 4 may be faster than 2 but may use more memory.
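For reference, here is a stripped-down sketch of one entry under the ``models`` key, using the options above (the model name, tensor names, URL, shapes and the checksum placeholder are all illustrative; see the demo file above for a complete example):

.. code:: yaml

    models:
      mobilenet_v1:  # model_name: must be unique, and a valid C++ identifier if build_type is code
        platform: tensorflow
        model_file_path: https://example.com/models/mobilenet-v1.pb  # local path or remote URL
        model_sha256_checksum: <output of sha256sum mobilenet-v1.pb>
        subgraphs:
          - input_tensors:
              - input
            input_shapes:
              - 1,224,224,3
            output_tensors:
              - MobilenetV1/Predictions/Reshape_1
            output_shapes:
              - 1,1001
        runtime: cpu+gpu
        winograd: 0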
.. note::

    Some useful commands:

    .. code:: bash

        # Get the device's SoC info.
        adb shell getprop | grep platform

        # Generate the sha256 checksum of a file.
        sha256sum /path/to/your/file
Advanced usage
--------------

There are three common advanced use cases:

- running your model on an embedded device (ARM Linux)
- converting a model to C++ code
- tuning GPU kernels for a specific SoC
Run your model on the embedded device (ARM Linux)
--------------------------------------------------

Running your model on ARM Linux is nearly the same as on Android, except that you need to specify a device configuration file, as sketched below.
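For reference, a device configuration file typically has the shape sketched here (the device name, SoC, address and username are illustrative examples; consult the MACE documentation for the exact schema):

.. code:: yaml

    # Illustrative device config; all values below are examples.
    devices:
      nanopi:
        target_abis: [arm64-v8a, armhf]
        target_socs: RK3399
        models: Nanopi M4
        address: 10.0.0.0
        username: user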
Convert model(s) to C++ code
----------------------------

Converting a model to C++ code generates the model code into ``mace/codegen/models``, and also generates a helper function ``CreateMaceEngineFromCode`` in ``mace/codegen/engine/mace_engine_factory.h``, with which you can create an engine with the models built in.
Of course, you can also generate only the model code and keep using a parameter file; see the conversion sketch below.
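As a sketch, the conversion is driven by the deployment file (the script path follows the MACE cmake workflow; the config path is illustrative):

.. code:: bash

    # With model_graph_format set to code in the deployment file, this
    # also emits the C++ model code under mace/codegen/models.
    python tools/python/convert.py --config /path/to/your/deployment.yml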
When you need to integrate the libraries into your application, link ``libmace_static.a`` and ``libmodel.a`` into your target. Both are under ``build/cmake-build/armeabi-v7a/install/lib/``, and the header files you need are under ``build/cmake-build/armeabi-v7a/install/include``.
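For instance, a link step might look like the following sketch (the source file name and toolchain are assumptions; ``libmodel.a`` is placed before ``libmace_static.a`` so the generated model code can resolve its dependencies on the MACE runtime):

.. code:: bash

    # Illustrative link command for an armeabi-v7a build; your NDK
    # toolchain and any extra system libraries may differ.
    ${CXX} example_app.cc \
        -I build/cmake-build/armeabi-v7a/install/include \
        build/cmake-build/armeabi-v7a/install/lib/libmodel.a \
        build/cmake-build/armeabi-v7a/install/lib/libmace_static.a \
        -o example_app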
Refer to ``mace/tools/mace_run.cc`` for the full usage. The following lists the key steps.
.. code:: cpp

    // Include the headers
    #include "mace/public/mace.h"
    // If the model_graph_format is code
    #include "mace/public/${model_name}.h"
    #include "mace/public/mace_engine_factory.h"

    // ... Same with the code in basic usage

    // 4. Create MaceEngine instance
    std::shared_ptr<mace::MaceEngine> engine;
    MaceStatus create_engine_status;

    // Create Engine from compiled code
    create_engine_status =
        CreateMaceEngineFromCode(model_name.c_str(),
                                 model_data_ptr,   // nullptr if model_data_format is code
                                 model_data_size,  // 0 if model_data_format is code
                                 input_names,
                                 output_names,
                                 device_type,
                                 &engine);
    if (create_engine_status != MaceStatus::MACE_SUCCESS) {
      // Report error or fallback
    }

    // ... Same with the code in basic usage
Tuning for a specific SoC's GPU
-------------------------------

If you want to use the GPU of a specific device, you can tune the performance for that particular device, which may yield a 1~10% performance improvement.
Specify the ``--tune`` option when you want to run the model and tune its performance at the same time, as sketched below.
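As a sketch (assuming the MACE cmake tooling under ``tools/python``; the config path is illustrative):

.. code:: bash

    # Illustrative: run the model and tune the OpenCL kernels in one pass.
    python tools/python/run_model.py --config /path/to/your/deployment.yml --tune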
The CMake build generates libraries in ``build/cmake-build/armeabi-v7a``; you can use either the static libraries or the ``libmace.so`` shared library.
You can also build for other target ABIs.
The default build command builds an engine that runs on the CPU. You can modify the CMake files to support other hardware, or simply set an environment variable before building:

.. code:: bash

    RUNTIME: GPU/HEXAGON/HTA/APU
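For example, a GPU-capable build could be launched as follows (the build script path is an assumption based on the MACE cmake workflow):

.. code:: bash

    # Illustrative: select the GPU runtime before building.
    RUNTIME=GPU bash tools/cmake/cmake-build-armeabi-v7a.sh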