Using the AllOpsResolver will link all the TFLM operators into the executable, which will add significantly to the memory footprint.
The MicroMutableOpResolver instead includes only the operators explicitly specified in source code. This generally requires manually finding out which operators are used in the model through the use of a visualization tool, which may be impractical in some cases.

This script will automatically generate a MicroMutableOpResolver with only the used operators for a given model or set of models.
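Conceptually, the generation step is string templating: collect the names of the operators the model actually uses, then emit a header whose resolver registers exactly those operators. A minimal Python sketch of the idea (the function name and the emitted layout are illustrative, not the script's actual output):

```python
def generate_resolver_header(op_names):
    """Emit a C++ header snippet registering exactly the given TFLM operators."""
    n = len(op_names)
    lines = [
        '#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"',
        "",
        f"inline tflite::MicroMutableOpResolver<{n}> get_resolver() {{",
        f"  tflite::MicroMutableOpResolver<{n}> resolver;",
    ]
    # One AddXxx() registration per operator actually used by the model.
    lines += [f"  resolver.Add{op}();" for op in op_names]
    lines += ["  return resolver;", "}"]
    return "\n".join(lines)

print(generate_resolver_header(["Conv2D", "FullyConnected", "Softmax"]))
```

Because only the listed kernels are referenced, the linker can drop all the others, which is what reduces the footprint compared to the AllOpsResolver.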
## How to run
...
...
The generated header file can then be included in the application and used like this:

...

```
bazel run tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver:generate_micro_mutable_op_resolver_from_model_test -- \
  --input_tflite_file=<path to tflite file> --output_dir=<output directory>
```
Note that the generated test must be located somewhere within the tflite-micro tree, and it also needs to be run from there. Also note that Bazel expects absolute paths.
Example:
```
cd /tmp/tflite-micro
bazel run gen_dir/person_detect:micro_mutable_op_resolver_test
```
This script verifies a single model at a time. It generates a small inference test app that uses the generated header file, which can then be built and run as a final step.
Because of this, the name of the model is appended to the specified output path, so that the generated test is named after the model.
In other words, the final output directory will be <output_directory>/<base_name_of_model>.
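The naming convention can be illustrated with a small sketch (illustrative only, not the script's actual code):

```python
import os

def generated_test_dir(output_dir, tflite_path):
    # The test generator appends the model's base name to the given output path.
    base_name = os.path.splitext(os.path.basename(tflite_path))[0]
    return os.path.join(output_dir, base_name)

print(generated_test_dir("gen_dir", "/tmp/my_model.tflite"))  # gen_dir/my_model
```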
### Verifying the output of a given model
The key point is that different output paths need to be specified for the header-generation script and the test-generation script.
By default, the generated test runs the model without any generated input and without verifying the output. Output verification can be enabled by adding the flag --verify_output=1.
So there are three steps:
1) Generate the resolver header, specifying e.g. output path gen_dir/<base_name_of_model>
2) Generate the test, specifying e.g. output path gen_dir
3) Run the generated test
Example assuming gen_dir and /tmp/my_model.tflite exist:
```
# Step 1 generates the header to gen_dir/my_model
bazel run tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver:generate_micro_mutable_op_resolver_from_model -- \
  --common_tflite_path=/tmp --input_tflite_files=my_model.tflite --output_dir=$(realpath gen_dir/my_model)

# Step 2 generates the test to gen_dir/my_model
bazel run tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver:generate_micro_mutable_op_resolver_from_model_test -- \
  --input_tflite_file=/tmp/my_model.tflite --output_dir=$(realpath gen_dir) --verify_output=1

# Step 3 runs the generated my_model test
bazel run gen_dir/my_model:micro_mutable_op_resolver_test
```
Note that since the test script appends the name of the model to the output directory, we add it to the output directory for the generated header as well (gen_dir/my_model), so that the header and test files end up in the same directory.
Depending on the size of the input model, the arena size may need to be increased. The arena size can be set with --arena_size=<size>.
Example assuming gen_dir and /tmp/big_model.tflite exist:
```
# Generate the header to gen_dir/big_model
bazel run tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver:generate_micro_mutable_op_resolver_from_model -- \
  --common_tflite_path=/tmp --input_tflite_files=big_model.tflite --output_dir=$(realpath gen_dir/big_model)

# Generate the test to gen_dir/big_model with a larger arena
bazel run tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver:generate_micro_mutable_op_resolver_from_model_test -- \
  --input_tflite_file=/tmp/big_model.tflite --output_dir=$(realpath gen_dir) --verify_output=1 --arena_size=<size>

# Run the generated big_model test
bazel run gen_dir/big_model:micro_mutable_op_resolver_test
```