Unverified commit a6e6a222, authored by Måns Nilsson, committed by GitHub

Remove reference to all ops resolver in micro mutable gen script (#1997)

BUG=Reference to all ops resolver should be removed
Parent 6f24535e
# Generate Micro Mutable Op Resolver from a model
The MicroMutableOpResolver includes only the operators explicitly specified in source code.
This generally requires manually finding out which operators are used in the model, typically with a visualization tool, which may be impractical in some cases.
This script automatically generates a MicroMutableOpResolver containing only the operators used by a given model or set of models.
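For comparison, a hand-written resolver looks roughly like the sketch below; the function name, operator set, and template capacity are illustrative only and must match the model in question:

```
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// Illustrative only: register exactly the operators the model uses, and size
// the resolver (<3> here) to the number of registered operators.
tflite::MicroMutableOpResolver<3> BuildManualResolver() {
  tflite::MicroMutableOpResolver<3> op_resolver;
  op_resolver.AddConv2D();
  op_resolver.AddFullyConnected();
  op_resolver.AddSoftmax();
  return op_resolver;
}
```

The header generated by this script provides an equivalent function, with the operator list derived from the model instead of maintained by hand.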
## How to run
The command for running the generator is shown in Step 1 of the full example further below.

The generated header file can then be included in the application and used like this:

```
tflite::MicroMutableOpResolver<kNumberOperators> op_resolver = get_resolver();
```
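As a minimal sketch of how the generated resolver might be wired into a TFLM application (the model data symbol, arena size, and function name below are placeholders, and the header generated by this script, which defines kNumberOperators and get_resolver(), must be included as well):

```
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
// Plus the header generated by this script for the model in question.

// Placeholder: the model flatbuffer, e.g. converted to a C array with xxd.
extern const unsigned char my_model_tflite[];

constexpr size_t kArenaSize = 16 * 1024;  // Adjust to the model's needs.
uint8_t tensor_arena[kArenaSize];

void RunOnce() {
  const tflite::Model* model = tflite::GetModel(my_model_tflite);
  // get_resolver() registers only the operators the model actually uses.
  tflite::MicroMutableOpResolver<kNumberOperators> op_resolver = get_resolver();
  tflite::MicroInterpreter interpreter(model, op_resolver, tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return;
  interpreter.Invoke();
}
```

The verification script described in the next section generates a small test app along these lines automatically.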
## Verifying the content of the generated header file

Another script can be used to verify the generated header file, i.e. to check that its list of operators actually corresponds to the given model and that the syntax of the header is correct:

```
bazel run tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver:generate_micro_mutable_op_resolver_from_model_test -- \
--input_tflite_file=<path to tflite file> --output_dir=<output directory>
```
This script verifies a single model at a time. It generates a small inference test app that uses the generated header file, which can then be built and run as a final step.
Because of this, the specified output path is appended with the base name of the model, so that the generated test is named after the model.
In other words, the final output directory will be <output directory>/<base name of model>.

Note that the generated test must be located somewhere inside the tflite-micro tree and also needs to be run from there.
Since only the test script appends the model name to its output path, different output paths need to be specified for the header-generation script and for the test-generation script.

So there are three steps:

1) Generate the micro mutable op resolver header, specifying e.g. the output path gen_dir/<base name of model>
2) Generate the test app, specifying e.g. the output path gen_dir
3) Run the generated test

Example assuming /tmp/my_model.tflite exists:
```
# Step 1 generates header to gen_dir/my_model
bazel run tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver:generate_micro_mutable_op_resolver_from_model -- \
--common_tflite_path=/tmp/ \
--input_tflite_files=my_model.tflite --output_dir=$(realpath gen_dir/my_model)
# Step 2 generates the test app, using the header from step 1; its output also goes to gen_dir/my_model since the model name is appended
bazel run tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver:generate_micro_mutable_op_resolver_from_model_test -- \
--input_tflite_file=/tmp/my_model.tflite --output_dir=$(realpath gen_dir) --verify_output=1
# Step 3 runs the generated my_model test
bazel run gen_dir/my_model:micro_mutable_op_resolver_test
```

Note that since the test script appends the name of the model to its output directory, the model name is also added to the output directory given to the header script (gen_dir/my_model), so that the header and test files end up in the same directory.

A second example, for a larger model that needs an increased arena size (see Note3 below), assuming /tmp/big_model.tflite exists:

```
bazel run tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver:generate_micro_mutable_op_resolver_from_model -- \
--common_tflite_path=/tmp/ \
--input_tflite_files=big_model.tflite --output_dir=$(realpath gen_dir/big_model)

bazel run tensorflow/lite/micro/tools/gen_micro_mutable_op_resolver:generate_micro_mutable_op_resolver_from_model_test -- \
--input_tflite_file=/tmp/big_model.tflite --output_dir=$(realpath gen_dir) --verify_output=1 --arena_size=1000000

bazel run gen_dir/big_model:micro_mutable_op_resolver_test
```
Note1: Bazel expects absolute paths.

Note2: By default the inference test runs without any generated input and without verifying the output. Output verification can be enabled with --verify_output=1, as is done in the examples above.

Note3: Depending on the size of the model, the arena size may need to be increased. The arena size can be set with --arena_size=<size>, as done in the big_model example above.
/* Copyright 2023 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
...
limitations under the License.
#include <string.h>
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_log.h"
#include "tensorflow/lite/micro/micro_profiler.h"
#include "tensorflow/lite/micro/recording_micro_allocator.h"