Commit a9bfbc26 authored by Jared Duke, committed by TensorFlower Gardener

Add an Android APK wrapper for the benchmark_model utility

This APK offers a more faithful representation of on-device performance
by executing the benchmark in the context of a foreground Activity.
See the README for more details.

PiperOrigin-RevId: 224602509
Parent 602b6b86
@@ -112,7 +112,8 @@ def tflite_jni_binary(
linkshared = 1,
linkstatic = 1,
testonly = 0,
deps = []):
deps = [],
srcs = []):
"""Builds a jni binary for TFLite."""
linkopts = linkopts + [
"-Wl,--version-script", # Export only jni functions & classes.
@@ -124,6 +125,7 @@ def tflite_jni_binary(
linkshared = linkshared,
linkstatic = linkstatic,
deps = deps + [linkscript],
srcs = srcs,
linkopts = linkopts,
testonly = testonly,
)
@@ -11,6 +11,11 @@ The instructions below are for running the binary on Desktop and Android,
for iOS please use the
[iOS benchmark app](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark/ios).

An experimental Android APK wrapper for the benchmark model utility offers more
faithful execution behavior on Android (via a foreground Activity). It is
located
[here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark/android).

## Parameters
The binary takes the following required parameters:
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright 2018 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="org.tensorflow.lite.benchmark">
<!-- Necessary for loading custom models from disk. -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<!-- Target SDK 21 (<23) to avoid the need for requesting storage
permissions. This APK will almost always be used from the command-line
anyway, and be explicitly installed by the developer. -->
<uses-sdk
android:minSdkVersion="21"
android:targetSdkVersion="21" />
<application>
<!-- This Activity runs the TensorFlow Lite benchmark at creation, using
a provided set of arguments, then immediately terminates. -->
<activity android:name="org.tensorflow.lite.benchmark.BenchmarkModelActivity"
android:screenOrientation="portrait"
android:label="TFLite Benchmark"
android:theme="@android:style/Theme.NoDisplay"
android:exported="true"
android:noHistory="true" />
</application>
</manifest>
# Description:
# BenchmarkModel Android harness for TensorFlow Lite benchmarks.
package(default_visibility = ["//visibility:private"])
licenses(["notice"]) # Apache 2.0
exports_files(["LICENSE"])
load("//tensorflow/lite:build_def.bzl", "tflite_jni_binary")
load("@build_bazel_rules_android//android:rules.bzl", "android_binary")
# See README.md for details about building and executing this benchmark.
android_binary(
name = "benchmark_model",
srcs = glob([
"src/**/*.java",
]),
custom_package = "org.tensorflow.lite.benchmark",
manifest = "AndroidManifest.xml",
# In some platforms we don't have an Android SDK/NDK and this target
# can't be built. We need to prevent the build system from trying to
# use the target in that case.
tags = ["manual"],
deps = [":tensorflowlite_benchmark_native"],
)
tflite_jni_binary(
name = "libtensorflowlite_benchmark.so",
srcs = glob([
"jni/**/*.cc",
"jni/**/*.h",
]),
deps = [
"//tensorflow/lite/java/jni",
"//tensorflow/lite/tools/benchmark:benchmark_tflite_model_lib",
"//tensorflow/lite/tools/benchmark:logging",
],
)
cc_library(
name = "tensorflowlite_benchmark_native",
srcs = ["libtensorflowlite_benchmark.so"],
visibility = ["//visibility:private"],
)
# TFLite Android Model Benchmark Tool

## Description

This Android benchmark app is a simple wrapper around the TensorFlow Lite
[command-line benchmark utility](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark).

Pushing and executing binaries directly on Android is a valid approach to
benchmarking, but it can result in subtle (but observable) differences in
performance relative to execution within an actual Android app. In particular,
Android's scheduler tailors behavior based on thread and process priorities,
which differ between a foreground Activity/Application and a regular background
binary executed via `adb shell ...`. This tailored behavior is most evident when
enabling multi-threaded CPU execution with TensorFlow Lite.

To that end, this app offers what is perhaps a more faithful view of the
runtime performance developers can expect when deploying TensorFlow Lite with
their application.
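
One way to observe the scheduling difference described above is to log the
priority that the benchmarking thread actually receives in each context. The
snippet below is an illustrative sketch only; the `PriorityProbe` helper is
hypothetical and not part of this tool:

```
// Hypothetical helper (not part of this change): logs the "nice" value of the
// calling thread so the scheduling behavior of a foreground Activity can be
// compared against a plain `adb shell` process in logcat.
import android.os.Process;
import android.util.Log;

final class PriorityProbe {
  private PriorityProbe() {}

  static void logCurrentThreadPriority(String label) {
    // Lower values mean higher priority; for reference,
    // Process.THREAD_PRIORITY_FOREGROUND is -2 and THREAD_PRIORITY_DEFAULT is 0.
    int priority = Process.getThreadPriority(Process.myTid());
    Log.i("tflite", label + " thread priority: " + priority);
  }
}
```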

## To build/install/run

(0) Refer to
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android
to edit the `WORKSPACE` to configure the Android NDK/SDK.

(1) Build for your specific platform, e.g.:

```
bazel build -c opt \
  --config=android_arm64 \
  --cxxopt='--std=c++11' \
  tensorflow/lite/tools/benchmark/android:benchmark_model
```

(2) Connect your phone. Install the benchmark APK to your phone with adb:

```
adb install -r -d bazel-bin/tensorflow/lite/tools/benchmark/android/benchmark_model.apk
```

(3) Push the compute graph that you need to test.

```
adb push mobilenet_quant_v1_224.tflite /data/local/tmp
```

(4) Run the benchmark. Additional command-line flags are documented
[here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark/README.md)
and can be appended to the `args` string alongside the required `--graph` flag
(note that all args must be nested in the single quoted string that follows the
`args` key).

```
adb shell am start -S \
  -n org.tensorflow.lite.benchmark/org.tensorflow.lite.benchmark.BenchmarkModelActivity \
  --es args '"--graph=/data/local/tmp/mobilenet_quant_v1_224.tflite --num_threads=4"'
```
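
The benchmark Activity can also be started programmatically from another app on
the same device. The helper below is a minimal sketch, assuming the benchmark
APK above is already installed; the `BenchmarkLauncher` class is hypothetical
and not part of this tool:

```
import android.app.Activity;
import android.content.Intent;

/** Hypothetical helper for launching the benchmark Activity from another app. */
final class BenchmarkLauncher {
  static void launch(Activity caller, String args) {
    Intent intent = new Intent();
    // Fully qualified component of the benchmark APK built above.
    intent.setClassName(
        "org.tensorflow.lite.benchmark",
        "org.tensorflow.lite.benchmark.BenchmarkModelActivity");
    // Forwarded verbatim to the native benchmark, exactly like `--es args`.
    intent.putExtra("args", args);
    caller.startActivity(intent);
  }
}
```

For example, call `BenchmarkLauncher.launch(this, "--graph=/data/local/tmp/mobilenet_quant_v1_224.tflite --num_threads=4")` from within an Activity.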

(5) The results will be available in Android's logcat, e.g.:
```
adb logcat | grep "Average inference"
... tflite : Average inference timings in us: Warmup: 91471, Init: 4108, Inference: 80660.1
```
/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include <jni.h>
#include <iterator>
#include <sstream>
#include <string>
#include <vector>
#include "tensorflow/lite/tools/benchmark/benchmark_tflite_model.h"
#include "tensorflow/lite/tools/benchmark/logging.h"
#ifdef __ANDROID__
#include <android/log.h>
#endif
namespace tflite {
namespace benchmark {
namespace {
class AndroidBenchmarkLoggingListener : public BenchmarkListener {
void OnBenchmarkEnd(const BenchmarkResults& results) override {
auto inference_us = results.inference_time_us();
auto init_us = results.startup_latency_us();
auto warmup_us = results.warmup_time_us();
std::stringstream results_output;
results_output << "Average inference timings in us: "
<< "Warmup: " << warmup_us.avg() << ", "
<< "Init: " << init_us << ", "
<< "Inference: " << inference_us.avg();
#ifdef __ANDROID__
__android_log_print(ANDROID_LOG_ERROR, "tflite", "%s",
results_output.str().c_str());
#else
fprintf(stderr, "%s", results_output.str().c_str());
#endif
}
};
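// Runs the benchmark with the given argc/argv, attaching the logging listener
// above so that the timing summary is emitted once the run completes.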
void Run(int argc, char** argv) {
BenchmarkTfLiteModel benchmark;
AndroidBenchmarkLoggingListener listener;
benchmark.AddListener(&listener);
benchmark.Run(argc, argv);
}
} // namespace
} // namespace benchmark
} // namespace tflite
#ifdef __cplusplus
extern "C" {
#endif
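// JNI entry point backing BenchmarkModel.nativeRun(): tokenizes the single
// args string on whitespace and forwards the resulting argv to the benchmark.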
JNIEXPORT void JNICALL
Java_org_tensorflow_lite_benchmark_BenchmarkModel_nativeRun(JNIEnv* env,
jclass clazz,
jstring args_obj) {
const char* args_chars = env->GetStringUTFChars(args_obj, nullptr);
// Split the args string into individual arg tokens.
std::istringstream iss(args_chars);
std::vector<std::string> args_split{std::istream_iterator<std::string>(iss),
{}};
// Construct a fake argv command-line object for the benchmark.
std::vector<char*> argv;
std::string arg0 = "(BenchmarkModelAndroid)";
argv.push_back(const_cast<char*>(arg0.data()));
for (auto& arg : args_split) {
argv.push_back(const_cast<char*>(arg.data()));
}
tflite::benchmark::Run(static_cast<int>(argv.size()), argv.data());
env->ReleaseStringUTFChars(args_obj, args_chars);
}
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
package org.tensorflow.lite.benchmark;
/** Helper class for running a native TensorFlow Lite benchmark. */
class BenchmarkModel {
static {
System.loadLibrary("tensorflowlite_benchmark");
}
// Executes a standard TensorFlow Lite benchmark according to the provided args.
//
// Note that {@code args} will be split by the native execution code.
public static void run(String args) {
nativeRun(args);
}
private static native void nativeRun(String args);
}
/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
package org.tensorflow.lite.benchmark;
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.util.Log;
/** Main {@code Activity} class for the benchmark app. */
public class BenchmarkModelActivity extends Activity {
private static final String TAG = "tflite_BenchmarkModelActivity";
private static final String ARGS_INTENT_KEY_0 = "args";
private static final String ARGS_INTENT_KEY_1 = "--args";
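// The benchmark arguments are supplied as a single string extra under either
// of the keys above (see the accompanying README for the adb invocation).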
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
Intent intent = getIntent();
Bundle bundle = intent.getExtras();
String args = bundle.getString(ARGS_INTENT_KEY_0, bundle.getString(ARGS_INTENT_KEY_1));
Log.i(TAG, "Running TensorFlow Lite benchmark with args: " + args);
BenchmarkModel.run(args);
finish();
}
}