Commit b2a62764 authored by Lu Wang, committed by TensorFlower Gardener

Reference Task library in the model guides

PiperOrigin-RevId: 328242690
Change-Id: Icaaba6c49ed201090d8e2cfa42c7f666143e8927
Parent 8c81dd1e
@@ -408,8 +408,22 @@ Flatbuffers library.
### Read the metadata in Java
To use the Metadata Extractor library in your Android app, we recommend using the
[TensorFlow Lite Metadata AAR hosted at JCenter](https://bintray.com/google/tensorflow/tensorflow-lite-metadata).
It contains the `MetadataExtractor` class, as well as the FlatBuffers Java
bindings for the
[metadata schema](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/metadata_schema.fbs)
and the
[model schema](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs).
You can specify this in your `build.gradle` dependencies as follows:
```build
dependencies {
implementation 'org.tensorflow:tensorflow-lite-metadata:0.0.0-nightly'
}
```
You can initialize a `MetadataExtractor` object with a `ByteBuffer` that points
to the model:
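The exact snippet is elided from this hunk; as a rough sketch (assuming the `org.tensorflow.lite.support.metadata.MetadataExtractor` API from the AAR above, with a placeholder model file name and wrapper class), initialization and a simple lookup might look like:

```java
import java.io.FileInputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import org.tensorflow.lite.support.metadata.MetadataExtractor;

public class ReadMetadata {
  public static void main(String[] args) throws Exception {
    // Placeholder path: any TFLite model that was packed with metadata.
    try (FileInputStream input = new FileInputStream("model_with_metadata.tflite")) {
      ByteBuffer modelBuffer =
          input.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, input.getChannel().size());
      MetadataExtractor extractor = new MetadataExtractor(modelBuffer);
      if (extractor.hasMetadata()) {
        // Model-level and per-tensor metadata come from the FlatBuffers Java bindings.
        System.out.println("Model name: " + extractor.getModelMetadata().name());
        System.out.println("First input tensor: " + extractor.getInputTensorMetadata(0).name());
      }
    }
  }
}
```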
......
@@ -16,7 +16,7 @@ to continuously classify whatever it sees from the device's rear-facing camera.
The application can run either on device or emulator.
Inference is performed using the TensorFlow Lite Java API and the
[TensorFlow Lite Android Support Library](../inference_with_metadata/lite_support.md).
The demo app classifies frames in real-time, displaying the top most probable
classifications. It allows the user to choose between a floating point or
[quantized](https://www.tensorflow.org/lite/performance/post_training_quantization)
@@ -41,6 +41,36 @@ as a starting point.
The following sections contain some useful information for working with
TensorFlow Lite on Android.
### Use the TensorFlow Lite Task Library
TensorFlow Lite Task Library contains a set of powerful and easy-to-use
task-specific libraries for app developers to create ML experiences with TFLite.
It provides optimized out-of-box model interfaces for popular machine learning
tasks, such as image classification, question and answer, etc. The model
interfaces are specifically designed for each task to achieve the best
performance and usability. Task Library works cross-platform and is supported on
Java, C++, and Swift (coming soon).
To use the Task Library in your Android app, we recommend using the AARs hosted
at JCenter for the
[Task Vision library](https://bintray.com/google/tensorflow/tensorflow-lite-task-vision)
and the
[Task Text library](https://bintray.com/google/tensorflow/tensorflow-lite-task-text).
You can specify this in your `build.gradle` dependencies as follows:
```build
dependencies {
implementation 'org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-task-text:0.0.0-nightly'
}
```
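As a rough illustration (not taken from the guide itself), a minimal sketch of image classification with the Task Vision library might look like the following; the wrapper class and model asset name are placeholders, and it assumes the Java `ImageClassifier` API:

```java
import android.content.Context;
import android.graphics.Bitmap;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.classifier.Classifications;
import org.tensorflow.lite.task.vision.classifier.ImageClassifier;

public final class TaskLibraryExample {
  // Classify a bitmap with a model bundled in the app's assets.
  // "mobilenet_v1_1.0_224_quant.tflite" is a placeholder asset name.
  public static List<Classifications> classify(Context context, Bitmap bitmap) throws Exception {
    ImageClassifier classifier =
        ImageClassifier.createFromFile(context, "mobilenet_v1_1.0_224_quant.tflite");
    return classifier.classify(TensorImage.fromBitmap(bitmap));
  }
}
```

The Task Text APIs (`NLClassifier`, `BertNLClassifier`, `BertQuestionAnswerer`) follow the same create-then-invoke pattern.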
See the introduction in the
[TensorFlow Lite Task Library overview](../inference_with_metadata/task_library/overview.md)
for more details.
### Use the TensorFlow Lite Android Support Library
The TensorFlow Lite Android Support Library makes it easier to integrate models
@@ -52,8 +82,19 @@ It supports common data formats for inputs and outputs, including images and
arrays. It also provides pre- and post-processing units that perform tasks such
as image resizing and cropping.
To use the Support Library in your Android app, we recommend using the
[TensorFlow Lite Support Library AAR hosted at JCenter](https://bintray.com/google/tensorflow/tensorflow-lite-support).
You can specify this in your `build.gradle` dependencies as follows:
```build
dependencies {
implementation 'org.tensorflow:tensorflow-lite-support:0.0.0-nightly'
}
```
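For illustration only, a minimal preprocessing sketch with the Support Library might look like the following; the resize target and normalization parameters are placeholder values for a typical 224x224 float model, and the wrapper class is hypothetical:

```java
import android.graphics.Bitmap;
import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.common.ops.NormalizeOp;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;

public final class SupportLibraryExample {
  // Resize and normalize a bitmap into a TensorImage that can be fed to an Interpreter.
  public static TensorImage preprocess(Bitmap bitmap) {
    ImageProcessor processor =
        new ImageProcessor.Builder()
            .add(new ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
            .add(new NormalizeOp(127.5f, 127.5f)) // placeholder mean/stddev for a float model
            .build();
    TensorImage image = new TensorImage(DataType.FLOAT32);
    image.load(bitmap);
    return processor.process(image);
  }
}
```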
To get started, follow the instructions in the
[TensorFlow Lite Android Support Library](../inference_with_metadata/lite_support.md).
### Use the TensorFlow Lite AAR from JCenter
......
@@ -4,8 +4,8 @@ The following is an incomplete list of pre-trained models optimized to work with
TensorFlow Lite.
To get started choosing a model, visit the <a href="../models">Models</a> page with
end-to-end examples, or pick a
[TensorFlow Lite model from TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite).
Note: The best model for a given application depends on your requirements. For
example, some applications might benefit from higher accuracy, while others
@@ -16,6 +16,9 @@ models to find the optimal balance between size, performance, and accuracy.
For more information about image classification, see
<a href="../models/image_classification/overview.md">Image classification</a>.
Explore the TensorFlow Lite Task Library for instructions about
[how to integrate image classification models](../inference_with_metadata/task_library/image_classifier)
in just a few lines of code.
### Quantized models
@@ -24,7 +27,8 @@ classification models offer the smallest model size and fastest performance, at
the expense of accuracy. The performance values are measured on Pixel 3 on
Android 10.
You can find many
[quantized models](https://tfhub.dev/s?deployment-format=lite&module-type=image-classification&q=quantized)
from TensorFlow Hub and get more model information there.
Model name | Paper and model | Model size | Top-1 accuracy | Top-5 accuracy | CPU, 4 threads | NNAPI
@@ -54,8 +58,8 @@ Inception_V4_quant | [paper](https://arxiv.org/abs/1602.07261), [tflite
Note: The model files include both TF Lite FlatBuffer and TensorFlow frozen
Graph.
Note: Performance numbers were benchmarked on Pixel-3 (Android 10). Accuracy
numbers were computed using the
[TFLite image classification evaluation tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification).
### Floating point models
@@ -65,7 +69,8 @@ performance. <a href="../performance/gpu">GPU acceleration</a> requires the use
of floating point models. The performance values are measured on Pixel 3 on
Android 10.
You can find many
[image classification models](https://tfhub.dev/s?deployment-format=lite&module-type=image-classification)
from TensorFlow Hub and get more model information there.
Model name | Paper and model | Model size | Top-1 accuracy | Top-5 accuracy | CPU, 4 threads | GPU | NNAPI
@@ -102,8 +107,9 @@ The following image classification models were created using
<a href="https://cloud.google.com/automl/">Cloud AutoML</a>. The performance
values are measured on Pixel 3 on Android 10.
You can find these models in
[TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite&q=MnasNet) and get
more model information there.
Model Name | Paper and model | Model size | Top-1 accuracy | Top-5 accuracy | CPU, 4 threads | GPU | NNAPI
---------------- | :-------------: | ---------: | -------------: | -------------: | -------------: | ------: | ----:
@@ -116,16 +122,20 @@ MnasNet_1.0_192 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https:
MnasNet_1.0_224 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https://storage.cloud.google.com/download.tensorflow.org/models/tflite/mnasnet_1.0_224_09_07_2018.tgz) | 17 Mb | 74.08% | 91.75% | 19.4 ms | 8.7 ms | 19 ms
MnasNet_1.3_224 | [paper](https://arxiv.org/abs/1807.11626), [tflite&pb](https://storage.cloud.google.com/download.tensorflow.org/models/tflite/mnasnet_1.3_224_09_07_2018.tgz) | 24 Mb | 75.24% | 92.55% | 27.9 ms | 10.6 ms | 22.0 ms
Note: Performance numbers were benchmarked on Pixel-3 (Android 10). Accuracy
numbers were computed using the
[TFLite image classification evaluation tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification).
## Object detection
For more information about object detection, see
<a href="../models/object_detection/overview.md">Object detection</a>. Explore
the TensorFlow Lite Task Library for instructions about
[how to integrate object detection models](../inference_with_metadata/task_library/object_detector)
in just a few lines of code.
Please find
[object detection models](https://tfhub.dev/s?deployment-format=lite&module-type=image-object-detection)
from TensorFlow Hub.
## Pose estimation
@@ -133,21 +143,29 @@ from TensorFlow Hub.
For more information about pose estimation, see
<a href="../models/pose_estimation/overview.md">Pose estimation</a>.
Please find
[pose estimation models](https://tfhub.dev/s?deployment-format=lite&module-type=image-pose-detection)
from TensorFlow Hub.
## Image segmentation
For more information about image segmentation, see
<a href="../models/segmentation/overview.md">Segmentation</a>. Explore the
TensorFlow Lite Task Library for instructions about
[how to integrate image segmentation models](../inference_with_metadata/task_library/image_segmenter)
in just a few lines of code.
Please find
[image segmentation models](https://tfhub.dev/s?deployment-format=lite&module-type=image-segmentation)
from TensorFlow Hub.
## Question and Answer
For more information about question and answer with MobileBERT, see
<a href="../models/bert_qa/overview.md">Question And Answer</a>. Explore the
TensorFlow Lite Task Library for instructions about
[how to integrate question and answer models](../inference_with_metadata/task_library/bert_question_answerer)
in just a few lines of code.
Please find [Mobile BERT model](https://tfhub.dev/tensorflow/mobilebert/1) from
TensorFlow Hub.
......
@@ -40,6 +40,10 @@ dependencies {
}
```
Explore the
[TensorFlow Lite Support Library AAR hosted at JCenter](https://bintray.com/google/tensorflow/tensorflow-lite-support)
for different versions of the Support Library.
### Basic image manipulation and conversion
The TensorFlow Lite Support Library has a suite of basic image manipulation
......
# Integrate BERT natural language classifier
The Task Library `BertNLClassifier` API is very similar to the `NLClassifier`
that classifies input text into different categories, except that this API is
......
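A minimal usage sketch, assuming the Java `BertNLClassifier` API from the Task Text library; the model asset name and wrapper class are placeholders:

```java
import android.content.Context;
import java.util.List;
import org.tensorflow.lite.support.label.Category;
import org.tensorflow.lite.task.text.nlclassifier.BertNLClassifier;

public final class BertNLClassifierExample {
  // Classify a piece of text with a BERT-based classification model bundled in assets.
  public static List<Category> classify(Context context, String text) throws Exception {
    BertNLClassifier classifier =
        BertNLClassifier.createFromFile(context, "bert_nl_classifier.tflite");
    return classifier.classify(text);
  }
}
```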
# Integrate BERT question answerer
The Task Library `BertQuestionAnswerer` API loads a Bert model and answers
questions based on the content of a given passage. For more information, see the
@@ -21,10 +21,7 @@ The following models are compatible with the `BertNLClassifier` API.
[TensorFlow Lite Model Maker for Question Answer](https://www.tensorflow.org/lite/tutorials/model_maker_question_answer).
* The
[pretrained BERT models on TensorFlow Hub](https://tfhub.dev/tensorflow/collections/lite/task-library/bert-question-answerer/1).
* Custom models that meet the
[model compatibility requirements](#model-compatibility-requirements).
......
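A minimal usage sketch for `BertQuestionAnswerer`, assuming the Java Task Text API; the model asset name and wrapper class are placeholders:

```java
import android.content.Context;
import java.util.List;
import org.tensorflow.lite.task.text.qa.BertQuestionAnswerer;
import org.tensorflow.lite.task.text.qa.QaAnswer;

public final class BertQaExample {
  // Answer a question against a passage with a MobileBERT-style QA model bundled in assets.
  public static List<QaAnswer> answer(Context context, String passage, String question)
      throws Exception {
    BertQuestionAnswerer answerer =
        BertQuestionAnswerer.createFromFile(context, "mobilebert_qa.tflite");
    return answerer.answer(passage, question);
  }
}
```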
@@ -40,7 +40,7 @@ API.
[pretrained image classification models from TensorFlow Lite Hosted Models](https://www.tensorflow.org/lite/guide/hosted_models#image_classification).
* The
[pretrained image classification models on TensorFlow Hub](https://tfhub.dev/tensorflow/collections/lite/task-library/image-classifier/1).
* Models created by
[AutoML Vision Edge Image Classification](https://cloud.google.com/vision/automl/docs/edge-quickstart).
......
@@ -29,7 +29,7 @@ The following models are guaranteed to be compatible with the `ImageSegmenter`
API.
* The
[pretrained image segmentation models on TensorFlow Hub](https://tfhub.dev/tensorflow/collections/lite/task-library/image-segmenter/1).
* Custom models that meet the
[model compatibility requirements](#model-compatibility-requirements).
......
# Integrate Natural language classifier
The Task Library's `NLClassifier` API classifies input text into different
categories, and is a versatile and configurable API that can handle most text
......
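A minimal usage sketch, assuming the Java `NLClassifier` API from the Task Text library; the model asset name, default options, and wrapper class are placeholders:

```java
import android.content.Context;
import java.util.List;
import org.tensorflow.lite.support.label.Category;
import org.tensorflow.lite.task.text.nlclassifier.NLClassifier;

public final class NLClassifierExample {
  // Run text classification (e.g. sentiment) on a string with a model bundled in assets.
  public static List<Category> classify(Context context, String text) throws Exception {
    NLClassifier classifier =
        NLClassifier.createFromFileAndOptions(
            context,
            "text_classification.tflite",
            NLClassifier.NLClassifierOptions.builder().build());
    return classifier.classify(text);
  }
}
```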
@@ -32,7 +32,7 @@ The following models are guaranteed to be compatible with the `ObjectDetector`
API.
* The
[pretrained object detection models on TensorFlow Hub](https://tfhub.dev/tensorflow/collections/lite/task-library/object-detector/1).
* Models created by
[AutoML Vision Edge Object Detection](https://cloud.google.com/vision/automl/object-detection/docs).
......
@@ -6,7 +6,7 @@ It provides optimized out-of-box model interfaces for popular machine learning
tasks, such as image classification, question and answer, etc. The model
interfaces are specifically designed for each task to achieve the best
performance and usability. Task Library works cross-platform and is supported on
Java, C++, and Swift (coming soon).
## What to expect from the Task Library
......