...
 
Commits (11)
https://gitcode.net/xusiwei1236/tflite-micro/-/commit/d2c4ef56f7a67db2da97f42de1f67e404f13eaad
Automated sync from github.com/tensorflow/tensorflow (#2080)
2023-06-26T07:52:25+00:00 · TFLM-bot <tflm-github-bot@google.com>
BUG=automated sync from upstream
NO_CHECK_TFLITE_FILES=automated sync from upstream

https://gitcode.net/xusiwei1236/tflite-micro/-/commit/360f05d2943252e02b02642612d42c2885ca5158
Signal Library OPs python BUILD update (#2088)
2023-06-27T20:25:59+00:00 · suleshahid <110432064+suleshahid@users.noreply.github.com>
This is needed to properly build all the ops together. Since we are calling into a single utils function to load the ops, if it checks for any other ops, it would break before this. Now we make it build all ops every time utils.py is built.
BUG=[288938993](http://b/288938993)

https://gitcode.net/xusiwei1236/tflite-micro/-/commit/5d2b26d85ea176d0b735ae39f5dacf762e55e97e
Automated sync from github.com/tensorflow/tensorflow (#2087)
2023-06-27T21:48:58+00:00 · TFLM-bot <tflm-github-bot@google.com>
BUG=automated sync from upstream
NO_CHECK_TFLITE_FILES=automated sync from upstream

https://gitcode.net/xusiwei1236/tflite-micro/-/commit/d8b44a5f5f4b3bc0e6138f2cbf57d2055f2b4ea1
IF kernel Eval not using TfLiteEvalTensor (#2086)
2023-06-28T00:01:32+00:00 · David Davis <ddavis-2015@users.noreply.github.com>
@tensorflow/micro The IF kernel has a non-compliant implementation: the Eval phase is not using TfLiteEvalTensor, and the `cond_value` variable should be declared `const`.
bug=fixes #2043

https://gitcode.net/xusiwei1236/tflite-micro/-/commit/8ace118b04cc0e50a88a3ed45ef53516a95ec0d1
Update Signal RFFT to have namespace and reformat Makefile targets (#2089)
2023-06-28T04:12:49+00:00 · suleshahid <110432064+suleshahid@users.noreply.github.com>
This is needed to avoid name clashes and properly build all targets. The make commands that run the signal tests now use `signal_` before the op name, so `test_kernel_fft_test` is now `test_kernel_signal_fft_test`, etc.
BUG=[288938993](http://b/288938993)

https://gitcode.net/xusiwei1236/tflite-micro/-/commit/db846aa96a9bb0c6de39865a8cf5bd308145de75
Remove unnecessary defines for Hexagon reference kernel build. (#2091)
2023-06-28T16:15:29+00:00 · Advait Jain <advaitjain@users.noreply.github.com>
These were preventing reference kernels from being built with `TARGET=hexagon`. See http://b/281821666#comment5 for additional context. Manually tested that the following command passes with this change:
```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=hexagon -j8 test
```
And without this change we get the following error:
```
tensorflow/lite/micro/kernels/fully_connected.cc:74:49: error: no member named 'filter_buffer_index' in 'tflite::OpDataFullyConnected'
      &data->filter_buffer_index);
       ~~~~ ^
tensorflow/lite/micro/kernels/fully_connected.cc:129:55: error: no member named 'filter_buffer_index' in 'tflite::OpDataFullyConnected'
      context->GetScratchBuffer(context, data.filter_buffer_index));
                                         ~~~~ ^
```
Note that we do not have any CI coverage for reference kernels with Hexagon, since the expectation is that everyone should use the optimized kernels. However, we still provide a best-effort attempt to explicitly use the reference kernels, if needed.
BUG=http://b/281821666

https://gitcode.net/xusiwei1236/tflite-micro/-/commit/1ec11efcbb362b4e2c7d8ff7bd71c28dff98b610
Update docker containers for Xtensa workflows. (#2095)
2023-06-29T22:03:45-07:00 · Advait Jain <advaitjain@users.noreply.github.com>
As part of reducing the size of the docker containers for the Xtensa tests, we have split them up based on different versions of the toolchain. This PR changes the github workflows to use the smaller and more targeted docker containers. A couple of things to note:
* This PR is not pinning to a specific version of the docker container. This will make it easier to push updates if needed.
* The names of the containers might change again. We want to get something going quickly so that CI is green again and PRs can be merged.
* This PR will be merged bypassing the branch protection rules since the Xtensa workflows run as pull_request_target. We will only be able to test the changes in this PR after the merge.
BUG=http://b/289098887

https://gitcode.net/xusiwei1236/tflite-micro/-/commit/4510f61f8abe7c144951cc7a08c954697d32c285
Add missing tag to docker image. (#2097)
2023-06-29T22:10:32-07:00 · Advait Jain <advaitjain@users.noreply.github.com>
I got this wrong in #2095. Turns out that I need to specify the tag.
BUG=http://b/289098887

https://gitcode.net/xusiwei1236/tflite-micro/-/commit/7a3ef35d1459986127e0ff13702e487d66aa6e70
Remove leftover riscv32_mcu files (#2090)
2023-06-30T08:01:41+00:00 · RJ Ascani <rjascani@google.com>
There were a few leftover files after PR #2061 for the riscv32_mcu target. Since that target is no longer buildable, these files are not needed. This PR removes those remaining files.
BUG=http://b/286622884

https://gitcode.net/xusiwei1236/tflite-micro/-/commit/7125afac4d71ff7fe90abb5b198d882ab635a9a0
Update the train micro speech model code for Tensorflow 2.x compatibility (#2098)
2023-06-30T08:32:45+00:00 · pjpratik <118897289+pjpratik@users.noreply.github.com>
The latest Colab does not support running the 1.x version. The code has been updated to be able to run with the 2.x version in Colab. Thanks.
BUG=cleanup

https://gitcode.net/xusiwei1236/tflite-micro/-/commit/13cd6d82406728fe38880de20933e765ebe0b59d
Automated sync from github.com/tensorflow/tensorflow (#2093)
2023-06-30T17:25:43+00:00 · TFLM-bot <tflm-github-bot@google.com>
BUG=automated sync from upstream
NO_CHECK_TFLITE_FILES=automated sync from upstream
@@ -31,7 +31,7 @@ jobs:
- run: |
rm -rf .git
echo ${{ secrets.tflm-bot-token }} | docker login ghcr.io -u tflm-bot --password-stdin
docker run --env XTENSA_TOOLS_VERSION=RI-2020.4-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa:0.6 \
docker run --env XTENSA_TOOLS_VERSION=RI-2020.4-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa_xplorer_13:0.1 \
/bin/bash -c \
"cd /opt && tflite-micro/tensorflow/lite/micro/tools/ci_build/test_xtensa_fusion_f1.sh EXTERNAL tflite-micro/"
@@ -46,7 +46,7 @@ jobs:
- run: |
rm -rf .git
echo ${{ secrets.tflm-bot-token }} | docker login ghcr.io -u tflm-bot --password-stdin
docker run --env XTENSA_TOOLS_VERSION=RI-2020.4-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa:0.6 \
docker run --env XTENSA_TOOLS_VERSION=RI-2020.4-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa_xplorer_13:0.1 \
/bin/bash -c \
"cd /opt && tflite-micro/tensorflow/lite/micro/tools/ci_build/test_xtensa_vision_p6.sh RUN_TESTS tflite-micro/"
@@ -61,6 +61,6 @@ jobs:
- run: |
rm -rf .git
echo ${{ secrets.tflm-bot-token }} | docker login ghcr.io -u tflm-bot --password-stdin
docker run --env XTENSA_TOOLS_VERSION=RI-2019.2-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa:0.6 \
docker run --env XTENSA_TOOLS_VERSION=RI-2019.2-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa_xplorer_11:0.1 \
/bin/bash -c \
"cd /opt && tflite-micro/tensorflow/lite/micro/tools/ci_build/test_xtensa_hifimini.sh tflite-micro/"
\ No newline at end of file
"cd /opt && tflite-micro/tensorflow/lite/micro/tools/ci_build/test_xtensa_hifimini.sh tflite-micro/"
@@ -23,7 +23,7 @@ jobs:
vision_p6_presubmit:
runs-on: ubuntu-latest
name: Vision P6 Build (presubmit)
steps:
- uses: actions/checkout@v2
@@ -32,7 +32,7 @@ jobs:
- run: |
rm -rf .git
echo ${{ secrets.tflm-bot-token }} | docker login ghcr.io -u tflm-bot --password-stdin
docker run --env XTENSA_TOOLS_VERSION=RI-2020.4-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa:0.6 \
docker run --env XTENSA_TOOLS_VERSION=RI-2020.4-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa_xplorer_13:0.1 \
/bin/bash -c \
"cd /opt && tflite-micro/tensorflow/lite/micro/tools/ci_build/test_xtensa_vision_p6.sh RUN_NO_TESTS tflite-micro/"
@@ -47,7 +47,7 @@ jobs:
- run: |
rm -rf .git
echo ${{ secrets.tflm-bot-token }} | docker login ghcr.io -u tflm-bot --password-stdin
docker run --env XTENSA_TOOLS_VERSION=RI-2022.9-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa:0.6 \
docker run --env XTENSA_TOOLS_VERSION=RI-2022.9-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa_hifi5:0.1 \
/bin/bash -c \
"cd /opt && tflite-micro/tensorflow/lite/micro/tools/ci_build/test_xtensa_hifi5.sh tflite-micro/"
@@ -62,6 +62,6 @@ jobs:
- run: |
rm -rf .git
echo ${{ secrets.tflm-bot-token }} | docker login ghcr.io -u tflm-bot --password-stdin
docker run --env XTENSA_TOOLS_VERSION=RI-2020.4-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa:0.6 \
docker run --env XTENSA_TOOLS_VERSION=RI-2020.4-linux --rm -v `pwd`:/opt/tflite-micro ghcr.io/tflm-bot/xtensa_xplorer_13:0.1 \
/bin/bash -c \
"cd /opt && tflite-micro/tensorflow/lite/micro/tools/ci_build/test_xtensa_hifi3z.sh EXTERNAL tflite-micro/"
\ No newline at end of file
"cd /opt && tflite-micro/tensorflow/lite/micro/tools/ci_build/test_xtensa_hifi3z.sh EXTERNAL tflite-micro/"
@@ -27,6 +27,7 @@ py_library(
"ops/__init__.py",
],
srcs_version = "PY3",
visibility = ["//python/tflite_micro/signal/utils:__subpackages__"],
deps = [
":fft_ops",
":window_op",
......
@@ -13,6 +13,7 @@ py_library(
srcs = ["util.py"],
deps = [
"//python/tflite_micro:runtime",
"//python/tflite_micro/signal:ops",
requirement("tensorflow-cpu"),
],
)
@@ -142,16 +142,16 @@ void* InitAll(TfLiteContext* context, const char* buffer, size_t length) {
switch (tensor_type) {
case TensorType_INT16: {
return Init<int16_t, RfftInt16GetNeededMemory, RfftInt16Init>(
context, buffer, length);
return Init<int16_t, ::tflm_signal::RfftInt16GetNeededMemory,
::tflm_signal::RfftInt16Init>(context, buffer, length);
}
case TensorType_INT32: {
return Init<int32_t, RfftInt32GetNeededMemory, RfftInt32Init>(
context, buffer, length);
return Init<int32_t, ::tflm_signal::RfftInt32GetNeededMemory,
::tflm_signal::RfftInt32Init>(context, buffer, length);
}
case TensorType_FLOAT32: {
return Init<float, RfftFloatGetNeededMemory, RfftFloatInit>(
context, buffer, length);
return Init<float, ::tflm_signal::RfftFloatGetNeededMemory,
::tflm_signal::RfftFloatInit>(context, buffer, length);
}
default:
return nullptr;
@@ -183,13 +183,13 @@ TfLiteStatus EvalAll(TfLiteContext* context, TfLiteNode* node) {
switch (params->fft_type) {
case kTfLiteInt16: {
return Eval<int16_t, RfftInt16Apply>(context, node);
return Eval<int16_t, ::tflm_signal::RfftInt16Apply>(context, node);
}
case kTfLiteInt32: {
return Eval<int32_t, RfftInt32Apply>(context, node);
return Eval<int32_t, ::tflm_signal::RfftInt32Apply>(context, node);
}
case kTfLiteFloat32: {
return Eval<float, RfftFloatApply>(context, node);
return Eval<float, ::tflm_signal::RfftFloatApply>(context, node);
}
default:
return kTfLiteError;
@@ -208,22 +208,28 @@ TFLMRegistration* Register_RFFT() {
TFLMRegistration* Register_RFFT_FLOAT() {
static TFLMRegistration r = tflite::micro::RegisterOp(
Init<float, RfftFloatGetNeededMemory, RfftFloatInit>,
Prepare<float, kTfLiteFloat32>, Eval<float, RfftFloatApply>);
Init<float, ::tflm_signal::RfftFloatGetNeededMemory,
::tflm_signal::RfftFloatInit>,
Prepare<float, kTfLiteFloat32>,
Eval<float, ::tflm_signal::RfftFloatApply>);
return &r;
}
TFLMRegistration* Register_RFFT_INT16() {
static TFLMRegistration r = tflite::micro::RegisterOp(
Init<int16_t, RfftInt16GetNeededMemory, RfftInt16Init>,
Prepare<int16_t, kTfLiteInt16>, Eval<int16_t, RfftInt16Apply>);
Init<int16_t, ::tflm_signal::RfftInt16GetNeededMemory,
::tflm_signal::RfftInt16Init>,
Prepare<int16_t, kTfLiteInt16>,
Eval<int16_t, ::tflm_signal::RfftInt16Apply>);
return &r;
}
TFLMRegistration* Register_RFFT_INT32() {
static TFLMRegistration r = tflite::micro::RegisterOp(
Init<int32_t, RfftInt32GetNeededMemory, RfftInt32Init>,
Prepare<int32_t, kTfLiteInt32>, Eval<int32_t, RfftInt32Apply>);
Init<int32_t, ::tflm_signal::RfftInt32GetNeededMemory,
::tflm_signal::RfftInt32Init>,
Prepare<int32_t, kTfLiteInt32>,
Eval<int32_t, ::tflm_signal::RfftInt32Apply>);
return &r;
}
......
@@ -21,6 +21,9 @@ limitations under the License.
#include "signal/src/complex.h"
// TODO(b/286250473): remove namespace once de-duped libraries
namespace tflm_signal {
// RFFT (Real Fast Fourier Transform)
// FFT for real valued time domain inputs.
@@ -77,4 +80,6 @@ void* RfftFloatInit(int32_t fft_length, void* state, size_t state_size);
// * `output` must be of size (`fft_length` * 2) + 1 elements
void RfftFloatApply(void* state, const float* input, Complex<float>* output);
} // namespace tflm_signal
#endif // SIGNAL_SRC_RFFT_H_
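For orientation, here is a minimal caller sketch, not part of the commit, that strings the three float entry points together through their new `tflm_signal` namespace. The `RunFloatRfft` helper and the `kFftLength` value are hypothetical, the buffer sizes follow the header comments above, and `Complex` is assumed to still live in the global namespace per `signal/src/complex.h`.

```cpp
#include <cstdint>
#include <vector>

#include "signal/src/complex.h"
#include "signal/src/rfft.h"

// Hypothetical helper: runs one float RFFT over `input` (kFftLength samples).
void RunFloatRfft(const float* input) {
  constexpr int32_t kFftLength = 512;  // assumed to be a supported FFT length
  // Ask the library how much state memory this FFT length needs, then let
  // Init() set that state up inside a caller-owned buffer.
  const size_t state_size = tflm_signal::RfftFloatGetNeededMemory(kFftLength);
  std::vector<uint8_t> state_buffer(state_size);
  void* state =
      tflm_signal::RfftFloatInit(kFftLength, state_buffer.data(), state_size);
  // Output sized per the rfft.h comment: (`fft_length` * 2) + 1 elements.
  std::vector<Complex<float>> output((kFftLength * 2) + 1);
  tflm_signal::RfftFloatApply(state, input, output.data());
}
```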
@@ -20,6 +20,9 @@ limitations under the License.
#include "signal/src/kiss_fft_wrappers/kiss_fft_float.h"
#include "signal/src/rfft.h"
// TODO(b/286250473): remove namespace once de-duped libraries
namespace tflm_signal {
size_t RfftFloatGetNeededMemory(int32_t fft_length) {
size_t state_size = 0;
kiss_fft_float::kiss_fftr_alloc(fft_length, 0, nullptr, &state_size);
@@ -36,3 +39,5 @@ void RfftFloatApply(void* state, const float* input, Complex<float>* output) {
reinterpret_cast<const kiss_fft_scalar*>(input),
reinterpret_cast<kiss_fft_float::kiss_fft_cpx*>(output));
}
} // namespace tflm_signal
@@ -20,6 +20,9 @@ limitations under the License.
#include "signal/src/kiss_fft_wrappers/kiss_fft_int16.h"
#include "signal/src/rfft.h"
// TODO(b/286250473): remove namespace once de-duped libraries
namespace tflm_signal {
size_t RfftInt16GetNeededMemory(int32_t fft_length) {
size_t state_size = 0;
kiss_fft_fixed16::kiss_fftr_alloc(fft_length, 0, nullptr, &state_size);
@@ -37,3 +40,5 @@ void RfftInt16Apply(void* state, const int16_t* input,
reinterpret_cast<const kiss_fft_scalar*>(input),
reinterpret_cast<kiss_fft_fixed16::kiss_fft_cpx*>(output));
}
} // namespace tflm_signal
@@ -20,6 +20,9 @@ limitations under the License.
#include "signal/src/kiss_fft_wrappers/kiss_fft_int32.h"
#include "signal/src/rfft.h"
// TODO(b/286250473): remove namespace once de-duped libraries
namespace tflm_signal {
size_t RfftInt32GetNeededMemory(int32_t fft_length) {
size_t state_size = 0;
kiss_fft_fixed32::kiss_fftr_alloc(fft_length, 0, nullptr, &state_size);
@@ -36,4 +39,6 @@ void RfftInt32Apply(void* state, const int32_t* input,
static_cast<kiss_fft_fixed32::kiss_fftr_cfg>(state),
reinterpret_cast<const kiss_fft_scalar*>(input),
reinterpret_cast<kiss_fft_fixed32::kiss_fft_cpx*>(output));
}
\ No newline at end of file
}
} // namespace tflm_signal
@@ -82,21 +82,24 @@ class RfftOp : public tensorflow::OpKernel {
};
// TODO(b/286250473): change back name after name clash resolved
REGISTER_KERNEL_BUILDER(Name("SignalRfft")
.Device(tensorflow::DEVICE_CPU)
.TypeConstraint<float>("T"),
RfftOp<float, DT_FLOAT, RfftFloatGetNeededMemory,
RfftFloatInit, RfftFloatApply>);
REGISTER_KERNEL_BUILDER(Name("SignalRfft")
.Device(tensorflow::DEVICE_CPU)
.TypeConstraint<int16>("T"),
RfftOp<int16_t, DT_INT16, RfftInt16GetNeededMemory,
RfftInt16Init, RfftInt16Apply>);
REGISTER_KERNEL_BUILDER(Name("SignalRfft")
.Device(tensorflow::DEVICE_CPU)
.TypeConstraint<int32>("T"),
RfftOp<int32_t, DT_INT32, RfftInt32GetNeededMemory,
RfftInt32Init, RfftInt32Apply>);
REGISTER_KERNEL_BUILDER(
Name("SignalRfft")
.Device(tensorflow::DEVICE_CPU)
.TypeConstraint<float>("T"),
RfftOp<float, DT_FLOAT, ::tflm_signal::RfftFloatGetNeededMemory,
::tflm_signal::RfftFloatInit, ::tflm_signal::RfftFloatApply>);
REGISTER_KERNEL_BUILDER(
Name("SignalRfft")
.Device(tensorflow::DEVICE_CPU)
.TypeConstraint<int16>("T"),
RfftOp<int16_t, DT_INT16, ::tflm_signal::RfftInt16GetNeededMemory,
::tflm_signal::RfftInt16Init, ::tflm_signal::RfftInt16Apply>);
REGISTER_KERNEL_BUILDER(
Name("SignalRfft")
.Device(tensorflow::DEVICE_CPU)
.TypeConstraint<int32>("T"),
RfftOp<int32_t, DT_INT32, ::tflm_signal::RfftInt32GetNeededMemory,
::tflm_signal::RfftInt32Init, ::tflm_signal::RfftInt32Apply>);
} // namespace signal
} // namespace tensorflow
\ No newline at end of file
@@ -82,23 +82,56 @@ inline FloatArrayUniquePtr BuildTfLiteArray<float>(const int size) {
return FloatArrayUniquePtr(TfLiteFloatArrayCreate(size));
}
// Allocates a TFLiteArray of given size and initializes it.
// Allocates a TFLiteArray of given size and initializes it with the given
// values.
//
// `values` is expected to hold `size` elements.
template <class T>
TfLiteArrayUniquePtr<T> BuildTfLiteArray(const int size,
const T* const values) {
auto array = BuildTfLiteArray<T>(size);
if (array) {
memcpy(array->data, values, size * sizeof(T));
//
// If T is explicitly specified and the type of values is not the same as T,
// then a static_cast is performed.
template <class T = void, class U,
class Type = std::conditional_t<std::is_same<T, void>::value, U, T>>
TfLiteArrayUniquePtr<Type> BuildTfLiteArray(const int size,
const U* const values) {
TfLiteArrayUniquePtr<Type> array = BuildTfLiteArray<Type>(size);
// If size is 0, the array pointer may be null.
if (array && values) {
if (std::is_same<Type, U>::value) {
memcpy(array->data, values, size * sizeof(Type));
} else {
for (int i = 0; i < size; ++i) {
array->data[i] = static_cast<Type>(values[i]);
}
}
}
return array;
}
// Allocates a TFLiteArray and initializes it with the given array.
//
// `values` is expected to hold `size` elements.
template <class T, size_t N>
TfLiteArrayUniquePtr<T> BuildTfLiteArray(const T (&values)[N]) {
return BuildTfLiteArray<T>(static_cast<int>(N), values);
}
// Allocates a TFLiteArray and initializes it with the given values.
template <class T>
TfLiteArrayUniquePtr<T> BuildTfLiteArray(const std::vector<T>& values) {
return BuildTfLiteArray(static_cast<int>(values.size()), values.data());
//
// This uses SFINAE to only be picked up for types that implement `data()`
// and `size()` member functions. We cannot reuse detection facilities provided
// by Abseil in this code.
//
// To conform with the other overloads, we allow specifying the type of the
// array as well as deducing it from the container.
template <
class T = void, class Container,
class ElementType =
std::decay_t<decltype(*std::declval<Container>().data())>,
class SizeType = std::decay_t<decltype(std::declval<Container>().size())>,
class Type =
std::conditional_t<std::is_same<T, void>::value, ElementType, T>>
TfLiteArrayUniquePtr<Type> BuildTfLiteArray(const Container& values) {
return BuildTfLiteArray<Type>(static_cast<int>(values.size()), values.data());
}
// Allocates a TFLiteArray and initializes it with the given values.
......
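Taken together, the overloads above now cover three call shapes: raw pointer plus size (with an optional element-type conversion), C array, and any container exposing `data()` and `size()`. A short usage sketch, not from the commit, assuming the declarations sit in namespace `tflite` in `tensorflow/lite/array.h`:

```cpp
#include <cstdint>
#include <vector>

#include "tensorflow/lite/array.h"

void BuildArrayExamples() {
  // Pointer + size with an explicit element type: each int64_t value is
  // static_cast to int because T differs from the deduced U.
  const int64_t raw[] = {1, 2, 3};
  auto from_ptr = tflite::BuildTfLiteArray<int>(3, raw);

  // C-array overload: the size N is deduced from the array type.
  const float weights[] = {0.5f, 0.25f};
  auto from_c_array = tflite::BuildTfLiteArray(weights);

  // Container overload: picked up via SFINAE for types with data()/size().
  const std::vector<int> dims = {1, 224, 224, 3};
  auto from_vector = tflite::BuildTfLiteArray(dims);
}
```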
@@ -13,14 +13,16 @@ See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
// This file declares types used by the pure C inference API defined in c_api.h,
// some of which are also used in the C++ and C kernel and interpreter APIs.
/// WARNING: Users of TensorFlow Lite should not include this file directly,
/// but should instead include
/// "third_party/tensorflow/lite/c/c_api_types.h".
/// Only the TensorFlow Lite implementation itself should include this
/// file directly.
/// This file declares types used by the pure C inference API defined in
/// c_api.h, some of which are also used in the C++ and C kernel and interpreter
/// APIs.
// WARNING: Users of TensorFlow Lite should not include this file directly,
// but should instead include
// "third_party/tensorflow/lite/c/c_api_types.h".
// Only the TensorFlow Lite implementation itself should include this
// file directly.
// IWYU pragma: private, include "third_party/tensorflow/lite/c/c_api_types.h"
#ifndef TENSORFLOW_LITE_CORE_C_C_API_TYPES_H_
@@ -32,6 +34,10 @@ limitations under the License.
extern "C" {
#endif
/** \addtogroup c_api_types tensorflow/lite/c/c_api_types.h
* @{
*/
// Define TFL_CAPI_EXPORT macro to export a function properly with a shared
// library.
#ifdef SWIG
@@ -50,50 +56,51 @@ extern "C" {
#endif // _WIN32
#endif // SWIG
// Note that new error status values may be added in future in order to
// indicate more fine-grained internal states, therefore, applications should
// not rely on status values being members of the enum.
/// Note that new error status values may be added in future in order to
/// indicate more fine-grained internal states, therefore, applications should
/// not rely on status values being members of the enum.
typedef enum TfLiteStatus {
/// Success
kTfLiteOk = 0,
// Generally referring to an error in the runtime (i.e. interpreter)
/// Generally referring to an error in the runtime (i.e. interpreter)
kTfLiteError = 1,
// Generally referring to an error from a TfLiteDelegate itself.
/// Generally referring to an error from a TfLiteDelegate itself.
kTfLiteDelegateError = 2,
// Generally referring to an error in applying a delegate due to
// incompatibility between runtime and delegate, e.g., this error is returned
// when trying to apply a TF Lite delegate onto a model graph that's already
// immutable.
/// Generally referring to an error in applying a delegate due to
/// incompatibility between runtime and delegate, e.g., this error is returned
/// when trying to apply a TF Lite delegate onto a model graph that's already
/// immutable.
kTfLiteApplicationError = 3,
// Generally referring to serialized delegate data not being found.
// See tflite::delegates::Serialization.
/// Generally referring to serialized delegate data not being found.
/// See tflite::delegates::Serialization.
kTfLiteDelegateDataNotFound = 4,
// Generally referring to data-writing issues in delegate serialization.
// See tflite::delegates::Serialization.
/// Generally referring to data-writing issues in delegate serialization.
/// See tflite::delegates::Serialization.
kTfLiteDelegateDataWriteError = 5,
// Generally referring to data-reading issues in delegate serialization.
// See tflite::delegates::Serialization.
/// Generally referring to data-reading issues in delegate serialization.
/// See tflite::delegates::Serialization.
kTfLiteDelegateDataReadError = 6,
// Generally referring to issues when the TF Lite model has ops that cannot be
// resolved at runtime. This could happen when the specific op is not
// registered or built with the TF Lite framework.
/// Generally referring to issues when the TF Lite model has ops that cannot
/// be resolved at runtime. This could happen when the specific op is not
/// registered or built with the TF Lite framework.
kTfLiteUnresolvedOps = 7,
// Generally referring to invocation cancelled by the user.
// See `interpreter::Cancel`.
/// Generally referring to invocation cancelled by the user.
/// See `interpreter::Cancel`.
// TODO(b/194915839): Implement `interpreter::Cancel`.
// TODO(b/250636993): Cancellation triggered by `SetCancellationFunction`
// should also return this status code.
kTfLiteCancelled = 8,
} TfLiteStatus;
// Types supported by tensor
/// Types supported by tensor
typedef enum {
kTfLiteNoType = 0,
kTfLiteFloat32 = 1,
@@ -116,12 +123,12 @@ typedef enum {
kTfLiteInt4 = 18,
} TfLiteType;
// Legacy. Will be deprecated in favor of TfLiteAffineQuantization.
// If per-layer quantization is specified this field will still be populated in
// addition to TfLiteAffineQuantization.
// Parameters for asymmetric quantization. Quantized values can be converted
// back to float using:
// real_value = scale * (quantized_value - zero_point)
/// Legacy. Will be deprecated in favor of TfLiteAffineQuantization.
/// If per-layer quantization is specified this field will still be populated in
/// addition to TfLiteAffineQuantization.
/// Parameters for asymmetric quantization. Quantized values can be converted
/// back to float using:
/// real_value = scale * (quantized_value - zero_point)
typedef struct TfLiteQuantizationParams {
float scale;
int32_t zero_point;
@@ -130,39 +137,41 @@ typedef struct TfLiteQuantizationParams {
// --------------------------------------------------------------------------
// Opaque types used by c_api.h, c_api_opaque.h and common.h.
// TfLiteOpaqueContext is an opaque version of TfLiteContext;
/// TfLiteOpaqueContext is an opaque version of TfLiteContext;
typedef struct TfLiteOpaqueContext TfLiteOpaqueContext;
// TfLiteOpaqueNode is an opaque version of TfLiteNode;
/// TfLiteOpaqueNode is an opaque version of TfLiteNode;
typedef struct TfLiteOpaqueNode TfLiteOpaqueNode;
// TfLiteOpaqueTensor is an opaque version of TfLiteTensor;
/// TfLiteOpaqueTensor is an opaque version of TfLiteTensor;
typedef struct TfLiteOpaqueTensor TfLiteOpaqueTensor;
// TfLiteDelegate: allows delegation of nodes to alternative backends.
// Forward declaration of concrete type declared in common.h.
/// TfLiteDelegate: allows delegation of nodes to alternative backends.
/// Forward declaration of concrete type declared in common.h.
typedef struct TfLiteDelegate TfLiteDelegate;
// TfLiteOpaqueDelegateStruct: unconditionally opaque version of
// TfLiteDelegate; allows delegation of nodes to alternative backends.
//
// This is an abstract type that is intended to have the same
// role as TfLiteDelegate, but without exposing the implementation
// details of how delegates are implemented.
// WARNING: This is an experimental type and subject to change.
/// TfLiteOpaqueDelegateStruct: unconditionally opaque version of
/// TfLiteDelegate; allows delegation of nodes to alternative backends.
///
/// This is an abstract type that is intended to have the same
/// role as TfLiteDelegate, but without exposing the implementation
/// details of how delegates are implemented.
/// WARNING: This is an experimental type and subject to change.
typedef struct TfLiteOpaqueDelegateStruct TfLiteOpaqueDelegateStruct;
// TfLiteOpaqueDelegate: conditionally opaque version of
// TfLiteDelegate; allows delegation of nodes to alternative backends.
// For TF Lite in Play Services, this is an opaque type,
// but for regular TF Lite, this is just a typedef for TfLiteDelegate.
// WARNING: This is an experimental type and subject to change.
/// TfLiteOpaqueDelegate: conditionally opaque version of
/// TfLiteDelegate; allows delegation of nodes to alternative backends.
/// For TF Lite in Play Services, this is an opaque type,
/// but for regular TF Lite, this is just a typedef for TfLiteDelegate.
/// WARNING: This is an experimental type and subject to change.
#if TFLITE_WITH_STABLE_ABI || TFLITE_USE_OPAQUE_DELEGATE
typedef TfLiteOpaqueDelegateStruct TfLiteOpaqueDelegate;
#else
typedef TfLiteDelegate TfLiteOpaqueDelegate;
#endif
/** @} */
#ifdef __cplusplus
} // extern C
#endif
......
@@ -262,10 +262,19 @@ TfLiteStatus TfLiteTensorCopy(const TfLiteTensor* src, TfLiteTensor* dst) {
if (dst->dims) TfLiteIntArrayFree(dst->dims);
dst->dims = TfLiteIntArrayCopy(src->dims);
if (src->allocation_type == kTfLiteVariantObject) {
if (dst->allocation_type != kTfLiteVariantObject) return kTfLiteError;
// An edge case exists in control flow ops when they copy inputs to outputs
// before invoking any body, in this case the `dst` will not have its
// `allocation_type` set properly, so we handle here for now.
if (dst->allocation_type != kTfLiteVariantObject) {
TfLiteTensorDataFree(dst);
dst->allocation_type = kTfLiteVariantObject;
}
auto* dst_vd = static_cast<VariantData*>(dst->data.data);
auto* src_vd = static_cast<VariantData*>(src->data.data);
// Implicitly casted via return from `CloneTo`. Don't need static cast here.
// `CloneTo` will handle the case when `dst_vd` is nullptr, so it is safe
// to `CloneTo` something which was "freed". Also, returning from `CloneTo`
// will implicitly cast to `VariantData`; don't need static cast here.
dst->data.data = src_vd->CloneTo(dst_vd);
} else {
memcpy(dst->data.raw, src->data.raw, src->bytes);
......
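The new comment explains why `dst` may arrive without `kTfLiteVariantObject` set: control flow ops copy inputs to outputs before any subgraph body runs, and the fix now frees `dst`'s payload and retags it instead of failing. For the ordinary non-variant path, here is a hedged sketch of the calling contract; `CopyDenseTensor` and its malloc-based ownership are illustrative assumptions, not part of the commit:

```cpp
#include <cstdlib>

#include "tensorflow/lite/c/common.h"

// Illustrative wrapper: TfLiteTensorCopy() copies src's dims and then
// memcpy's src->bytes into dst->data.raw, so dst needs its own buffer of
// src->bytes before the call.
TfLiteStatus CopyDenseTensor(const TfLiteTensor* src, TfLiteTensor* dst) {
  free(dst->data.raw);  // assumes dst's previous buffer was malloc'd by us
  dst->data.raw = static_cast<char*>(malloc(src->bytes));
  dst->bytes = src->bytes;
  return TfLiteTensorCopy(src, dst);
}
```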
@@ -860,7 +860,7 @@ typedef struct TfLiteContext {
// }
//
// NOTE: The context owns the memory referenced by partition_params_array. It
// will be cleared with another call to PreviewDelegateParitioning, or after
// will be cleared with another call to PreviewDelegatePartitioning, or after
// TfLiteDelegateParams::Prepare returns.
//
// WARNING: This is an experimental interface that is subject to change.
@@ -920,6 +920,64 @@ typedef struct TfLiteContext {
// field is the exactly the same as with `TfLiteRegistration`.
typedef struct TfLiteRegistrationExternal TfLiteRegistrationExternal;
// The valid values of the `inplace_operator` field in `TfLiteRegistration`.
// This allows an op to signal to the runtime that the same data pointer
// may be passed as an input and output without impacting the result.
// This does not mean that the memory can safely be reused, it is up to the
// runtime to determine this, e.g. if another op consumes the same input or not
// or if an input tensor has sufficient memory allocated to store the output
// data.
//
// Setting these flags authorizes the runtime to set the data pointers of an
// input and output tensor to the same value. In such cases, the memory required
// by the output must be less than or equal to that required by the shared
// input, never greater. If kTfLiteInplaceOpDataUnmodified is set, then the
// runtime can share the same input tensor with multiple operator's outputs,
// provided that kTfLiteInplaceOpDataUnmodified is set for all of them.
// Otherwise, if an input tensor is consumed by multiple operators, it may only
// be shared with the operator which is the last to consume it.
//
// Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.
typedef enum {
// The default value. This indicates that the same data pointer cannot safely
// be passed as an op's input and output.
kTfLiteInplaceOpNone = 0,
// This indicates that an op's first output's data is identical to its first
// input's data, for example Reshape.
kTfLiteInplaceOpDataUnmodified = 1,
// Setting kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput means
// that InputN may be shared with OutputN instead of with the first output.
// This flag requires one or more of kTfLiteInplaceOpInputNShared to be set.
kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput = 2,
// kTfLiteInplaceOpInputNShared indicates that it is safe for an op to share
// InputN's data pointer with an output tensor. If
// kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is set then
// kTfLiteInplaceOpInputNShared indicates that InputN may be shared
// with OutputN, otherwise kTfLiteInplaceOpInputNShared indicates that InputN
// may be shared with the first output.
//
// Indicates that an op's first input may be shared with the first output
// tensor. kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput has
// no impact on the behavior allowed by this flag.
kTfLiteInplaceOpInput0Shared = 4,
// Indicates that an op's second input may be shared with the first output
// if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is not set
// or second output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput
// is set.
kTfLiteInplaceOpInput1Shared = 8,
// Indicates that an op's third input may be shared with the first output
// if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is not set
// or third output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is
// set.
kTfLiteInplaceOpInput2Shared = 16,
// Placeholder to ensure that enum can hold 64 bit values to accommodate
// future fields.
kTfLiteInplaceOpMaxValue = UINT64_MAX,
} TfLiteInPlaceOp;
// The number of shareable inputs supported.
static const int kTfLiteMaxSharableOpInputs = 3;
typedef struct TfLiteRegistration {
// Initializes the op from serialized data.
// Called only *once* for the lifetime of the op, so any one-time allocations
@@ -1000,8 +1058,37 @@ typedef struct TfLiteRegistration {
// does not support asynchronous execution for this `node`.
struct TfLiteAsyncKernel* (*async_kernel)(TfLiteContext* context,
TfLiteNode* node);
// Indicates if an operator's output may safely overwrite its inputs.
// See the comments in `TfLiteInPlaceOp`.
uint64_t inplace_operator;
} TfLiteRegistration;
/// \private
// Old version of `TfLiteRegistration` to maintain binary backward
// compatibility.
// The legacy registration type must be a POD struct type whose field types must
// be a prefix of the field types in TfLiteRegistration, and offset of the first
// field in TfLiteRegistration that is not present in the legacy registration
// type must be greater than or equal to the size of the legacy registration
// type.
// WARNING: This structure is deprecated / not an official part of the
// API. It should be only used for binary backward compatibility.
typedef struct TfLiteRegistration_V3 {
void* (*init)(TfLiteContext* context, const char* buffer, size_t length);
void (*free)(TfLiteContext* context, void* buffer);
TfLiteStatus (*prepare)(TfLiteContext* context, TfLiteNode* node);
TfLiteStatus (*invoke)(TfLiteContext* context, TfLiteNode* node);
const char* (*profiling_string)(const TfLiteContext* context,
const TfLiteNode* node);
int32_t builtin_code;
const char* custom_name;
int version;
TfLiteRegistrationExternal* registration_external;
struct TfLiteAsyncKernel* (*async_kernel)(TfLiteContext* context,
TfLiteNode* node);
} TfLiteRegistration_V3;
/// \private
// Old version of `TfLiteRegistration` to maintain binary backward
// compatibility.
@@ -1314,9 +1401,9 @@ class AbstractVariantData : public VariantData {
// If the output is still allocated, then its object may still be
// in its life time and the destructor must be called before re-using the
// buffer.
// This may actual have a non-negligle effect on perfomance if the
// This may actual have a non-negligible effect on performance if the
// destructor is complex. A future iteration may
// introduce copy or move asignment semantics, allowing for the
// introduce copy or move assignment semantics, allowing for the
// underlying implementation to optimize for this case.
auto* derived = static_cast<ErasedDerived*>(maybe_alloc);
derived->~ErasedDerived();
......
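The `inplace_operator` field added to `TfLiteRegistration` is a plain `uint64_t` holding `TfLiteInPlaceOp` bits. A sketch of how a kernel author might set it, illustrative only, with hypothetical function names and the usual callbacks elided:

```cpp
#include "tensorflow/lite/c/common.h"

// Reshape-style op: output 0 carries input 0's bytes unmodified, so the
// runtime may alias the two tensors' data pointers.
TfLiteRegistration MakePassThroughRegistration() {
  TfLiteRegistration r = {};  // init/prepare/invoke omitted in this sketch
  r.inplace_operator = kTfLiteInplaceOpDataUnmodified;
  return r;
}

// Elementwise binary op: either of the first two inputs may safely be
// overwritten by the first output.
TfLiteRegistration MakeBinaryInplaceRegistration() {
  TfLiteRegistration r = {};
  r.inplace_operator =
      kTfLiteInplaceOpInput0Shared | kTfLiteInplaceOpInput1Shared;
  return r;
}
```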
@@ -162,7 +162,6 @@
},
"outputs": [],
"source": [
"%tensorflow_version 1.x\n",
"import tensorflow as tf"
]
},
@@ -420,7 +419,7 @@
},
"outputs": [],
"source": [
"with tf.Session() as sess:\n",
"with tf.compat.v1.Session() as sess:\n",
" float_converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL)\n",
" float_tflite_model = float_converter.convert()\n",
" float_tflite_model_size = open(FLOAT_MODEL_TFLITE, \"wb\").write(float_tflite_model)\n",
@@ -428,8 +427,8 @@
"\n",
" converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL)\n",
" converter.optimizations = [tf.lite.Optimize.DEFAULT]\n",
" converter.inference_input_type = tf.lite.constants.INT8\n",
" converter.inference_output_type = tf.lite.constants.INT8\n",
" converter.inference_input_type = tf.int8\n",
" converter.inference_output_type = tf.int8\n",
" def representative_dataset_gen():\n",
" for i in range(100):\n",
" data, _ = audio_processor.get_data(1, i*1, model_settings,\n",
@@ -472,7 +471,7 @@
"def run_tflite_inference(tflite_model_path, model_type=\"Float\"):\n",
" # Load test data\n",
" np.random.seed(0) # set random seed for reproducible test results.\n",
" with tf.Session() as sess:\n",
" with tf.compat.v1.Session() as sess:\n",
" test_data, test_labels = audio_processor.get_data(\n",
" -1, 0, model_settings, BACKGROUND_FREQUENCY, BACKGROUND_VOLUME_RANGE,\n",
" TIME_SHIFT_MS, 'testing', sess)\n",
......
@@ -48,13 +48,13 @@ $(eval $(call microlite_test,unidirectional_sequence_lstm_test,\
$(TENSORFLOW_ROOT)tensorflow/lite/micro/kernels/testdata/lstm_test_data.cc,\
$(TENSORFLOW_ROOT)tensorflow/lite/micro/kernels/testdata/lstm_test_data.h))
$(eval $(call microlite_test,kernel_fft_test,\
$(eval $(call microlite_test,kernel_signal_fft_test,\
$(TENSORFLOW_ROOT)signal/micro/kernels/fft_test.cc \
$(TENSORFLOW_ROOT)signal/micro/kernels/fft_flexbuffers_generated_data.cc \
$(TENSORFLOW_ROOT)signal/testdata/fft_test_data.cc, \
$(TENSORFLOW_ROOT)signal/micro/kernels/fft_flexbuffers_generated_data.h))
$(eval $(call microlite_test,kernel_window_test,\
$(eval $(call microlite_test,kernel_signal_window_test,\
$(TENSORFLOW_ROOT)signal/micro/kernels/window_test.cc \
$(TENSORFLOW_ROOT)signal/micro/kernels/window_flexbuffers_generated_data.cc, \
$(TENSORFLOW_ROOT)signal/micro/kernels/window_flexbuffers_generated_data.h))
......
/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.
/* Copyright 2023 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -31,6 +31,8 @@ namespace tflite {
namespace {
constexpr int kCondTensor = 0;
struct OpData {
int then_subgraph_index;
int else_subgraph_index;
@@ -52,7 +54,8 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
// The first input is the condition.
tflite::MicroContext* micro_context = tflite::GetMicroContext(context);
TfLiteTensor* cond = micro_context->AllocateTempInputTensor(node, 0);
TfLiteTensor* cond =
micro_context->AllocateTempInputTensor(node, kCondTensor);
TF_LITE_ENSURE(context, cond != nullptr);
TF_LITE_ENSURE_EQ(context, cond->type, kTfLiteBool);
@@ -86,11 +89,10 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
const OpData* op_data = reinterpret_cast<OpData*>(node->user_data);
tflite::MicroContext* micro_context = tflite::GetMicroContext(context);
TfLiteTensor* cond = micro_context->AllocateTempInputTensor(node, 0);
const TfLiteEvalTensor* cond =
tflite::micro::GetEvalInput(context, node, kCondTensor);
TF_LITE_ENSURE(context, cond != nullptr);
bool cond_value = cond->data.b[0];
micro_context->DeallocateTempTfLiteTensor(cond);
const bool cond_value = cond->data.b[0];
MicroGraph* graph_info = &micro_context->graph();
// Currently we copy the input / output between the subgraphs.
......
# RISC-V MCU
This folder contains TFLite kernel operations optimized for RISC-V micro
controllers.
It is designed to be portable even to 'bare metal', so it follows the same
design goals as the micro experimental port.
/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
// TODO(b/121324430): Add test for DebugLog functions
// TODO(b/121275099): Remove dependency on debug_log once the platform supports
// printf
#include <stdio.h>
extern "C" void DebugLog(const char* s) { puts(s); }
# Copyright 2021 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
using sysbus
mach create
# This is a little hack to increase our ram size from 16k to 256k, since some
# tests require a larger ram size. The platform's linker script is also modified
# to account for the larger ram size, see patch_sifive_sdk() in download_and_extract.sh
set platform
"""
using "platforms/cpus/sifive-fe310.repl"
dtim:
size: 0x40000
"""
machine LoadPlatformDescriptionFromString $platform
sysbus Tag <0x10008000 4> "PRCI_HFROSCCFG" 0xFFFFFFFF
sysbus Tag <0x10008008 4> "PRCI_PLLCFG" 0xFFFFFFFF
showAnalyzer uart0 Antmicro.Renode.Analyzers.LoggingUartAnalyzer
uart0 CreateFileBackend $logfile true
cpu PerformanceInMips 320
@@ -314,9 +314,6 @@ $(TENSORFLOW_ROOT)tensorflow/lite/micro/memory_planner/non_persistent_buffer_pla
MICROLITE_CC_KERNEL_SRCS := \
$(TENSORFLOW_ROOT)signal/micro/kernels/rfft.cc \
$(TENSORFLOW_ROOT)signal/micro/kernels/window.cc \
$(TENSORFLOW_ROOT)signal/src/kiss_fft_wrappers/kiss_fft_float.cc \
$(TENSORFLOW_ROOT)signal/src/kiss_fft_wrappers/kiss_fft_int16.cc \
$(TENSORFLOW_ROOT)signal/src/kiss_fft_wrappers/kiss_fft_int32.cc \
$(TENSORFLOW_ROOT)signal/src/rfft_float.cc \
$(TENSORFLOW_ROOT)signal/src/rfft_int16.cc \
$(TENSORFLOW_ROOT)signal/src/rfft_int32.cc \
@@ -456,7 +453,6 @@ $(TFL_CC_SRCS)
MICROLITE_CC_HDRS := \
$(wildcard $(TENSORFLOW_ROOT)signal/micro/kernels/*.h) \
$(wildcard $(TENSORFLOW_ROOT)signal/src/*.h) \
$(wildcard $(TENSORFLOW_ROOT)signal/src/kiss_fft_wrappers/*.h) \
$(wildcard $(TENSORFLOW_ROOT)tensorflow/lite/micro/*.h) \
$(wildcard $(TENSORFLOW_ROOT)tensorflow/lite/micro/benchmarks/*model_data.h) \
$(wildcard $(TENSORFLOW_ROOT)tensorflow/lite/micro/kernels/*.h) \
......
# TODO(b/288938993): de-dupe internal wrapper directory and remove this
MICROLITE_CC_KERNEL_SRCS += \
$(TENSORFLOW_ROOT)signal/src/kiss_fft_wrappers/kiss_fft_float.cc \
$(TENSORFLOW_ROOT)signal/src/kiss_fft_wrappers/kiss_fft_int16.cc \
$(TENSORFLOW_ROOT)signal/src/kiss_fft_wrappers/kiss_fft_int32.cc
MICROLITE_CC_HDRS += \
$(wildcard $(TENSORFLOW_ROOT)signal/src/kiss_fft_wrappers/*.h) \
@@ -53,9 +53,7 @@ PLATFORM_ARGS = \
-DPTHREAD_STUBS \
-DUSE_PREALLOCATED_BUFFER \
-D_HAS_C9X \
-DTF_LITE_USE_CTIME \
-MMD \
-DHEXAGON \
-Wall \
-Wextra \
-Wno-missing-field-initializers \
......