- July 15, 2021 (2 commits)
-
Committed by Advait Jain
BUG=http://b/193437031
-
Committed by Advait Jain
* Have a TFLM-specific version of lite/kernels/op_macros.h.
* After this change, we can remove the TFLM-specific code from upstream TensorFlow.
* Both the TfLite and TFLM implementations of op_macros.h will be simplified.
BUG=http://b/187728891
NO_CHECK_TFLITE_FILE=lite/kernels/kernel_util.cc will be modified in upstream as well.
* Fix the target-specific makefile build.
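As a loose illustration of the kind of fatal-error macro such a header provides (the macro below is a made-up stand-in, not the contents of the actual op_macros.h):
```
// Hypothetical TFLM-style fatal-error macro: report the message and abort,
// without pulling in TfLite-only dependencies. Illustrative only.
#include <cstdio>
#include <cstdlib>

#define TFLM_FATAL_SKETCH(msg)                  \
  do {                                          \
    std::fprintf(stderr, "FATAL: %s\n", (msg)); \
    std::abort();                               \
  } while (0)

int main() {
  const bool params_ok = true;  // placeholder for a kernel's parameter check
  if (!params_ok) {
    TFLM_FATAL_SKETCH("unsupported op parameters");
  }
  return 0;
}
```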
-
- July 14, 2021 (3 commits)
-
Committed by Advait Jain
The corresponding internal change is http://cl/384575105.
BUG=http://b/188064725
-
Committed by TFLM-bot
-
Committed by cad-audio
* XTENSA_CODE_REFACTOR: depthwise_conv. Refactored the Xtensa HiFi optimized code for the depthwise_conv operator.
* Small fixes for the CI.
Co-authored-by: bhanu prakash bandaru venkata <bhanup@cadence.com>
Co-authored-by: Advait Jain <advaitjain@google.com>
-
- July 13, 2021 (5 commits)
-
Committed by Advait Jain
-
Committed by Advait Jain
* Modify a tflite file in the tflite-micro repository.
* Fix the path.
* Update the workflow logic to complete the workflow without error with NO_CHECK_TFLITE_FILES.
* Revert the change to the tflite files.
-
Committed by jwithers
* Fail PRs that overwrite a set of files from upstream.
* Change some names, modify Python script logic, etc.
* Small fixes.
* Use a more restrictive TFLM-bot token.
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
Co-authored-by: Advait Jain <advaitjain@google.com>
-
Committed by Advait Jain
* Force FLATBUFFERS_LOCALE_INDEPENDENT=0. With this change, we no longer need a #define in the sources that directly or indirectly include flatbuffers/base.h.
BUG=http://b/193264978
* Fix the Xtensa script.
-
Committed by Alan Green
* micro_profiler.cc: add CSV output. Adds a method to log output in CSV form, suitable for import into a spreadsheet.
* micro_profiler.cc: remove ms from CSV output. Removes milliseconds from the CSV output.
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
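A minimal sketch of the kind of CSV logging such a method can produce (the struct and function names below are illustrative stand-ins, not the actual MicroProfiler API):
```
// Illustrative sketch: print per-event profiling data as "tag,ticks" CSV
// rows that can be pasted into a spreadsheet. Not the real MicroProfiler.
#include <cstdint>
#include <cstdio>

struct ProfileEvent {
  const char* tag;
  uint32_t ticks;
};

// One header row, then one row per profiled event.
void LogCsvSketch(const ProfileEvent* events, int num_events) {
  std::printf("\"Event\",\"Ticks\"\n");
  for (int i = 0; i < num_events; ++i) {
    std::printf("%s,%u\n", events[i].tag,
                static_cast<unsigned>(events[i].ticks));
  }
}

int main() {
  const ProfileEvent events[] = {{"CONV_2D", 1200}, {"SOFTMAX", 80}};
  LogCsvSketch(events, 2);
  return 0;
}
```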
-
- July 10, 2021 (4 commits)
-
Committed by Shlomi Regev
* Check for the case where the Operators vector in a tflite model is missing. Some of our tools remove the vector if it's empty (i.e. the subgraph doesn't contain ops). The check is added until the tools are fixed. BUG=http://b/192589496
* Move NumSubgraphOperators to flatbuffer_utils.h/cc.
* Fix the CI errors.
* Remove `#define FLATBUFFERS_LOCALE_INDEPENDENT 0`; we will handle that via the Makefile instead.
* Add -DFLATBUFFERS_LOCALE_INDEPENDENT=0 to the makefile.
* Remove references to flatbuffers from kernel_utils.cc/h.
* Fix the build.
* Run buildifier.
Co-authored-by: Advait Jain <advaitjain@google.com>
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
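A minimal sketch of the guard described above, assuming the TFLite schema types; the helper name mirrors the commit, but the body is illustrative rather than the repository's actual implementation:
```
// Treat a missing (null) Operators vector the same as an empty one. Some
// tools strip the vector entirely when a subgraph contains no ops, so
// operators() can legitimately return nullptr here.
#include <cstddef>

#include "tensorflow/lite/schema/schema_generated.h"

inline size_t NumSubgraphOperatorsSketch(const tflite::SubGraph* subgraph) {
  const auto* ops = subgraph->operators();
  return (ops == nullptr) ? 0 : ops->size();
}
```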
-
Committed by yair_ehrenwald
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Committed by yair_ehrenwald
* Fixed the scratch buffer extern declaration in ceva_common.h.
* Add optimized depthwise conv for BX1 and SP500.
* Fixed the year in the copyright notice.
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Committed by Jens Elofsson
-
- July 9, 2021 (5 commits)
-
Committed by Advait Jain
-
Committed by Shlomi Regev
- Save ~4 KB of RAM by implementing a lightweight version of the vector, which only accesses integer values, without implicit conversion.
- Change Micro kernels to access flexbuffers as vectors instead of maps, which is recommended by the flexbuffers documentation for efficiency. Since the values in the vector are ordered alphabetically by their keys, the kernels can access them by index instead.
- Revert detection_postprocess to the native flexbuffers API. The LiteVector API doesn't support IsNull(), and I prefer not to support two custom flexbuffer APIs.
Co-authored-by: Nat Jeffries <natmjeffries@gmail.com>
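A short illustration of the map-style versus vector-style access pattern described above; the keys ("padding", "stride") are made-up example values, not taken from any particular TFLM kernel:
```
// Map-style access looks each value up by key; vector-style access relies
// on flexbuffer maps keeping their keys sorted, so values can be read by
// index instead.
#include <cstdint>
#include <cstdio>
#include <vector>

#include "flatbuffers/flexbuffers.h"

int main() {
  // Encode a small custom-options blob, roughly as a converter might.
  flexbuffers::Builder fbb;
  fbb.Map([&]() {
    fbb.Int("padding", 1);
    fbb.Int("stride", 2);
  });
  fbb.Finish();
  const std::vector<uint8_t>& buffer = fbb.GetBuffer();

  // Map-style: key lookup on every read.
  auto m = flexbuffers::GetRoot(buffer).AsMap();
  std::printf("stride via map lookup: %d\n", m["stride"].AsInt32());

  // Vector-style: keys sort alphabetically, so "padding" is index 0 and
  // "stride" is index 1.
  auto values = m.Values();
  std::printf("stride via index:      %d\n", values[1].AsInt32());
  return 0;
}
```
Reading by index avoids a per-read key search, which is why the flexbuffers documentation recommends it for hot paths.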
-
Committed by Advait Jain
BUG=http://b/192614484
-
Committed by Advait Jain
-
Committed by Jens Elofsson
* Remove MICROLITE_CC_KERNELS_SRCS from the MICROLITE_CC_SRCS list. This stops the kernel sources from being compiled into both the core objects and the kernel objects.
* Add the kernel sources to list_library_sources.
* Fix the project generation presubmit.
Co-authored-by: Nat Jeffries <natmjeffries@gmail.com>
-
- July 8, 2021 (2 commits)
-
Committed by Måns Nilsson
Change-Id: I23e7425445fd8fdb37be0ebb43cb9c8529ceb368
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Committed by yair_ehrenwald
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
- July 7, 2021 (1 commit)
-
Committed by TFLM-bot
-
- July 2, 2021 (1 commit)
-
Committed by Nat Jeffries
Previously, some sources were built before THIRD_PARTY_TARGETS was downloaded, meaning that dependencies in those sources on flatbuffers, gemmlowp, etc. could be missed. An alternative to this "dependency for every source" approach could be to add $(THIRD_PARTY_TARGETS) as the first dependency of $(MICROLITE_LIB_PATH), but this relies on Make dependency ordering, which is ill-advised.
Background: https://stackoverflow.com/questions/9159960/order-of-processing-components-in-makefile
-
- June 30, 2021 (2 commits)
-
Committed by Nat Jeffries
Lots of large models have >50 ops, so upping the max number to 1024 seems reasonable. Most TFLM models should not exceed 1024 ops.
-
Committed by Nat Jeffries
-
- June 29, 2021 (1 commit)
-
Committed by TFLM-bot
-
- June 26, 2021 (4 commits)
-
Committed by Nat Jeffries
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Committed by Nat Jeffries
-
Committed by Advait Jain
Verified that the following commands work:
```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=hexagon OPTIMIZED_KERNEL_DIR=hexagon OPTIMIZED_KERNEL_DIR_PREFIX=third_party HEXAGON_TFLM_LIB=~/Qualcomm/tflm_google/hexagon_tflm_core.a test_kernel_fully_connected_test
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=hexagon OPTIMIZED_KERNEL_DIR=hexagon OPTIMIZED_KERNEL_DIR_PREFIX=third_party HEXAGON_TFLM_LIB=~/Qualcomm/tflm_google/hexagon_tflm_core.a test_kernel_svdf_test
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=hexagon OPTIMIZED_KERNEL_DIR=hexagon OPTIMIZED_KERNEL_DIR_PREFIX=third_party HEXAGON_TFLM_LIB=~/Qualcomm/tflm_google/hexagon_tflm_core.a run_keyword_benchmark
```
BUG=http://b/174781826
-
Committed by Advait Jain
The TFLM Makefile itself ensures that OPTIMIZED_KERNEL_DIR is a valid path. However, for some special cases (such as http://cl/379964344), we would like to allow the specialization to gracefully handle a directory that does not exist in the tree.
BUG=see use-case from http://cl/379964344
-
- June 25, 2021 (2 commits)
-
Committed by TFLM-bot
-
Committed by Shlomi Regev
- Add external library references to the Hexagon FFT functions.
-
- June 24, 2021 (2 commits)
-
Committed by TFLM-bot
-
Committed by Advait Jain
-
- June 23, 2021 (4 commits)
-
Committed by Nat Jeffries
Add support for 8x16 kernels in TFLM: 16-bit activations and 8-bit weights for a subset of the TFLM kernels.
BUG=http://b/184839019
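A hedged sketch of the type combination that "8x16" refers to; the enum and function below are simplified stand-ins for illustration, not an actual TFLM kernel's type checks:
```
// The 8x16 quantized path pairs 16-bit activations with 8-bit weights.
#include <cstdio>

enum class TensorType { kInt8, kInt16, kFloat32 };

bool IsInt16x8Path(TensorType input, TensorType filter, TensorType output) {
  // Input/output activations are int16; the filter (weights) tensor is int8.
  return input == TensorType::kInt16 && filter == TensorType::kInt8 &&
         output == TensorType::kInt16;
}

int main() {
  const bool ok = IsInt16x8Path(TensorType::kInt16, TensorType::kInt8,
                                TensorType::kInt16);
  std::printf("8x16 path supported: %s\n", ok ? "yes" : "no");
  return 0;
}
```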
-
Committed by Advait Jain
* Fix the Vision P6 build. Tested the following command:
```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=vision_p6 XTENSA_CORE=P6_200528 build -j8
```
* Fix the Fusion F1 build.
-
Committed by Advait Jain
-
Committed by jwithers
* Check the PR description for a BUG= line.
* Changed the file name and added some more documentation.
* Move the if to be on the step instead of at the job. This allows the job to succeed (instead of being skipped) when the BUG= text is found, and as a result we can have it be a required status check.
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
Co-authored-by: Advait Jain <advaitjain@google.com>
-
- June 22, 2021 (2 commits)
-
Committed by Advait Jain
* Share code with the CMSIS implementation.
* Groundwork needed to add in the Xtensa implementation.
BUG=https://github.com/tensorflow/tflite-micro/issues/205
-
Committed by Ryan Kuester
* Remove lite-specific code from the copy of SPACE_TO_DEPTH. Remove the bulk of lite-specific code from the micro implementation of the SPACE_TO_DEPTH operator:
  - Flatten the namespace
  - Don't resize output tensors
  - Remove types other than int8 and float32
  - Don't use gtest
* Port the SPACE_TO_DEPTH operator from lite. Port the SPACE_TO_DEPTH operator from lite to micro; add the operator and its test to the build.
Co-authored-by: Pete Warden <pete@petewarden.com>
-