- 21 July 2021, 3 commits
-
-
Submitted by Advait Jain
* Remove distinction between fusion_f1 and hifi4. Hifi4, Fusion F1 and Hifi3Z all use the same underlying xannlib hifi4 optimizations, and with this change we can specify TARGET_ARCH=hifi4 for each of these cores. BUG=http://b/194225949
* Fix the build.
-
Submitted by Kun Lu
Use the base class version of AllocateFromTail() instead of the derived class version in FlatBufferVectorToTfLiteTypeArray() to solve the issue. Signed-off-by: Kun-Lu <kun.lu@ibm.com>
-
Submitted by Artem Tsvetkov
-
- 20 July 2021, 1 commit
-
-
Submitted by Michael O'Cleirigh
Copied the sparkfun-edge action and adjusted it to call the esp32 CI test script. Changed the job name from cortex_m to esp32. Inserted the README entry at the alphabetically appropriate index. Signed-off-by: Michael O'Cleirigh <michael.ocleirigh@gmail.com> Co-authored-by: Michael O'Cleirigh <michael.ocleirigh@gmail.com>
-
- 18 July 2021, 1 commit
-
-
Submitted by TFLM-bot
-
- 17 July 2021, 3 commits
-
-
Submitted by Advait Jain
-
Submitted by cad-audio
* Cadence HiFi 5 Neural Network Library v1.6.0: updated the download script to use the hifi5 v1.6.0 archive. The depthwise_conv kernel is updated and the patch is no longer required. The quantize optimized kernel was renamed in v1.6.0; updated with the new kernel name.
* Fix formatting.
Co-authored-by: bhanu prakash bandaru venkata <bhanup@cadence.com> Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com> Co-authored-by: Advait Jain <advaitjain@google.com>
-
Submitted by Artem Tsvetkov
* Updated utility functions to work with embARC MLI Library 2.0.
* Updated copyrights in several files.
* Minor fix for mli_tf_utils.h.
* Update mli_tf_utils.h.
-
- 16 July 2021, 3 commits
-
-
Submitted by Artem Tsvetkov
* Updated pooling kernel to work with embARC MLI Library 2.0.
* Update pooling.cc copyright.
-
Submitted by Artem Tsvetkov
* Updated fully_connected kernel to work with embARC MLI Library 2.0.
* Update fully_connected.cc.
* Minor fix for fully_connected activation functions.
-
Submitted by Artem Tsvetkov
* Updated depthwise_conv kernel to work with embARC MLI Library 2.0.
* Update depthwise_conv.cc.
* Minor fix for depthwise conv activation functions.
-
- 15 July 2021, 3 commits
-
-
Submitted by Artem Tsvetkov
* Updated convolution kernel to work with embARC MLI Library 2.0.
* Update conv.cc.
* Minor fix for conv2d activation functions.
-
Submitted by Advait Jain
BUG=http://b/193437031
-
Submitted by Advait Jain
* Have a TFLM-specific version of lite/kernels/op_macros.h.
* After this change, we can remove the TFLM-specific code from upstream TensorFlow.
* Both the TfLite and TFLM implementations of op_macros.h will be simplified. BUG=http://b/187728891 NO_CHECK_TFLITE_FILE=lite/kernels/kernel_util.cc will be modified in upstream as well.
* Fix target-specific makefile build.
-
- 14 July 2021, 3 commits
-
-
Submitted by Advait Jain
The corresponding internal change is http://cl/384575105 BUG=http://b/188064725
-
Submitted by TFLM-bot
-
Submitted by cad-audio
* XTENSA_CODE_REFACTOR: depthwise_conv. Refactored the Xtensa HiFi optimized code for the depthwise_conv operator.
* Small fixes for the CI.
Co-authored-by: bhanu prakash bandaru venkata <bhanup@cadence.com> Co-authored-by: Advait Jain <advaitjain@google.com>
-
- 13 July 2021, 5 commits
-
-
Submitted by Advait Jain
-
Submitted by Advait Jain
* Modify a tflite file in the tflite-micro repository.
* Fix the path.
* Update workflow logic to complete the workflow without error with NO_CHECK_TFLITE_FILES.
* Revert the change to the tflite files.
-
Submitted by jwithers
* Fail PRs that overwrite a set of files from upstream.
* Change some names, modify Python script logic, etc.
* Small fixes.
* Use a more restrictive TFLM-bot token.
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com> Co-authored-by: Advait Jain <advaitjain@google.com>
-
Submitted by Advait Jain
* Force FLATBUFFERS_LOCALE_INDEPENDENT=0. With this change, we no longer need a #define in the sources that directly or indirectly include flatbuffers/base.h. BUG=http://b/193264978
* Fix xtensa script.
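Moving the define out of the sources and into the build could look roughly like this makefile fragment (the flag variable names are illustrative, not the actual TFLM makefile layout):

```make
# Hypothetical fragment: pass the flatbuffers locale setting via compiler
# flags so no source file needs its own #define before including
# flatbuffers/base.h.
CXXFLAGS += -DFLATBUFFERS_LOCALE_INDEPENDENT=0
CCFLAGS  += -DFLATBUFFERS_LOCALE_INDEPENDENT=0
```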
-
Submitted by Alan Green
* micro_profiler.cc: add CSV output. Adds a method to log output in CSV form, suitable for input to a spreadsheet.
* micro_profiler.cc: remove ms from CSV output. Removes milliseconds from the CSV output.
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
- 10 July 2021, 4 commits
-
-
Submitted by Shlomi Regev
* Check for the case where the Operators vector in a tflite model is missing. Some of our tools remove the vector if it is empty (i.e. the subgraph doesn't contain ops). The check is added until the tools are fixed. BUG=http://b/192589496
* Move NumSubgraphOperators to flatbuffer_utils.h/cc.
* Fix the CI errors.
* Remove #define FLATBUFFERS_LOCALE_INDEPENDENT 0; we will handle that via the Makefile instead.
* Add -DFLATBUFFERS_LOCALE_INDEPENDENT=0 to the makefile.
* Remove references to flatbuffer from kernel_utils.cc/h.
* Fix the build.
* Run buildifier.
Co-authored-by: Advait Jain <advaitjain@google.com> Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Submitted by yair_ehrenwald
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Submitted by yair_ehrenwald
* Fixed scratch buffer extern declaration in ceva_common.h.
* Add optimized depthwise conv for BX1 and SP500.
* Fixed year in copyright notice.
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Submitted by Jens Elofsson
-
- 09 July 2021, 5 commits
-
-
Submitted by Advait Jain
-
Submitted by Shlomi Regev
- Save ~4KB of RAM by implementing a lightweight version of the vector, which only accesses integer values, without implicit conversion.
- Change Micro kernels to access flexbuffers as vectors instead of maps, which is recommended by the flexbuffers doc for efficiency. Since the values in the vector are ordered alphabetically by their keys, the kernels can access them by index instead.
- Revert detection_postprocess to the native flexbuffers API. The LiteVector API doesn't support IsNull(), and I prefer not to support two custom flexbuffer APIs.
Co-authored-by: Nat Jeffries <natmjeffries@gmail.com>
-
Submitted by Advait Jain
BUG=http://b/192614484
-
Submitted by Advait Jain
-
Submitted by Jens Elofsson
* Remove MICROLITE_CC_KERNELS_SRCS from the MICROLITE_CC_SRCS list. This stops the kernel sources from being compiled into both the core objects and the kernel objects.
* Add kernel sources to list_library_sources.
* Fix project generation presubmit.
Co-authored-by: Nat Jeffries <natmjeffries@gmail.com>
-
- 08 July 2021, 2 commits
-
-
Submitted by Måns Nilsson
Change-Id: I23e7425445fd8fdb37be0ebb43cb9c8529ceb368 Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Submitted by yair_ehrenwald
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
- 07 July 2021, 1 commit
-
-
Submitted by TFLM-bot
-
- 02 July 2021, 1 commit
-
-
Submitted by Nat Jeffries
Previously, some sources were built before THIRD_PARTY_TARGETS was downloaded, meaning dependencies in those sources on flatbuffers, gemmlowp, etc. could be missed. An alternative to this "dependency for every source" approach could be to add $(THIRD_PARTY_TARGETS) as the first dependency in $(MICROLITE_LIB_PATH), but this relies on Make dependency ordering, which is ill-advised. Background: https://stackoverflow.com/questions/9159960/order-of-processing-components-in-makefile
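The "dependency for every source" approach described above can be sketched as a makefile fragment (the pattern rule and object-directory names are illustrative, not the actual TFLM makefile):

```make
# Illustrative sketch: every object file depends on the third-party
# downloads, so no source can compile before the flatbuffers/gemmlowp
# headers exist on disk.
$(OBJDIR)/%.o: %.cc $(THIRD_PARTY_TARGETS)
	$(CXX) $(CXXFLAGS) -c $< -o $@

# The fragile alternative: listing $(THIRD_PARTY_TARGETS) first among the
# library's prerequisites. Make does not guarantee prerequisite build order,
# especially under parallel builds (-j), so this can still race.
# $(MICROLITE_LIB_PATH): $(THIRD_PARTY_TARGETS) $(OBJS)
```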
-
- 30 June 2021, 2 commits
-
-
Submitted by Nat Jeffries
Lots of large models have >50 ops, so upping the max number to 1024 seems reasonable. Most TFLM models should not exceed 1024 ops.
-
Submitted by Nat Jeffries
-
- 29 June 2021, 1 commit
-
-
Submitted by TFLM-bot
-
- 26 June 2021, 2 commits
-
-
Submitted by Nat Jeffries
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Submitted by Nat Jeffries
-