- 25 May 2022 (1 commit)
-
-
Committed by TFLM-bot
-
- 19 May 2022 (2 commits)
-
-
Committed by Ting Yan
* Explicitly convert to float and change variable names
* Change variable name to accurately reflect type
-
Committed by Trinity Lundgren
Issue fixed in 5-17-22 xi_tflmlib_vision_p6
BUG=b/228102789
-
- 18 May 2022 (2 commits)
-
-
Committed by deqiangc
This corresponds to cl/9257066. This allows some ADD kernel optimization variants.
BUG=https://b/231368244
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
-
- 17 May 2022 (2 commits)
-
-
Committed by psharath1
* BUG= Added support for ReduceMax on Vision P6
* BUG= Updated for namespace use
* BUG= Fixed build issue for non-VP6 cores
* Updated as per the review comments
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Committed by Chris Kuo
-
- 14 May 2022 (3 commits)
-
-
Committed by Måns Nilsson
* Quantization-specific registration for CMSIS-NN
  Adds three specific registrations for the softmax kernel:
  - Pure int8.
  - Pure int16.
  - One with int8 input and int16 output.
  To avoid duplicating code, CalculateSoftmaxParams is made public.
  Change-Id: I51de85e85f3bfb7a2d936593bd6512f263e6e5e4
* Add helper function InitializeLutForInt16
* Do not inline helper function
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
-
- 13 May 2022 (2 commits)
-
-
Committed by silabs-elsa
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Committed by Ting Yan
-
- 12 May 2022 (9 commits)
-
-
Committed by Trinity Lundgren
BUG=b/228102789
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by deqiangc
* Add additional visibility to micro_test
  BUG=https://b/232264277
* Fix typo
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Trinity Lundgren
BUG=b/228102789
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Ting Yan
* Copy squared_difference* from TFLite verbatim
* Remove TFLite-specific contents and flatten namespaces
* Complete porting squared difference to micro
* Updated based on comments
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Måns Nilsson
* Quantization-specific registration for CMSIS-NN fully connected int16
  Adds int16 support, an int16-specific registration, and a unit test (ported from TFL) for CMSIS-NN fully connected.
  Change-Id: I03fed7eef1880c0796785791225e618322c51642
* Add int16 support to fully_connected reference kernel
* CMSIS-NN: Add PopulateCommonParams in FC
* Fix formatting
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Måns Nilsson
* Quantization-specific registration for CMSIS-NN
  Adds int8- and int16x8-specific registrations for the depthwise conv kernel.
  Change-Id: I649c65f642777b310318cbb535bec69e41d20314
* CMSIS-NN: Initialize common DW conv parameters in macro
* Switch from macro to function
* Fix automatic merge issue
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by psharath1
* Support for Fully Connected on Vision P6
  BUG= Added support for the Fully Connected operation on Vision P6
* Update xtensa.inc
  BUG= Added missing file in makefile
* Update fully_connected_vision.cc
  BUG= Fixed compilation issue
* Update fully_connected_vision.cc
  BUG= Updated review comments by RJ
* Update fully_connected_vision.cc
  Updated review comments
* Update fully_connected.cc
  Updated as per review comments
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Committed by deqiangc
* Handle the relative path in generate_cc_arrays.py
  This corresponds to internal cl/447815949. This change handles a relative input path to generate_cc_arrays.py.
  BUG=https://b/199442906
* Remove obsolete print
Co-authored-by: Pauline Sho <psho@google.com>
-
- 11 May 2022 (3 commits)
-
-
Committed by Niranjan Yadla
* Cadence: PyTorch (mobilenet_v2) to quantized TFLite
  Described two methods to convert from PyTorch to TFLite:
  1. Using the TinyNN tool
  2. Using PyTorch to ONNX to TF (saved_model) to tflite (float32) / tflite (int8 quantized)
* Update pytorch_to_tflite_test.cc
* Update pytorch_images_dog_jpg.cc
* Remove unnecessary space
* Change HIFI4 and HIFI4_INTERNAL #ifndef locations and apply clang-format
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Committed by Måns Nilsson
Change-Id: I5d52a728dbb22a3bf838834948e4f0cf8fa6ba41
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
- 10 May 2022 (5 commits)
-
-
Committed by Michael Brooks
* Add uint8 Dequantize
  While TFLM still doesn't allow uint8 quantization (https://github.com/tensorflow/tflite-micro/issues/216), there are use cases where having uint8 for quantize/dequantize/requantize can enable legacy models. Restore support for uint8 dequantize.
  Tested: dequantize_test, running TF1.x SSD models on Coral Dev Board Micro
  BUG=b/213518048
Co-authored-by: Pauline Sho <psho@google.com>
-
Committed by Pauline Sho
* Add support for quantize (uint8 <-> int8)
  BUG=b/213518048
* Sync from upstream TF. (#1095)
* Automated size log update (#1100)
* Automated size log update (#1103)
* Improved quantize tests with more meaningful parameters. Tested with converter flags
  ```
  converter.inference_input_type = tf.uint8
  converter.inference_output_type = tf.int8
  ```
  and
  ```
  converter.inference_input_type = tf.int8
  converter.inference_output_type = tf.uint8
  ```
  and verified that the correct quantization was invoked.
* Remove extra copy of test and add comments
* Replaced logging with MicroPrintf
* Fixed MicroPrintf typo
* Exclude XTENSA in unit tests
Co-authored-by: TFLM-bot <tflm-github-bot@google.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Committed by Pauline Sho
BUG=b/232005813
Bias tensors can be optional for certain ops. The existing GetTensorData API is modified to assert upon receiving a nullptr; optional tensors should now be retrieved via GetOptionalTensorData. The new API makes it explicit that a tensor is optional, so callers should check its return value. Only reference kernels are updated in this PR.
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Ting Yan
* Add float tests for LSTM
* Format test code properly
* Zero state and output buffers
* Add test for batch-major input/output
* Add hybrid tests
* RISC-V build fails on std::round(); replace with round()
* Minor formatting changes suggested by comments
* Refactor tests to use test config structs
* Remove unused const variable
* Put test config data into separate source/header files
* Add bazel build file
-
Committed by psharath1
* Convolution padding bug fix
  Fixed an issue in padding for convolution; removed dependency on XTOS; restructured the code.
* Update conv_vision.cc
  BUG= Fixed formatting issue
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
- 09 May 2022 (4 commits)
-
-
Committed by TFLM-bot
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Committed by deqiangc
Implement INonPersistentBufferAllocator on an arena that is dedicated to non-persistent buffers (#1090)
* Implement INonPersistentBufferAllocator on an arena dedicated to non-persistent buffers. This is one step towards supporting separate arenas for non-persistent and persistent buffers.
  BUG=https://b/226971240
* Address review comments
Co-authored-by: Pauline Sho <psho@google.com>
-
Committed by hmogensen-arm
-
- 06 May 2022 (3 commits)
-
-
Committed by TFLM-bot
-
Committed by TFLM-bot
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
-
- 05 May 2022 (1 commit)
-
-
Committed by Ting Yan
-
- 04 May 2022 (3 commits)
-
-
Committed by Steven Toribio
-
Committed by deqiangc
This is the public part of cl/446104622.
BUG=https://b/231266362
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Pauline Sho
* Enable -Wmissing-field-initializers and update all registration references
  In https://github.com/tensorflow/tflite-micro/pull/1082/, we disabled the compiler error as a workaround for a successful upstream sync from TFLite. This PR re-enables the error after addressing all errors. A new `RegisterOp` API is added, and all prior TfLiteRegistration struct initializations are updated to use the new API. At a minimum, this ensures that such field additions in the future will only require an update to RegisterOp if TFLM doesn't use that field.
  BUG=b/230507399
* Added new RegisterOpWithFree for LSTM
* Remove LSTM Free
* Changed inline to normal function to save code size
* Replace cmsis_nn static initialization with RegisterOp
* Removed extra code
* Add dropped namespace in hard_swish
-