- 17 May 2022, 1 commit
-
-
Committed by Chris Kuo
-
- 14 May 2022, 3 commits
-
-
Committed by Måns Nilsson
* Quantization-specific registration for CMSIS-NN
  Adds three specific registrations for the softmax kernel:
  - Pure int8.
  - Pure int16.
  - One with int8 input and int16 output.
  To avoid duplicating code, CalculateSoftmaxParams is made public.
  Change-Id: I51de85e85f3bfb7a2d936593bd6512f263e6e5e4
* Add helper function InitializeLutForInt16
* Do not inline helper function
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
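As an illustration of how a build might pick one of these narrower registrations, here is a minimal sketch; the Register_SOFTMAX_INT8() name and the AddSoftmax(registration) overload are assumptions based on the commit description rather than quotes from the TFLM headers.
```
// Sketch only: select the pure-int8 CMSIS-NN softmax registration so the
// int16 LUT code is not linked into an int8-only build. The registration
// name below is assumed from the commit description.
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

tflite::MicroMutableOpResolver<1> op_resolver;

TfLiteStatus RegisterOps() {
  // The default AddSoftmax() would pull in the generic registration that
  // handles every supported type combination.
  return op_resolver.AddSoftmax(tflite::Register_SOFTMAX_INT8());
}
```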
-
Committed by TFLM-bot
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
-
- 13 May 2022, 2 commits
-
-
Committed by silabs-elsa
Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Committed by Ting Yan
-
- 12 May 2022, 9 commits
-
-
Committed by Trinity Lundgren
BUG=b/228102789
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by deqiangc
* Add additional visibility to micro_test
  BUG=https://b/232264277
* Fix typo
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Trinity Lundgren
BUG=b/228102789
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Ting Yan
* Copy squared_difference* from TFLite verbatim
* Remove TFLite-specific contents and flatten namespaces
* Complete porting squared difference to micro
* Update based on comments
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Måns Nilsson
* Quantization-specific registration for CMSIS-NN fully connected int16
  Adds int16 support, an int16-specific registration, and a unit test (ported from TFL) for CMSIS-NN fully connected.
  Change-Id: I03fed7eef1880c0796785791225e618322c51642
* Add int16 support to fully_connected reference kernel
* CMSIS-NN: Add PopulateCommonParams in FC
* Fix formatting
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Måns Nilsson
* Quantization-specific registration for CMSIS-NN
  Adds int8 and int16x8 specific registrations for the depthwise conv kernel.
  Change-Id: I649c65f642777b310318cbb535bec69e41d20314
* CMSIS-NN: Initialize common DW conv parameters in a macro
* Switch from macro to function
* Fix automatic merge issue
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by psharath1
* Support for Fully Connected on Vision P6
  BUG= Added support for the Fully Connected operation on Vision P6
* Update xtensa.inc
  BUG= Added missing file in makefile
* Update fully_connected_vision.cc
  BUG= Fixed compilation issue
* Update fully_connected_vision.cc
  BUG= Updated review comments by RJ
* Update fully_connected_vision.cc
  Updated review comments
* Update fully_connected.cc
  Updated as per review comments
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Committed by deqiangc
* Handle the relative path in generate_cc_arrays.py
  This corresponds to internal cl/447815949 and handles relative input paths passed to generate_cc_arrays.py.
  BUG=https://b/199442906
* Remove obsolete print
Co-authored-by: Pauline Sho <psho@google.com>
-
- 11 May 2022, 3 commits
-
-
Committed by Niranjan Yadla
* Cadence: PyTorch (mobilenet_v2) to quantized TFLite
  Described two methods to convert from PyTorch to TFLite:
  1. Using the TinyNN tool
  2. Going from PyTorch to ONNX to TF (saved_model) to TFLite (float32) / TFLite (int8 quantized)
* Update pytorch_to_tflite_test.cc
* Update pytorch_images_dog_jpg.cc
* Remove unnecessary space
* Change HIFI4 and HIFI4_INTERNAL #ifndef locations and apply clang-format
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Committed by Måns Nilsson
Change-Id: I5d52a728dbb22a3bf838834948e4f0cf8fa6ba41
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
- 10 May 2022, 5 commits
-
-
Committed by Michael Brooks
* Add uint8 Dequantize
  While TFLM still doesn't allow uint8 quantization (https://github.com/tensorflow/tflite-micro/issues/216), there are use cases where having uint8 for quantize/dequantize/requantize can enable legacy models. Restore support for uint8 dequantize.
  Tested: dequantize_test, running TF1.x SSD models on Coral Dev Board Micro
  BUG=b/213518048
Co-authored-by: Pauline Sho <psho@google.com>
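For reference, the restored kernel's job is the usual affine dequantization, real_value = scale * (quantized_value - zero_point); a minimal sketch of that math for uint8 follows, with names that are illustrative rather than taken from the TFLM sources.
```
// Minimal sketch of affine uint8 dequantization; illustrative only, not the
// TFLM kernel itself.
#include <cstddef>
#include <cstdint>

void DequantizeUint8(const uint8_t* input, float* output, size_t count,
                     float scale, int32_t zero_point) {
  for (size_t i = 0; i < count; ++i) {
    // real_value = scale * (quantized_value - zero_point)
    output[i] = scale * (static_cast<int32_t>(input[i]) - zero_point);
  }
}
```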
-
Committed by Pauline Sho
* Add support for quantize (uint8 <-> int8)
  BUG=b/213518048
* Sync from upstream TF. (#1095)
* Automated size log update (#1100)
* Automated size log update (#1103)
* Improved quantize tests with more meaningful parameters.
  Tested with converter flags
  ```
  converter.inference_input_type = tf.uint8
  converter.inference_output_type = tf.int8
  ```
  and
  ```
  converter.inference_input_type = tf.int8
  converter.inference_output_type = tf.uint8
  ```
  and verified that the correct quantization was invoked.
* Remove extra copy of test and add comments
* Replaced logging with MicroPrintf
* Fixed MicroPrintf typo
* Exclude XTENSA in unit tests
Co-authored-by: TFLM-bot <tflm-github-bot@google.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Committed by Pauline Sho
BUG=b/232005813
Bias tensors can be optional for certain ops. The existing GetTensorData API is modified to assert upon receiving a nullptr; optional tensors should now be retrieved via GetOptionalTensorData. The new API makes it explicit that a tensor is optional, so callers should check its return value. Only reference kernels are updated in this PR.
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
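A minimal sketch of the calling pattern this implies in a reference kernel follows; the tensor index and helper signatures are assumptions for illustration, not quotes from the updated kernels.
```
// Sketch: fetch an optional bias via GetOptionalTensorData and branch on the
// result instead of letting GetTensorData assert on a nullptr. The bias index
// and int32 data type are assumed for illustration.
#include "tensorflow/lite/micro/kernels/kernel_util.h"

TfLiteStatus EvalWithOptionalBias(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteEvalTensor* bias =
      tflite::micro::GetEvalInput(context, node, /*index=*/2);
  const int32_t* bias_data =
      tflite::micro::GetOptionalTensorData<int32_t>(bias);
  if (bias_data != nullptr) {
    // ... accumulate with the bias ...
  } else {
    // ... bias-free path ...
  }
  return kTfLiteOk;
}
```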
-
Committed by Ting Yan
* Add float tests for LSTM
* Format test code properly
* Zero state and output buffers
* Add test for batch-major input/output
* Add hybrid tests
* RISC-V build fails on std::round(); replace with round()
* Minor formatting changes suggested by comments
* Refactor tests to use test config structs
* Remove unused const variable
* Put test config data into separate source/header files
* Add bazel build file
-
Committed by psharath1
* Convolution padding bug fix
  Fixed an issue in padding for convolution, removed the dependency on xtos, and restructured the code.
* Update conv_vision.cc
  BUG= Fixed formatting issue
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
- 09 May 2022, 4 commits
-
-
Committed by TFLM-bot
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Committed by deqiangc
Implement INonPersistentBufferAllocator on an arena that is dedicated to non-persistent buffers (#1090)
* Implement INonPersistentBufferAllocator on an arena that is dedicated to non-persistent buffers
  This is one step towards supporting separate arenas for non-persistent and persistent buffers.
  BUG=https://b/226971240
* Address review comments
Co-authored-by: Pauline Sho <psho@google.com>
-
Committed by hmogensen-arm
-
- 06 May 2022, 3 commits
-
-
Committed by TFLM-bot
-
Committed by TFLM-bot
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by TFLM-bot
-
- 05 May 2022, 1 commit
-
-
Committed by Ting Yan
-
- 04 May 2022, 3 commits
-
-
Committed by Steven Toribio
-
Committed by deqiangc
This is the public part of cl/446104622.
BUG=https://b/231266362
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Committed by Pauline Sho
* Enable -Wmissing-field-initializers and update all registration references
  In https://github.com/tensorflow/tflite-micro/pull/1082/, we disabled this compiler error as a workaround for a successful upstream sync from TFLite. This PR re-enables the error after addressing all instances. A new `RegisterOp` API is added, and all prior TfLiteRegistration struct initializations are updated to use the new API. At a minimum, this ensures that future field additions will only require an update to RegisterOp if TFLM doesn't use the new field.
  BUG=b/230507399
* Added new RegisterOpWithFree for LSTM
* Remove LSTM Free
* Changed inline to normal function to save code size
* Changed inline to normal function to save code size
* Replace cmsis_nn static initialization with RegisterOp
* Removed extra code
* Add dropped namespace in hard_swish
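A rough sketch of the before/after pattern described here; the include path and exact RegisterOp signature are assumptions based on the commit text rather than quotes from the tree.
```
// Sketch of the registration change described above. RegisterOp(init, prepare,
// eval) is taken from the commit description; the include path is an assumption.
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/micro/kernels/kernel_util.h"  // assumed home of tflite::micro::RegisterOp

namespace {
void* Init(TfLiteContext* context, const char* buffer, size_t length) {
  return nullptr;
}
TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) { return kTfLiteOk; }
TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) { return kTfLiteOk; }
}  // namespace

// Before: an aggregate initializer such as
//   TfLiteRegistration r = {Init, /*free=*/nullptr, Prepare, Eval};
// trips -Wmissing-field-initializers whenever TFLite grows TfLiteRegistration.
// After: the helper fills in every field, so a new field only needs a change
// inside RegisterOp itself. Register_MY_OP is a placeholder name.
TfLiteRegistration Register_MY_OP() {
  return tflite::micro::RegisterOp(Init, Prepare, Eval);
}
```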
-
- 03 May 2022, 4 commits
-
-
Committed by TFLM-bot
-
Committed by Fredrik Knutsson
* Add Arm IP landing page readme
  Plus re-phrasing and spelling fixes in other Arm readmes.
  Change-Id: I95b43850026b1fa45285fe044cf0ef5c119c114d
* Add clarification as per code review comment
  Change-Id: I650c7ac21f2385339b607f49bf309fc51298de2f
* Add Arm as a subchapter, as per review comment.
  Change-Id: Ida7598a6f561fb25bde968632551890712c7a557
Co-authored-by: Pauline Sho <psho@google.com>
-
Committed by Trinity Lundgren
* Add conv kernel test with 3x3 filter, 2x2 stride, and 4x4 EVEN-size input (padding 'same').
* Add conv kernel test with 3x3 filter, 2x2 stride, and 5x5 ODD-size input (padding 'same').
Both tests pass for the TFLM reference conv kernel when run with:
  $ make -f tensorflow/lite/micro/tools/make/Makefile test_kernel_conv_test
When run for the xtensa kernel with:
  $ make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa TARGET_ARCH=vision_p6 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_CORE=${VP6_CORE} test_kernel_conv_test
the 4x4 even-size input fails and the 5x5 odd-size input passes. Note that the 4x4 test is currently excluded from the Xtensa tests by an include guard; it must be included to reproduce the failure.
BUG=b/228102789
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
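As background on why the even and odd input sizes can behave differently, the standard 'same' padding arithmetic yields asymmetric padding only for the 4x4 case; a small worked sketch (not taken from the kernel sources) follows.
```
// Worked 'same' padding arithmetic for the two test shapes; asymmetric
// (0, 1) padding arises only for the 4x4 even-size input.
#include <algorithm>
#include <cstdio>
#include <initializer_list>

int main() {
  const int filter = 3, stride = 2;
  for (int in : {4, 5}) {
    const int out = (in + stride - 1) / stride;  // ceil(in / stride)
    const int pad_total = std::max((out - 1) * stride + filter - in, 0);
    const int pad_before = pad_total / 2;
    const int pad_after = pad_total - pad_before;
    // in=4 -> out=2, pad=(0,1); in=5 -> out=3, pad=(1,1)
    std::printf("in=%d out=%d pad=(%d,%d)\n", in, out, pad_before, pad_after);
  }
  return 0;
}
```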
-
Committed by TFLM-bot
-
- 30 April 2022, 2 commits
-
-
Committed by Ting Yan
* Correct LSTM test bug with wrong output dimension
* Unloop output dimension checks
-
Committed by Pauline Sho
* Fix outdated reference to int64 in comment
* Add support for conv 16x8 with int32 bias
  Currently, the 16x8 conv op implicitly assumes int64 biases. This behavior ignores the converter option `_experimental_full_integer_quantization_bias_type`. The actual bias size/type can be retrieved from the model via `GetEvalInput()->type`. Supporting int32 instead of int64 is expected to lower latency on average.
  Verified that bias->type correctly reflects the converter option set during conversion. Replicated basic unit tests for int32 biases.
  BUG=b/230125424
* Address PR comments
* Added missing include header
* Revert TFLite changes
* Exclude tests for CMSIS-NN because 32-bit bias isn't supported there yet
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
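A sketch of the type dispatch this implies in the conv kernel follows; the tensor index and helper usage are assumptions for illustration rather than quotes from the change.
```
// Sketch: pick the bias accumulator width from the bias tensor's runtime type
// instead of hard-coding int64. Index and helper usage are illustrative.
#include "tensorflow/lite/micro/kernels/kernel_util.h"

TfLiteStatus EvalConv16x8(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteEvalTensor* bias =
      tflite::micro::GetEvalInput(context, node, /*index=*/2);
  if (bias == nullptr || bias->type == kTfLiteInt32) {
    // int32 bias path (the no-bias case is folded in here for brevity).
    // ... run the int16x8 reference conv with int32_t bias data ...
  } else if (bias->type == kTfLiteInt64) {
    // Legacy int64 bias path.
    // ... run the int16x8 reference conv with int64_t bias data ...
  } else {
    return kTfLiteError;
  }
  return kTfLiteOk;
}
```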
-