- 12 May 2022, 1 commit
-
-
Submitted by deqiangc

* Handle the relative path in generate_cc_arrays.py

  This corresponds to internal cl/447815949. This change handles a relative input path to generate_cc_arrays.py.

  BUG=https://b/199442906

* Remove obsolete print

Co-authored-by: Pauline Sho <psho@google.com>
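The commit above is about handling relative input paths. As a minimal illustration of the kind of normalization such a script might perform (the helper name is hypothetical, not the script's actual code):

```python
import os

def resolve_input_path(path, base_dir=None):
    """Hypothetical helper: anchor a relative path at base_dir (or the
    current working directory) and return a normalized path."""
    if not os.path.isabs(path):
        path = os.path.join(base_dir or os.getcwd(), path)
    return os.path.normpath(path)
```

An absolute path passes through unchanged, while a relative one is anchored and cleaned up (e.g. `a/../b/model.cc` under `/data` becomes `/data/b/model.cc`).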
-
- 11 May 2022, 3 commits
-
-
Submitted by Niranjan Yadla

* Cadence: PyTorch (mobilenet_v2) to TFLite quantized

  Described two methods to convert from PyTorch to TFLite:
  1. Using the TinyNN tool
  2. Using PyTorch to ONNX to TF (saved_model) to TFLite (float32) / TFLite (int8 quantized)

* Update pytorch_to_tflite_test.cc
* Update pytorch_images_dog_jpg.cc
* Remove unnecessary space
* Change HIFI4 and HIFI4_INTERNAL #ifndef locations and apply clang-format

Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Submitted by Måns Nilsson

Change-Id: I5d52a728dbb22a3bf838834948e4f0cf8fa6ba41
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by TFLM-bot

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
- 10 May 2022, 5 commits
-
-
Submitted by Michael Brooks

* Add Uint8 Dequantize

  While TFLM still doesn't allow uint8 quantization (https://github.com/tensorflow/tflite-micro/issues/216), there are use cases where having uint8 for quantize/dequantize/requantize can enable legacy models. Restore support for uint8 dequantize.

  Tested: dequantize_test, running TF1.x SSD models on Coral Dev Board Micro

  BUG=b/213518048

Co-authored-by: Pauline Sho <psho@google.com>
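Dequantization itself is simple affine math; a plain-Python sketch of the standard TFLite scheme, real = scale * (q - zero_point), with illustrative scale and zero-point values (not taken from any particular model):

```python
def dequantize(q_values, scale, zero_point):
    """Map quantized values (e.g. uint8) back to floats:
    real = scale * (q - zero_point)."""
    return [scale * (q - zero_point) for q in q_values]

# Illustrative uint8 parameterization: zero_point in the middle of [0, 255].
reals = dequantize([0, 128, 255], scale=0.5, zero_point=128)
```

With these values, the uint8 endpoints 0 and 255 map to -64.0 and 63.5, and the zero point maps exactly to 0.0.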
-
Submitted by Pauline Sho

* Add support for quantize (uint8<->int8)

  BUG=b/213518048

* Sync from upstream TF. (#1095)
* Automated size log update (#1100)
* Automated size log update (#1103)
* Improved quantize tests with more meaningful parameters.

  Tested with converter flags

  ```
  converter.inference_input_type = tf.uint8
  converter.inference_output_type = tf.int8
  ```

  and

  ```
  converter.inference_input_type = tf.int8
  converter.inference_output_type = tf.uint8
  ```

  and verified that the correct quantization was invoked.

* Remove extra copy of test and add comments
* Replaced logging with MicroPrintf
* Fixed MicroPrintf typo
* Exclude XTENSA in unit tests

Co-authored-by: TFLM-bot <tflm-github-bot@google.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
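When uint8 and int8 tensors share the same scale, they differ only by a zero-point offset of 128, so the uint8<->int8 conversion described above amounts to a constant shift. A hedged sketch of that arithmetic (not the kernel's actual code):

```python
def uint8_to_int8(q):
    # Shift the representable range [0, 255] down to [-128, 127].
    return q - 128

def int8_to_uint8(q):
    # Inverse shift: [-128, 127] back up to [0, 255].
    return q + 128
```

The round trip is lossless, which is why this requantization needs no rescaling when the scale is shared.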
-
Submitted by Pauline Sho

BUG=b/232005813

Bias tensors can be optional for certain ops. The existing GetTensorData API is modified to assert upon receiving a nullptr; optional tensors should now be retrieved via GetOptionalTensorData. The new API makes it explicit that a tensor is optional, so callers must check its return value. Only reference kernels are updated in this PR.

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
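The API split described above, a strict getter that rejects null versus an optional getter whose return value must be checked, can be mimicked in a short sketch. The names mirror the TFLM ones, but this is a language-neutral analogy, not the actual C++ implementation:

```python
def get_tensor_data(tensor):
    """Strict accessor: a required tensor must be present."""
    assert tensor is not None, "required tensor is missing"
    return tensor["data"]

def get_optional_tensor_data(tensor):
    """Optional accessor: callers must handle the None case (e.g. no bias)."""
    return None if tensor is None else tensor["data"]
```

A kernel would call the strict accessor for inputs it requires and the optional one for an omissible bias, branching on the None result.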
-
Submitted by Ting Yan

* Add float tests for LSTM
* Format test code properly
* Zero state and output buffers
* Add test for batch-major input/output
* Add hybrid tests
* RISC-V build fails on std::round(); replace with round()
* Minor formatting changes suggested by comments
* Refactor tests to use test config structs
* Remove unused const variable
* Put test config data into separate source/header files
* Add bazel build file
-
Submitted by psharath1

* Convolution padding bug fix

  Fixed an issue in padding for convolution, removed the dependency on XTOS, and restructured the code.

* Update conv_vision.cc

  Fixed formatting issue.

  BUG=

Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
- 09 May 2022, 4 commits
-
-
Submitted by TFLM-bot

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by TFLM-bot

Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Submitted by deqiangc

Implement INonPersistentBufferAllocator on an arena that is dedicated for non-persistent buffers (#1090)

* Implement INonPersistentBufferAllocator on an arena dedicated to non-persistent buffers

  This is one step towards supporting separate arenas for non-persistent and persistent buffers.

  BUG=https://b/226971240

* Address review comments

Co-authored-by: Pauline Sho <psho@google.com>
-
Submitted by hmogensen-arm
-
- 06 May 2022, 3 commits
-
-
Submitted by TFLM-bot
-
Submitted by TFLM-bot

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by TFLM-bot
-
- 05 May 2022, 1 commit
-
-
Submitted by Ting Yan
-
- 04 May 2022, 3 commits
-
-
Submitted by Steven Toribio
-
Submitted by deqiangc

This is the public part of cl/446104622

BUG=https://b/231266362

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by Pauline Sho

* Enable -Wmissing-field-initializers and update all registration references

  In https://github.com/tensorflow/tflite-micro/pull/1082/, we disabled the compiler error as a workaround for a successful upstream sync from TFLite. This PR re-enables the error after addressing all errors. A new `RegisterOp` API is added, and all prior TfLiteRegistration struct initializations are updated to use the new API. At a minimum, this ensures that such additions in the future will only require an update to RegisterOp if TFLM doesn't use that field.

  BUG=b/230507399

* Added new RegisterOpWithFree for LSTM
* Remove LSTM Free
* Changed inline to normal function to save code size
* Replace cmsis_nn static initialization with RegisterOp
* Removed extra code
* Add dropped namespace in hard_swish
-
- 03 May 2022, 4 commits
-
-
Submitted by TFLM-bot
-
Submitted by Fredrik Knutsson

* Add Arm IP landing page readme

  Plus re-phrasing and spelling fixes in other Arm READMEs.

  Change-Id: I95b43850026b1fa45285fe044cf0ef5c119c114d

* Add clarification as per code review comment

  Change-Id: I650c7ac21f2385339b607f49bf309fc51298de2f

* Adding Arm as subchapter, as per review comment.

  Change-Id: Ida7598a6f561fb25bde968632551890712c7a557

Co-authored-by: Pauline Sho <psho@google.com>
-
Submitted by Trinity Lundgren

* Add conv kernel test with 3x3 filter, 2x2 stride, and 4x4 EVEN-size input (padding 'same').
* Add conv kernel test with 3x3 filter, 2x2 stride, and 5x5 ODD-size input (padding 'same').

Both tests pass for the TFLM reference kernel for conv when run with:

    $ make -f tensorflow/lite/micro/tools/make/Makefile \
        test_kernel_conv_test

When run for the xtensa kernel with:

    $ make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa \
        TARGET_ARCH=vision_p6 OPTIMIZED_KERNEL_DIR=xtensa \
        XTENSA_CORE=${VP6_CORE} test_kernel_conv_test

the 4x4 even-size input fails and the 5x5 odd-size input passes. Note that the 4x4 test is currently excluded from the Xtensa tests by an include guard; it must be included to reproduce the failure.

BUG=b/228102789

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by TFLM-bot
-
- 30 Apr 2022, 2 commits
-
-
Submitted by Ting Yan

* Correct LSTM test bug with wrong output dimension
* Unroll output dimension checks
-
Submitted by Pauline Sho

* Fix outdated reference to int64 in comment
* Add support for conv 16x8 int32 bias

  Currently, the 16x8 conv op implicitly assumes int64 biases. This behavior ignores the converter option `_experimental_full_integer_quantization_bias_type`. The actual bias size/type can be retrieved from the model via `GetEvalInput()->type`. Supporting int32 instead of int64 is expected to lower latency on average.

  Verified that bias->type correctly reflects the converter option set during conversion. Replicated basic unit tests for int32 biases.

  BUG=b/230125424

* Address PR comments
* Added missing include header
* Revert TFLite changes
* Disable tests for CMSIS_NN because 32-bit bias isn't supported there yet

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
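In a 16x8 conv, the inner loop accumulates int16 activations times int8 weights and then adds the bias; the commit above lets that bias be int32 rather than the implicitly assumed int64. A plain-Python sketch of the accumulation with illustrative values (not the kernel's actual code; the integer math is identical either way, only the declared bias width differs):

```python
def conv_accumulate(inputs_int16, weights_int8, bias):
    """Dot product plus bias, as a 16x8 conv inner loop would compute it.
    `bias` may be int32 or int64 in the real kernel; the sum is the same."""
    acc = sum(x * w for x, w in zip(inputs_int16, weights_int8))
    return acc + bias
```

The latency benefit comes from the narrower bias load and add on 32-bit targets, not from any change to the result.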
-
- 29 Apr 2022, 1 commit
-
-
Submitted by TFLM-bot
-
- 28 Apr 2022, 4 commits
-
-
Submitted by TFLM-bot
-
Submitted by deqiangc

Enable five-dimension support in concatenation and export max size in runtime_shape.h to public (#1080)

* Enable five-dimension support in concatenation and export max size in runtime_shape.h to public

  There are models that use the concatenation op to combine 5-dimension tensors and then use reshape to cast them to a lower dimension.

  BUG=https://b/230028089

* Rename kmaxDimension to kMaxSmallSize

Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
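Concatenation along an axis generalizes to five dimensions without any new math: only the dimension along the chosen axis grows, all others must match. A recursive nested-list sketch of the semantics (an illustration, not the kernel's implementation):

```python
def concat(a, b, axis):
    """Concatenate two nested lists of equal shape along `axis`.
    Axis 0 is the outermost dimension; deeper axes recurse inward."""
    if axis == 0:
        return a + b
    return [concat(x, y, axis - 1) for x, y in zip(a, b)]
```

For 5-D tensors this recurses at most four levels before joining the sublists, which is why raising the supported rank is mostly a matter of lifting the max-dimension constant, as the commit does.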
-
Submitted by Ting Yan

* Copy LSTM related files from TFLite with no changes
* Format using clang-format
* Fix cpplint issues
* Add LSTM files to Makefile
* Remove unnecessary __restrict__
* Remove use of Eigen library
* Remove tensor_utils_test.cc
* Remove LSTM test files
* Remove inclusion of cpu_backend_context.h
* Remove inclusion of kernel_utils.h
* Add inclusion of portable_tensor_utils.h
* Remove LINT directives
* Remove references to CpuBackendContext
* Replace TF_LITE_ASSERT with TFLITE_DCHECK
* Include vector library explicitly
* Explicitly specify float
* Fix local variable shadow issue
* Fix Register_UNIDIRECTIONAL_SEQUENCE_LSTM()
* Rename included header files using C++ convention
* Change int32 to int32_t
* Replace vector with local array
* Replace std::fill_n() with memset()
* Replace std::copy_n() with std::memcpy()
* Allocate pre-calculated biases in persistent memory
* Change vector scales to pointer, nullptr for now
* Replace Get* with AllocateTemp* and DeallocateTemp*
* Use AllocateTempIntermediateTensor() to access intermediates
* Remove GetTensorScale()
* Use eval tensors
* Use pointers to buffer instead of TfLiteTensors
* Use persistent buffer for row sums
* Update temporary tensor enum
* Replace scratch_tensor_index with a scratch index array
* Remove output resizing and code related to temporaries
* Replace temporary tensors with scratch buffers
* Use scratch buffer for scales
* Flatten namespaces
* Change QuantizeMultiplier's third parameter to compile
* Explicitly convert from float to double
* Fix int/int32_t incompatibility issue
* Move tensor_utils.h into micro/kernels/ directory
* Check against nullptr before calling GetTensorData()
* Add test for lstm_eval.cc
* Change int32_t to int to avoid compiler issues
* Fix code style issue
* Disable lstm_eval tests for xtensa
* Change enum literals from kLstm* to k*
* Copy portable_tensor_utils.h into micro/kernels
* Initialize activation to correct test failures
* Enable test for lstm_eval in BUILD
* Remove TODO without bug specified
* Replace TF_LITE_KERNEL_LOG() with MicroPrintf()
* Remove references to ruy
* Create micro's own copy of tensor utils
* Replace assert with TFLITE_ASSERT_FALSE
* Remove TODO with no bug and change file position
* Use TF_LITE_MICRO_* macros for tests
* Add nullptr check before deallocate temp
* Move Register_UNIDIRECTIONAL_SEQUENCE_LSTM() to flattened namespace
* Add unit tests for integer unidirectional sequence LSTM
* Format test and explicitly convert float to double
* Fix unused variables for xtensa
* Fix miscellaneous issues raised by review comments
* Fix xtensa build issue

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by 嵌入式AIoT
-
- 27 Apr 2022, 2 commits
- 26 Apr 2022, 2 commits
-
-
Submitted by TFLM-bot
-
Submitted by deqiangc

* Move simple_memory_allocator to a new folder
* Fix makefile system
* Fix format
* Fix a typo in Makefile that caused project generation failure

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
- 25 Apr 2022, 1 commit
-
-
Submitted by TFLM-bot
-
- 22 Apr 2022, 2 commits
- 21 Apr 2022, 2 commits
-
-
Submitted by jwithers

Co-authored-by: Pauline Sho <psho@google.com>
-
Submitted by cad-audio

Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-