- 07 June 2022 (1 commit)
-
-
Submitted by Advait Jain

* Updated from upstream TF
* Manually copied changes from http://cl/449226814 and http://cl/449192972
* Fix missing include (erroneously removed in previous commit).
* Fix the hexagon build.
-
- 04 June 2022 (2 commits)
-
-
Submitted by Pauline Sho

* First pass working with float input/output
* Numpy buffer input works. Got a sine graph for hello world output. Copyright notice
* Remove static references
* Output tensors work
* Output tensors work
* bazel test works: bazel test tensorflow/lite/micro/tools:interpreter_test --test_output=all
* Successfully built extension with Python setuptools:
    cd tensorflow/lite/micro/tools/interpreter_pypi
    python -m build
    pip install dist/example_tflm_interpreter_psho-0.0.3-cp39-cp39-linux_x86_64.whl --force-reinstall
    python tests/test-interpreter.py
  TestPyPi:
    pip uninstall example-tflm-interpreter-psho
    python -m twine upload --repository testpypi dist/example-tflm-interpreter-psho-0.0.3.tar.gz
    pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple example-tflm-interpreter-psho==0.0.3
    python tests/test-interpreter.py
* Refactored some
* Cleaned for production and tested with conv model
* Fixed formatting
* Fixed numpy header issue for Docker. Need to fix double registration issue that's somehow only present in Docker.
* Test fix with changes in ci.yml
* Added sudo
* Added nomsan and noasan tags to python test
* Added -layering-check, used py_library instead of py_binary to generate conv model, added micro namespace
* Removed extra debug code
* Added new tests
* Changed reinterpret_steal to cast
* Addressed PR comments
* size log refactor (#1156)
* Addressed PR and review comments
* adding documentation to log_binary* workflows (#1161)
* Automated binary size log update (#1160)
* Create an allocator to manage persistent arena (#1174). This PR corresponds to internal cl/452380512 and is another step towards enabling the client to have two separate memory arenas. BUG=https://b/226971240
* Manually patch due to Makefile difference
* Fix virtual environment when building Vela for Arm Ethos-U (#1177)
* Removed print statement

Co-authored-by: jwithers <jpwithers@gmail.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: TFLM-bot <tflm-github-bot@google.com>
Co-authored-by: deqiangc <86809673+deqiangc@users.noreply.github.com>
Co-authored-by: Måns Nilsson <mans.nilsson@arm.com>
-
Submitted by Ting Yan

* Remove unused scratch buffer for integer mode LSTM
* Revert changes to lstm_eval

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
- 03 June 2022 (3 commits)
-
-
Submitted by psharath1

* Added support for INT8 Pad. Fixed padding issues in conv for larger tiles.
* Reverted submits to main
* Revert "Added Support for INT8 pad". This reverts commit fdc6e1d624609041b8b743563613339948ff70da.
* Added support for INT8 Pad. Fixed padding issue in conv.
* TFLITE_DCHECK the xiPadGetMemReqd_Context() status

Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Submitted by Måns Nilsson

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by deqiangc

* Create an allocator to manage persistent arena. This PR corresponds to internal cl/452380512 and is another step towards enabling the client to have two separate memory arenas. BUG=https://b/226971240
* Manually patch due to Makefile difference
-
- 01 June 2022 (1 commit)
-
-
Submitted by TFLM-bot
-
- 27 May 2022 (1 commit)
-
-
Submitted by jwithers
-
- 26 May 2022 (3 commits)
-
-
Submitted by jwithers

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by Advait Jain

* Refactor to share code between reference and optimized kernels. The goal is to reduce code duplication, improve readability of the optimized kernels, and reduce the maintenance overhead.
* Fix bazel build (and hopefully others as well).

Co-authored-by: Pauline Sho <psho@google.com>
-
Submitted by TFLM-bot
-
- 25 May 2022 (4 commits)
-
-
Submitted by Chris Kuo

Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by Måns Nilsson

* Fix create tree script to support third party outside the downloads folder. Change-Id: I27c3004d6f7295456c659b3bb98f94a55d95eb1b
* Fix formatting

Co-authored-by: Pauline Sho <psho@google.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by Pauline Sho

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by TFLM-bot
-
- 19 May 2022 (2 commits)
-
-
Submitted by Ting Yan

* Explicitly convert to float and change variable names
* Change variable name to accurately reflect type
-
Submitted by Trinity Lundgren

- Issue fixed in the 5-17-22 xi_tflmlib_vision_p6 drop. BUG=b/228102789
-
- 18 May 2022 (2 commits)
-
-
Submitted by deqiangc

This corresponds to cl/9257066 and allows some variant ADD kernel optimizations. BUG=https://b/231368244

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by TFLM-bot
-
- 17 May 2022 (2 commits)
-
-
Submitted by psharath1

* Added support for ReduceMax on Vision P6
* Updated for namespace use
* Fixed build issue for non-VP6 cores
* Updated as per the review comments

Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Submitted by Chris Kuo
-
- 14 May 2022 (3 commits)
-
-
Submitted by Måns Nilsson

* Quantization-specific registration for CMSIS-NN. Adds three specific registrations for the softmax kernel:
  - Pure int8.
  - Pure int16.
  - One with int8 input and int16 output.
  To avoid duplicating code, CalculateSoftmaxParams is made public. Change-Id: I51de85e85f3bfb7a2d936593bd6512f263e6e5e4
* Add helper function InitializeLutForInt16
* Do not inline helper function

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by TFLM-bot

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by TFLM-bot
-
- 13 May 2022 (2 commits)
-
-
Submitted by silabs-elsa

Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
-
Submitted by Ting Yan
-
- 12 May 2022 (9 commits)
-
-
Submitted by Trinity Lundgren

BUG=b/228102789
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by deqiangc

* Add additional visibility to micro_test. BUG=https://b/232264277
* Fix typo

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by TFLM-bot

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by Trinity Lundgren

BUG=b/228102789
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by Ting Yan

* Copy squared_difference* from TFLite verbatim
* Remove TFLite-specific contents and flatten namespaces
* Complete porting squared difference to micro
* Updated based on comments

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
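For context, the SQUARED_DIFFERENCE op being ported here computes the elementwise squared difference of two tensors, with standard broadcasting. A minimal NumPy sketch of the reference semantics:

```python
import numpy as np

def squared_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Reference semantics of TFLite's SQUARED_DIFFERENCE:
    # elementwise (a - b)^2, with NumPy-style broadcasting.
    return np.square(a - b)
```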
-
Submitted by Måns Nilsson

* Quantization-specific registration for CMSIS-NN fully connected int16. Adds int16 support, an int16-specific registration, and a unit test (ported from TFL) for CMSIS-NN fully connected. Change-Id: I03fed7eef1880c0796785791225e618322c51642
* Add int16 support to the fully_connected reference kernel
* CMSIS-NN: Add PopulateCommonParams in FC
* Fix formatting

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by Måns Nilsson

* Quantization-specific registration for CMSIS-NN. Adds int8 and int16x8 specific registrations for the depthwise conv kernel. Change-Id: I649c65f642777b310318cbb535bec69e41d20314
* CMSIS-NN: Initialize common DW conv parameters in a macro
* Switch from macro to function
* Fix automatic merge issue

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by psharath1

* Added support for the Fully Connected operation on Vision P6
* Update xtensa.inc: added missing file in the makefile
* Update fully_connected_vision.cc: fixed compilation issue
* Update fully_connected_vision.cc: updated review comments by RJ
* Update fully_connected_vision.cc: updated review comments
* Update fully_connected.cc: updated as per review comments

Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Submitted by deqiangc

* Handle relative paths in generate_cc_arrays.py. This corresponds to internal cl/447815949 and handles relative input paths passed to generate_cc_arrays.py. BUG=https://b/199442906
* Remove obsolete print

Co-authored-by: Pauline Sho <psho@google.com>
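As an illustration of the kind of fix this commit describes (the helper name below is hypothetical, not the actual generate_cc_arrays.py code), handling a possibly-relative input path typically means anchoring it at a known base directory:

```python
import os

def resolve_input_path(path: str, base_dir: str) -> str:
    # Hypothetical helper: absolute paths pass through unchanged;
    # relative paths are anchored at base_dir and normalized.
    if os.path.isabs(path):
        return path
    return os.path.normpath(os.path.join(base_dir, path))
```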
-
- 11 May 2022 (3 commits)
-
-
Submitted by Niranjan Yadla

* Cadence: PyTorch (mobilenet_v2) to quantized TFLite. Described two methods to convert from PyTorch to TFLite: 1. using the TinyNN tool; 2. PyTorch to ONNX to TF (saved_model) to TFLite (float32 / int8 quantized).
* Update pytorch_to_tflite_test.cc
* Update pytorch_images_dog_jpg.cc
* Remove unnecessary space
* Change HIFI4 and HIFI4_INTERNAL #ifndef locations and apply clang-format

Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
-
Submitted by Måns Nilsson

Change-Id: I5d52a728dbb22a3bf838834948e4f0cf8fa6ba41
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
Submitted by TFLM-bot

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
-
- 10 May 2022 (2 commits)
-
-
Submitted by Michael Brooks

* Add uint8 Dequantize. While TFLM still doesn't allow uint8 quantization (https://github.com/tensorflow/tflite-micro/issues/216), there are use cases where having uint8 for quantize/dequantize/requantize can enable legacy models. Restore support for uint8 dequantize. Tested: dequantize_test, and running TF1.x SSD models on the Coral Dev Board Micro.
* Add uint8 Dequantize. BUG=b/213518048

Co-authored-by: Pauline Sho <psho@google.com>
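For context, affine dequantization in TFLite maps a quantized value q back to a real value via real = scale * (q - zero_point); with uint8 storage the zero point lies in [0, 255]. A minimal NumPy sketch of that formula:

```python
import numpy as np

def dequantize_uint8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    # Affine dequantization: real = scale * (q - zero_point).
    # Widen to int32 first so the subtraction cannot wrap in uint8.
    return scale * (q.astype(np.int32) - zero_point)
```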
-
Submitted by Pauline Sho

* Add support for quantize (uint8<->int8)
* Add support for quantize (uint8<->int8). BUG=b/213518048
* Sync from upstream TF. (#1095)
* Automated size log update (#1100)
* Automated size log update (#1103)
* Improved quantize tests with more meaningful parameters. Tested with converter flags

  ```
  converter.inference_input_type = tf.uint8
  converter.inference_output_type = tf.int8
  ```

  and

  ```
  converter.inference_input_type = tf.int8
  converter.inference_output_type = tf.uint8
  ```

  and verified that the correct quantization was invoked.
* Remove extra copy of test and add comments
* Replaced logging with MicroPrintf
* Fixed MicroPrintf typo
* Exclude XTENSA in unit tests

Co-authored-by: TFLM-bot <tflm-github-bot@google.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
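The uint8<->int8 requantize referred to here relies on the fact that when the scale is unchanged, converting between the two types is just a zero-point shift of 128, so each stored value shifts by 128 as well. A minimal NumPy sketch:

```python
import numpy as np

def uint8_to_int8(u8: np.ndarray) -> np.ndarray:
    # Same scale on both sides; the zero point shifts by 128,
    # so every stored value shifts by -128. Widen to int16 so
    # the subtraction cannot wrap before narrowing to int8.
    return (u8.astype(np.int16) - 128).astype(np.int8)

def int8_to_uint8(i8: np.ndarray) -> np.ndarray:
    # Inverse direction: shift stored values by +128.
    return (i8.astype(np.int16) + 128).astype(np.uint8)
```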
-