1. 12 May 2022, 1 commit
  2. 11 May 2022, 3 commits
  3. 10 May 2022, 5 commits
  4. 09 May 2022, 4 commits
  5. 06 May 2022, 3 commits
  6. 05 May 2022, 1 commit
  7. 04 May 2022, 3 commits
  8. 03 May 2022, 4 commits
    • Automated size log update (#1096) · 01a58c96
      TFLM-bot committed
    • Add Arm landing page readme (#1034) · 9c279d09
      Fredrik Knutsson committed
      * Add Arm IP landing page readme
      
      Plus rephrasing and spelling fixes in other Arm readmes
      
      Change-Id: I95b43850026b1fa45285fe044cf0ef5c119c114d
      
      * Add clarification as per code review comment
      
      Change-Id: I650c7ac21f2385339b607f49bf309fc51298de2f
      
      * Add Arm as a subchapter, as per review comment
      
      Change-Id: Ida7598a6f561fb25bde968632551890712c7a557
      Co-authored-by: Pauline Sho <psho@google.com>
    • Add conv kernel unit tests (#1094) · d0cdc6a8
      Trinity Lundgren committed
      * Add conv kernel test with 3x3 filter, 2x2 stride, and 4x4 EVEN size
        input (padding 'same').
      * Add conv kernel test with 3x3 filter, 2x2 stride, and 5x5 ODD size
        input (padding 'same').
      
      Both tests pass for the TFLM reference kernel for conv when run with:
      
      $ make -f tensorflow/lite/micro/tools/make/Makefile \
        test_kernel_conv_test
      
      When run for the xtensa kernel with:
      
      $ make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa \
        TARGET_ARCH=vision_p6 OPTIMIZED_KERNEL_DIR=xtensa \
        XTENSA_CORE=${VP6_CORE} test_kernel_conv_test
      
      The 4x4 even-size input fails and the 5x5 odd-size input passes.
      
      Note that the 4x4 test is currently excluded from the Xtensa tests by an
      include guard. It must be included to reproduce the failure.
      
      BUG=b/228102789
      Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    • Automated size log update (#1093) · 1c2e49b1
      TFLM-bot committed
  9. 30 Apr 2022, 2 commits
    • Fix LSTM test bug with wrong output dimension (#1091) · 10b12233
      Ting Yan committed
      * Correct LSTM test bug with wrong output dimension
      
      * Unloop output dimension checks
    • Add support for conv 16x8 int32 bias (#1076) · b52e0f65
      Pauline Sho committed
      * Fix outdated reference to int64 in comment
      
      * Add support for conv 16x8 int32 bias
      
      Currently, the 16x8 conv op implicitly assumes int64 biases. This
      behavior ignores the converter option
      `_experimental_full_integer_quantization_bias_type`. The actual bias
      size/type can be retrieved from the model via `GetEvalInput()->type`.
      Supporting int32 instead of int64 is expected to lower latency on
      average.
      
      Verified that bias->type correctly reflects the converter option set
      during conversion. Replicated basic unit tests for int32 biases.
      
      BUG=b/230125424
      
      * Address PR comments
      
      * Added missing include header
      
      * Revert TFLite changes
      
      * Disable compilation of tests for CMSIS_NN because 32-bit bias isn't supported there yet
      Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
  10. 29 Apr 2022, 1 commit
  11. 28 Apr 2022, 4 commits
    • Automated size log update (#1085) · 33362da8
      TFLM-bot committed
    • Enable five dimension support in concatenation and export max size in runtime_shape.h to public (#1080) · a2e4fe94
      deqiangc committed
      
      * Enable five dimension support in concatenation and export
      max size in runtime_shape.h to public
      
      There are models that use the concatenation op to combine 5-dimension
      tensors and then use reshape to cast the result to a lower dimension.
      
      BUG=https://b/230028089
      
      * Rename kmaxDimension to kMaxSmallSize
      Co-authored-by: Ting Yan <94130036+tingyan19@users.noreply.github.com>
      Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    • Port unidirectional sequence LSTM kernel from TFLite to TFLM (#1044) · ceac958d
      Ting Yan committed
      * Copy LSTM related files from TFLite with no changes
      
      * Format using clang-format
      
      * Fix cpplint issues
      
      * Add LSTM files to Makefile
      
      * Remove unnecessary __restrict__
      
      * Remove use of Eigen library
      
      * Remove tensor_utils_test.cc
      
      * Remove LSTM test files
      
      * Remove inclusion of cpu_backend_context.h
      
      * Remove inclusion of kernel_utils.h
      
      * Add inclusion of portable_tensor_utils.h
      
      * Remove LINT directives
      
      * Remove references to CpuBackendContext
      
      * Replace TF_LITE_ASSERT with TFLITE_DCHECK
      
      * Include vector library explicitly
      
      * Explicitly specify float
      
      * Fix local variable shadow issue
      
      * Fix Register_UNIDIRECTIONAL_SEQUENCE_LSTM()
      
      * Rename included header files using C++ convention
      
      * Change int32 to int32_t
      
      * Replace vector with local array
      
      * Replace std::fill_n() with memset()
      
      * Replace std::copy_n() with std::memcpy()
      
      * Allocate pre-calculated biases in persistent memory
      
      * Change vector scales to pointer, nullptr for now
      
      * Replace Get* with AllocateTemp* and DeallocateTemp*
      
      * Use AllocateTempIntermediateTensor() to access intermediates
      
      * Remove GetTensorScale()
      
      * Use eval tensors
      
      * Use pointers to buffer instead of TfLiteTensors
      
      * Use persistent buffer for row sums
      
      * Update temporary tensor enum
      
      * Replace scratch_tensor_index with a scratch index array
      
      * Remove output resizing and code related to temporaries
      
      * Replace temporary tensors with scratch buffers
      
      * Use scratch buffer for scales
      
      * Flatten namespaces
      
      * Change QuantizeMultiplier's third parameter to compile
      
      * Explicitly convert from float to double
      
      * Fix int/int32_t incompatibility issue
      
      * Move tensor_utils.h into micro/kernels/ directory
      
      * Check against nullptr before calling GetTensorData()
      
      * Add test for lstm_eval.cc
      
      * Change int32_t to int to avoid compiler issues
      
      * Fix code style issue
      
      * Disable lstm_eval tests for xtensa
      
      * Change enum literals from kLstm* to k*
      
      * Copy portable_tensor_utils.h into micro/kernels
      
      * Initialize activation to correct test failures
      
      * Enable test for lstm_eval in BUILD
      
      * Remove TODO without bug specified
      
      * Replace TF_LITE_KERNEL_LOG() with MicroPrintf()
      
      * Remove references to ruy
      
      * Create micro's own copy of tensor utils
      
      * Replace assert with TFLITE_ASSERT_FALSE
      
      * Remove TODO with no bug and change file position
      
      * Use TF_LITE_MICRO_* macros for tests
      
      * Add nullptr check before deallocate temp
      
      * Move Register_UNIDIRECTIONAL_SEQUENCE_LSTM() to flattened namespace
      
      * Add unit tests for integer unidirectional sequence LSTM
      
      * Format test and explicitly convert float to double
      
      * Fix unused variables for xtensa
      
      * Fix miscellaneous issues raised by review comments
      
      * Fix xtensa build issue
      Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    • 037287fc
      嵌入式AIoT committed
  12. 27 Apr 2022, 2 commits
  13. 26 Apr 2022, 2 commits
  14. 25 Apr 2022, 1 commit
  15. 22 Apr 2022, 2 commits
  16. 21 Apr 2022, 2 commits