1. Aug 07, 2021 (2 commits)
    • Add a new test case for conv operator (#329) (#377) · 751b36c5
      deqiangc committed
      * Add a new test case for conv operator (#329)
      
      This new test case, based on issue #329, increases coverage of the
      optimized kernel's data precision for the conv operator. In this test,
      input, output, and filter are all 8-bit, and the filter tensor has
      dimensions 8x3x3x3 with a different scale per output channel.
      
      TESTED= local test with x86 and HiFi4.
      
      * Move the large test data variable into its own folder
      and use one flat top-level namespace instead of nested namespaces.
      
      BUG=195779890
    • Fixed ARC projects and library generation. (#371) · 6b8b699e
      Artem Tsvetkov committed
  2. Aug 06, 2021 (1 commit)
  3. Jul 28, 2021 (3 commits)
  4. Jul 27, 2021 (1 commit)
  5. Jul 09, 2021 (2 commits)
  6. Jul 02, 2021 (1 commit)
  7. Jun 30, 2021 (1 commit)
  8. Jun 25, 2021 (1 commit)
  9. Jun 22, 2021 (2 commits)
  10. Jun 18, 2021 (1 commit)
    • Add Arm Compiler 6 support to Corstone-300 target (#175) · f70e9b2e
      Fredrik Knutsson committed
      * Put location variables before target .inc's are included
      
      They can be useful in the target .inc
      
      Change-Id: I0ee3f77f79be272f4dc3502fb4a38017d8162fa1
      
      * Add ARMC6 compiler support for Corstone-300 target
      
        * Use fromelf instead of objcopy for armclang toolchain when
          generating a binary
        * Add ARMC6 linker and build flags to the Corstone-300 target
        * Add RETARGET macro in patch script to avoid undefined symbol
          build error for ARMC6.
        * Use a reduced set of Cortex-M CPUs for easier maintenance.
      
      Change-Id: Id9a20d57fa4fa0f1339f44523417e2dabfe7e152
      
      * Review comment - Override exit symbol only for GCC
      
      Change-Id: I93628c92ee352f36c7e7dd99351e4a73c29a8d30
      
      * Review comment - Change how to pass linker options for ARMC6
      
      Change-Id: Iacec5a6df6902bc8a14f460d63ae917039f982cc
      
      * Review comment - correct the upmerge
      
      Change-Id: I9a250f6a444336f9b95428a126a94a18f40060dc
      Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
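      The fromelf/objcopy switch above can be sketched as a Makefile fragment. TOOLCHAIN and BINDIR are illustrative names, not the exact TFLM variables:

      ```
      # Choose the ELF-to-binary converter per toolchain: fromelf ships with
      # Arm Compiler 6 (armclang), objcopy with the GNU Arm toolchain.
      ifeq ($(TOOLCHAIN), armclang)
      $(BINDIR)/%.bin: $(BINDIR)/%
      	fromelf --bin --output=$@ $<
      else
      $(BINDIR)/%.bin: $(BINDIR)/%
      	arm-none-eabi-objcopy $< $@ -O binary
      endif
      ```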
  11. Jun 17, 2021 (1 commit)
    • Explicitly use python3 from the Makefile. (#178) · 57c2576e
      Advait Jain committed
      For the Xtensa docker container, we were getting an error along the lines of "python not found". As a result, kernel specialization was happening incorrectly (i.e. the specialization was failing, since it has relied on Python since #160).
      
      #182 is likely the reason why this error passed CI for #160 but started failing afterwards.
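      A minimal sketch of the fix (the rule and script path here are hypothetical): pin the interpreter to python3 once, so containers that ship no bare python binary, such as the Xtensa one, still run the specialization step:

      ```
      # Always shell out to python3; many containers provide no plain `python`.
      PYTHON := python3

      # Hypothetical call site: any rule that invokes a Python helper script.
      generated_sources:
      	$(PYTHON) path/to/helper_script.py
      ```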
  12. Jun 16, 2021 (2 commits)
    • Separate core, kernel and third party objects. (#168) · 6b799b7d
      Nat Jeffries committed
      * Separate core, kernel and third party objects.
      
      Add a second optimization level in the Makefile to enable different
      levels between kernels and the rest of the TFLM code. This results in a
      smaller binary with minimal performance impact compared with using a
      single optimization level.
      
      This allows the use of implicit pattern rules to compile all sources,
      choosing different flags for core, kernel, and third-party sources.
      
      The following measurements are taken using the hexagon toolchain +
      hexagon-size and hexagon-sim.
      
      For the keyword benchmark using -O2:
         text    data     bss     dec
        58140   37639   46612  142391
      
        Cycles: 1700364
      
      For the keyword benchmark using -O2 for kernels and -Oz for framework:
         text    data     bss     dec
        52796   37623   46612  137031
      
        Cycles: 1759664
      
      * Make the optimization level log an error.
      
      Remove the OPTIMIZATION_LEVEL setting for bluepill, since the core framework is now automatically compiled with -Os.
      
      * Remove section that builds bluepill with -Os since default uses -Os.
      
      * Disable -Werror=vla in order to pass stm32 bare lib presubmit.
      
      * Change order so that -Wno-vla takes priority over -Wvla
      Co-authored-by: Advait Jain <advaitjain@users.noreply.github.com>
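      The two-level scheme above can be sketched as a Makefile fragment; the variable and directory names are illustrative rather than the exact TFLM definitions:

      ```
      # Optimize the core framework for size and the kernels for speed.
      CORE_OPTIMIZATION_LEVEL   := -Os
      KERNEL_OPTIMIZATION_LEVEL := -O2

      # Separate implicit pattern rules pick the right flags per source category.
      $(CORE_OBJDIR)/%.o: %.cc
      	$(CXX) $(CXXFLAGS) $(CORE_OPTIMIZATION_LEVEL) -c $< -o $@

      $(KERNEL_OBJDIR)/%.o: %.cc
      	$(CXX) $(CXXFLAGS) $(KERNEL_OPTIMIZATION_LEVEL) -c $< -o $@
      ```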
    • Add Hexagon optimized kernels. (#160) · 217ad9fc
      Advait Jain committed
      * Add Hexagon optimized kernels.
      
       * Hexagon optimized kernels copied from https://source.codeaurora.org/quic/embedded_ai/tensorflow at 2d052806c211144875c89315a4fc6f1393064cf6
       * Changed the include paths and directory structure a bit.
       * Modified Makefile to allow optimized kernels to be in a separate directory
       * Path to the Hexagon lib is now specified on the command line.
      
      Verified that the optimized kernels are properly built and linked with:
      
      ```
      make -f tensorflow/lite/micro/tools/make/Makefile TARGET=hexagon OPTIMIZED_KERNEL_DIR=hexagon OPTIMIZED_KERNEL_DIR_PREFIX=third_party HEXAGON_TFLM_LIB=~/Qualcomm/tflm_google/hexagon_tflm_core.a  -j8 run_keyword_benchmark
      ```
      
      Gives:
      ```
      KeywordRunNIerations(1) took 52608 ticks (52 ms)
      ```
      
      Whereas reference kernels with:
      ```
      make -f tensorflow/lite/micro/tools/make/Makefile TARGET=hexagon  -j8 run_keyword_benchmark
      ```
      
      Gives:
      ```
      KeywordRunNIerations(1) took 110248 ticks (110 ms)
      ```
      
      BUG=http://b/190754463
      
      * enable code style checks for third_party as well.
      
      * clang-formatted the hexagon kernels.
      
      * Rename to keep scope focused on what we currently need.
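      The invocation above relies on the build preferring an optimized kernel source over the reference one when a file of the same name exists. TFLM performs this specialization with a Python helper; a pure-make sketch of the idea (names illustrative):

      ```
      # Substitute the optimized kernel source for the reference one when the
      # optimized kernel directory provides a file with the same name.
      define specialize_kernel
      $(if $(wildcard $(OPTIMIZED_KERNEL_DIR_PREFIX)/$(OPTIMIZED_KERNEL_DIR)/$(notdir $(1))),$(OPTIMIZED_KERNEL_DIR_PREFIX)/$(OPTIMIZED_KERNEL_DIR)/$(notdir $(1)),$(1))
      endef

      KERNEL_SRCS := $(foreach src,$(REFERENCE_KERNEL_SRCS),$(call specialize_kernel,$(src)))
      ```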
  13. Jun 02, 2021 (1 commit)
    • Updated sync and use of shared TFL/TFLM code. (#104) · 23f608fd
      Advait Jain committed
      * The TFLM Makefile globs all the shared TFL/TFLM code.
       * This allows us to move the explicit list of sources and headers to the
         sync script since that is where we determine what code needs to be
         sync'd from upstream TF.
      
      With this change, we are ready to have the tflite_micro repository be
      the source of truth for all TFLM code and sync only the shared TFL/TFLM
      code from the tensorflow repo.
      
      Bug: http://b/182914089
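      Globbing the shared code instead of listing it file by file can be sketched as follows (the paths are illustrative; the authoritative file list now lives in the sync script):

      ```
      # Pick up all shared TFL/TFLM sources with wildcards instead of an
      # explicit list that would have to be kept in sync by hand.
      TFL_SHARED_SRCS := \
        $(wildcard tensorflow/lite/c/*.cc) \
        $(wildcard tensorflow/lite/kernels/internal/*.cc) \
        $(wildcard tensorflow/lite/kernels/internal/reference/*.cc)
      ```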
  14. May 27, 2021 (1 commit)
  15. May 25, 2021 (1 commit)
  16. May 23, 2021 (1 commit)
  17. May 21, 2021 (1 commit)
  18. May 06, 2021 (1 commit)
  19. May 04, 2021 (1 commit)
  20. Apr 29, 2021 (1 commit)
  21. Apr 21, 2021 (1 commit)
  22. Apr 10, 2021 (1 commit)