1. 31 January 2023 · 1 commit
  2. 21 December 2022 · 1 commit
      dnn: add the CANN backend (#22634) · a2b3acfc
      Yuantao Feng committed
      * cann backend impl v1
      
      * cann backend impl v2: use opencv parsers to build models for cann
      
      * adjust fc according to the new transA and transB
      
      * put cann net in cann backend node and reuse forwardLayer
      
      * use fork() to create a child process and compile cann model (see the sketch after this list)
      
      * remove legacy code
      
      * remove debug code
      
      * fall back to the CPU backend if any layer is not supported by the CANN backend
      
      * fix netInput forward
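
      Below is a minimal, hedged sketch of the fork-and-compile step mentioned above: compilation runs in a child process so that any global state the CANN toolchain sets up, or a crash inside it, cannot affect the parent. compileCannModel is a hypothetical stand-in, not the real helper name.

          #include <sys/wait.h>
          #include <unistd.h>

          bool compileCannModel(const char* model, const char* out); // hypothetical

          // Sketch: isolate model compilation in a forked child process.
          bool compileInChild(const char* model, const char* out) {
              pid_t pid = fork();
              if (pid < 0)
                  return false;                     // fork failed
              if (pid == 0) {
                  // Child: compile, then report success via the exit status.
                  _exit(compileCannModel(model, out) ? 0 : 1);
              }
              int status = 0;
              waitpid(pid, &status, 0);             // Parent: wait for the child.
              return WIFEXITED(status) && WEXITSTATUS(status) == 0;
          }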
  3. 20 December 2022 · 1 commit
  4. 15 December 2022 · 1 commit
  5. 29 November 2022 · 1 commit
  6. 10 September 2022 · 1 commit
  7. 06 July 2022 · 1 commit
  8. 01 April 2022 · 1 commit
  9. 10 February 2022 · 1 commit
  10. 30 November 2021 · 1 commit
      Merge pull request #20658 from smbz:lstm_optimisation · ea7d4be3
      Andrew Ryrie committed
      * dnn: LSTM optimisation
      
      This uses the AVX-optimised fastGEMM1T for matrix multiplications where available, instead of the standard cv::gemm.
      
      fastGEMM1T is already used by the fully-connected layer. This commit involves two minor modifications:
       - Use unaligned access. I don't believe this involves any performance hit on modern CPUs (Nehalem and Bulldozer onwards) in the case where the address is actually aligned.
       - Allow for weight matrices where the number of columns is not a multiple of 8.
      
      I have not enabled AVX-512 as I don't have an AVX-512 CPU to test on.
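
      A hedged sketch of the two modifications (illustrative only, not the actual fastGEMM1T kernel): unaligned loads via _mm256_loadu_ps, and a main-loop bound that tolerates a column count that is not a multiple of 8.

          #include <immintrin.h>

          // Sketch: dot product with unaligned loads; n need not be a multiple of 8.
          float dot_avx(const float* w, const float* x, int n) {
              __m256 acc = _mm256_setzero_ps();
              int i = 0;
              for (; i <= n - 8; i += 8)            // whole 8-float blocks
                  acc = _mm256_add_ps(acc, _mm256_mul_ps(_mm256_loadu_ps(w + i),
                                                         _mm256_loadu_ps(x + i)));
              float buf[8];
              _mm256_storeu_ps(buf, acc);
              float s = buf[0] + buf[1] + buf[2] + buf[3]
                      + buf[4] + buf[5] + buf[6] + buf[7];
              for (; i < n; ++i)                    // scalar tail
                  s += w[i] * x[i];
              return s;
          }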
      
      * Fix warning about initialisation order
      
      * Remove C++11 syntax
      
      * Fix build when AVX(2) is not available
      
      In this case the CV_TRY_X macros are defined to 0, rather than being undefined.
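
      For illustration, a sketch built on OpenCV's real dispatch macros: because CV_TRY_AVX2 is defined to 0 rather than left undefined when dispatch is disabled, it is tested by value, and the hardware query stays behind it.

          #include <opencv2/core.hpp>

          // Sketch: the guarded path compiles away entirely when CV_TRY_AVX2 == 0.
          bool useAvx2Path() {
          #if CV_TRY_AVX2
              return cv::checkHardwareSupport(CV_CPU_AVX2);
          #else
              return false;   // no hardware check when dispatch is disabled
          #endif
          }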
      
      * Minor changes as requested:
      
       - Don't check hardware support for AVX(2) when dispatch is disabled for these
       - Add braces
      
      * Fix out-of-bounds access in fully connected layer
      
      The old tail handling in fastGEMM1T implicitly rounded vecsize up to the next multiple of 8, and the fully connected layer implemented padding up to the next multiple of 8 to cope with this. The new tail handling does not round vecsize upwards, but it does require that vecsize is at least 8. To adapt, the fully connected layer now rounds vecsize itself at the same time as adding the padding (which makes more sense anyway).
      
      This also means that the fully connected layer always passes a vecsize of at least 8 to fastGEMM1T, which fixes the out-of-bounds access problems.
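
      A hedged sketch of that rounding-plus-padding step (padWeights is an illustrative name; the real layer code differs): each weight row is padded out to a multiple of 8 with zeros, so the kernel always receives a vecsize that is at least 8 and made of whole blocks.

          #include <opencv2/core.hpp>

          // Sketch: pad weight rows so vecsize is a multiple of 8 (and >= 8).
          cv::Mat padWeights(const cv::Mat& w)      // rows x vecsize, CV_32F
          {
              int vecsize = w.cols;
              int padded  = (int)cv::alignSize(vecsize, 8);  // round up
              cv::Mat wp = cv::Mat::zeros(w.rows, padded, CV_32F);
              w.copyTo(wp.colRange(0, vecsize));    // slack columns stay zero
              return wp;
          }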
      
      * Improve tail mask handling
      
       - Use a static array for generating tail masks (as requested)
       - Apply tail mask to the weights as well as the input vectors to prevent spurious propagation of NaNs/Infs
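
      A hedged sketch of that masked tail (illustrative, not the actual kernel): a static 0/-1 table supplies the lane mask, and maskload is applied to both operands, so whatever lies beyond the tail, possibly NaN or Inf, never reaches the accumulator.

          #include <immintrin.h>

          // The 8-int window starting at maskTab + 8 - rem has its first rem
          // entries set (-1, lane active) and the rest clear (0, lane off).
          static const int maskTab[16] = { -1,-1,-1,-1,-1,-1,-1,-1, 0,0,0,0,0,0,0,0 };

          // Sketch: product of the last rem (1..7) elements, zero elsewhere.
          __m256 maskedTail(const float* w, const float* x, int rem) {
              __m256i m  = _mm256_loadu_si256((const __m256i*)(maskTab + 8 - rem));
              __m256 wv = _mm256_maskload_ps(w, m); // masked-off lanes read as 0.0f
              __m256 xv = _mm256_maskload_ps(x, m); // mask both: 0 * 0, never 0 * NaN
              return _mm256_mul_ps(wv, xv);
          }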
      
      * Revert whitespace change
      
      * Improve readability of conditions for using AVX
      
      * dnn(lstm): minor coding style changes, replaced left aligned load
  11. 24 November 2021 · 1 commit
      Merge pull request #20406 from MarkGHX:gsoc_2021_webnn · 1fcf7ba5
      Hanxi Guo committed
      [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN
      
      * Add WebNN backend for OpenCV DNN Module
      
      Update dnn.cpp
      
      Update dnn.cpp
      
      Update dnn.cpp
      
      Update dnn.cpp
      
      Add WebNN header files into OpenCV 3rd-party files
      
      Create webnn.hpp
      
      update cmake
      
      Complete README and add OpenCVDetectWebNN.cmake file
      
      add webnn.cpp
      
      Modify webnn.cpp
      
      Can successfully compile the code for creating an MLContext
      
      Update webnn.cpp
      
      Update README.md
      
      Update README.md
      
      Update README.md
      
      Update README.md
      
      Update cmake files and update README.md
      
      Update OpenCVDetectWebNN.cmake and README.md
      
      Update OpenCVDetectWebNN.cmake
      
      Fix OpenCVDetectWebNN.cmake and update README.md
      
      Add source webnn_cpp.cpp and library libwebnn_proc.so
      
      Update dnn.cpp
      
      Update dnn.cpp
      
      Update dnn.cpp
      
      Update dnn.cpp
      
      update dnn.cpp
      
      update op_webnn
      
      update op_webnn
      
      Update op_webnn.hpp
      
      update op_webnn.cpp & hpp
      
      Update op_webnn.hpp
      
      Update op_webnn
      
      update the skeleton
      
      Update op_webnn.cpp
      
      Update op_webnn
      
      Update op_webnn.cpp
      
      Update op_webnn.cpp
      
      Update op_webnn.hpp
      
      update op_webnn
      
      update op_webnn
      
      Solved the problems with released variables.
      
      Fixed the bugs in op_webnn.cpp
      
      Implement op_webnn
      
      Implement Relu by WebNN API
      
      Update dnn.cpp for better test
      
      Update elementwise_layers.cpp
      
      Implement ReLU6
      
      Update elementwise_layers.cpp
      
      Implement SoftMax using WebNN API
      
      Implement Reshape by WebNN API
      
      Implement PermuteLayer by WebNN API
      
      Implement PoolingLayer using WebNN API
      
      Update pooling_layer.cpp
      
      Update pooling_layer.cpp
      
      Update pooling_layer.cpp
      
      Update pooling_layer.cpp
      
      Update pooling_layer.cpp
      
      Update pooling_layer.cpp
      
      Implement poolingLayer by WebNN API and add more detailed logs
      
      Update dnn.cpp
      
      Update dnn.cpp
      
      Remove redundant code and add more logs for poolingLayer
      
      Add more logs in the pooling layer implementation
      
      Fix the indent issue and resolve the compiling issue
      
      Fix the build problems
      
      Fix the build issue
      
      Fix the build issue
      
      Update dnn.cpp
      
      Update dnn.cpp
      
      * Fix the build issue
      
      * Implement BatchNorm Layer by WebNN API
      
      * Update convolution_layer.cpp
      
      This is a temporary file for Conv2d layer implementation
      
      * Integrate some general functions into op_webnn.cpp&hpp
      
      * Update const_layer.cpp
      
      * Update convolution_layer.cpp
      
      Still have some bugs that should be fixed.
      
      * Update conv2d layer and fc layer
      
      still have some problems to be fixed.
      
      * update constLayer, conv layer, fc layer
      
      There are still some bugs to be fixed.
      
      * Fix the build issue
      
      * Update concat_layer.cpp
      
      Still have some bugs to be fixed.
      
      * Update conv2d layer, fully connected layer and const layer
      
      * Update convolution_layer.cpp
      
      * Add OpenCV.js DNN module WebNN Backend (both using webnn-polyfill and electron)
      
      * Delete bib19450.aux
      
      * Update dnn.cpp
      
      * Fix Error in dnn.cpp
      
      * Resolve duplication in conditions in convolution_layer.cpp
      
      * Fixed the issues in the comments
      
      * Fix building issue
      
      * Update tutorial
      
      * Fixed comments
      
      * Address the comments
      
      * Update CMakeLists.txt
      
      * Offer more accurate perf test on native
      
      * Add better perf tests for both native and web
      
      * Modify perf tests for better results
      
      * Use a more recent version of Electron
      
      * Support latest WebNN Clamp op
      
      * Add definition of HAVE_WEBNN macro
      
      * Support group convolution
      
      * Implement Scale_layer using WebNN
      
      * Add Softmax option for native classification example
      
      * Fix comments
      
      * Fix comments
  12. 19 August 2021 · 1 commit
  13. 11 August 2021 · 1 commit
      Merge pull request #20287 from hanliutong:dev-rvv-0.10 · aaca4987
      HAN Liutong committed
      Optimization of DNN using native RISC-V vector intrinsics.
      
      * Use RVV to optimize fastGEMM (FP32) in DNN.
      
      * Use RVV to optimize fastGEMM1T in DNN.
      
      * Use RVV to optimize fastConv in DNN.
      
      * Use RVV to optimize fastDepthwiseConv in DNN.
      
      * Vectorize tails using vl (see the sketch after this list).
      
      * Use "vl" instead of scalar to handle small block in fastConv.
      
      * Fix out-of-bounds memory access in "fastGEMM1T".
      
      * Remove setvl.
      
      * Remove useless initialization.
      
      * Use loop unrolling to handle the tail part instead of a switch.
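
      A minimal sketch of the vl-based tail handling (using the __riscv_-prefixed names from the ratified v1.0 intrinsics spec; this PR predates that spec and used older spellings): the final iteration simply requests fewer elements, so no scalar tail loop is needed.

          #include <riscv_vector.h>
          #include <stddef.h>

          // Sketch: y[i] += alpha * x[i]; vsetvl shrinks vl on the last block.
          void axpy_rvv(float alpha, const float* x, float* y, size_t n) {
              for (size_t i = 0; i < n; ) {
                  size_t vl = __riscv_vsetvl_e32m8(n - i);
                  vfloat32m8_t vx = __riscv_vle32_v_f32m8(x + i, vl);
                  vfloat32m8_t vy = __riscv_vle32_v_f32m8(y + i, vl);
                  vy = __riscv_vfmacc_vf_f32m8(vy, alpha, vx, vl); // vy += alpha*vx
                  __riscv_vse32_v_f32m8(y + i, vy, vl);
                  i += vl;
              }
          }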
  14. 22 May 2021 · 1 commit
  15. 04 February 2021 · 1 commit
  16. 05 December 2020 · 1 commit
  17. 15 August 2020 · 1 commit
  18. 03 August 2020 · 1 commit
  19. 07 April 2020 · 1 commit
      Merge pull request #16840 from l-bat:matmul_inputs · 73477141
      Liubov Batanina committed
      * Supported FullyConnected layer with two inputs
      
      * Skipped test
      
      * Fix conditions
      
      * Added OpenCL support
      
      * Supported ReduceMean3D
      
      * Supported Expand layer
      
      * Fix warning
      
      * Added Normalize subgraph
      
      * refactoring
      
      * Used addLayer
      
      * Fix check
      
      * Used addLayer
      
      * Skip failed test
      
      * Added normalize1 subgraph
      
      * Fix comments
  20. 03 March 2020 · 2 commits
  21. 02 December 2019 · 1 commit
  22. 21 October 2019 · 1 commit
      Merge pull request #14827 from YashasSamaga:cuda4dnn-csl-low · 613c12e5
      Yashas Samaga B L committed
      CUDA backend for the DNN module
      
      * stub cuda4dnn design
      
      * minor fixes for tests and doxygen
      
      * add csl public api directory to module headers
      
      * add low-level CSL components
      
      * add high-level CSL components
      
      * integrate csl::Tensor into backbone code
      
      * switch to CPU iff unsupported; otherwise, fail on error
      
      * add fully connected layer
      
      * add softmax layer
      
      * add activation layers
      
      * support arbitrary rank TensorDescriptor
      
      * pass input wrappers to `initCUDA()`
      
      * add 1d/2d/3d-convolution
      
      * add pooling layer
      
      * reorganize and refactor code
      
      * fixes for gcc, clang and doxygen; remove cxx14/17 code
      
      * add blank_layer
      
      * add LRN layer
      
      * add rounding modes for pooling layer
      
      * split tensor.hpp into tensor.hpp and tensor_ops.hpp
      
      * add concat layer
      
      * add scale layer
      
      * add batch normalization layer
      
      * split math.cu into activations.cu and math.hpp
      
      * add eltwise layer
      
      * add flatten layer
      
      * add tensor transform api
      
      * add asymmetric padding support for convolution layer
      
      * add reshape layer
      
      * fix rebase issues
      
      * add permute layer
      
      * add padding support for concat layer
      
      * refactor and reorganize code
      
      * add normalize layer
      
      * optimize bias addition in scale layer
      
      * add prior box layer
      
      * fix and optimize normalize layer
      
      * add asymmetric padding support for pooling layer
      
      * add event API
      
      * improve pooling performance for some padding scenarios
      
      * avoid over-allocation of compute resources to kernels
      
      * improve prior box performance
      
      * enable layer fusion
      
      * add const layer
      
      * add resize layer
      
      * add slice layer
      
      * add padding layer
      
      * add deconvolution layer
      
      * fix channelwise ReLU initialization
      
      * add vector traits
      
      * add vectorized versions of relu, clipped_relu, power
      
      * add vectorized concat kernels
      
      * improve concat_with_offsets performance
      
      * vectorize scale and bias kernels
      
      * add support for multi-billion element tensors
      
      * vectorize prior box kernels
      
      * fix address alignment check
      
      * improve bias addition performance of conv/deconv/fc layers
      
      * restructure code for supporting multiple targets
      
      * add DNN_TARGET_CUDA_FP64
      
      * add DNN_TARGET_FP16
      
      * improve vectorization
      
      * add region layer
      
      * improve tensor API, add dynamic ranks
      
      1. use ManagedPtr instead of a Tensor in backend wrapper
      2. add new methods to tensor classes
        - size_range: computes the combined size for a given axis range
        - tensor span/view can be constructed from a raw pointer and shape
      3. the tensor classes can change their rank at runtime (previously rank was fixed at compile-time)
      4. remove device code from tensor classes (as they are unused)
      5. enforce strict conditions on tensor class APIs to improve debugging ability
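
      A hedged sketch of points 2-4 (types and names assumed for illustration; not copied from cuda4dnn): a view over a raw pointer with a runtime-rank shape and a size_range helper.

          #include <cstddef>
          #include <functional>
          #include <numeric>
          #include <vector>

          // Sketch: runtime-rank tensor view built from a raw pointer and a shape.
          struct TensorView {
              float* data;
              std::vector<std::size_t> shape;       // rank can change at runtime

              TensorView(float* p, std::vector<std::size_t> s)
                  : data(p), shape(std::move(s)) {}

              // Combined size of the axes in [axis_begin, axis_end).
              std::size_t size_range(std::size_t axis_begin, std::size_t axis_end) const {
                  return std::accumulate(shape.begin() + axis_begin,
                                         shape.begin() + axis_end,
                                         std::size_t(1),
                                         std::multiplies<std::size_t>());
              }
          };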
      
      * fix parametric relu activation
      
      * add squeeze/unsqueeze tensor API
      
      * add reorg layer
      
      * optimize permute and enable 2d permute
      
      * enable 1d and 2d slice
      
      * add split layer
      
      * add shuffle channel layer
      
      * allow tensors of different ranks in reshape primitive
      
      * patch SliceOp to allow Crop Layer
      
      * allow extra shape inputs in reshape layer
      
      * use `std::move_backward` instead of `std::move` for insert in resizable_static_array
      
      * improve workspace management
      
      * add spatial LRN
      
      * add nms (cpu) to region layer
      
      * add max pooling with argmax (and a fix to limits.hpp)
      
      * add max unpooling layer
      
      * rename DNN_TARGET_CUDA_FP32 to DNN_TARGET_CUDA
      
      * update supportBackend to be more rigorous
      
      * remove stray include that was preventing the non-cuda build
      
      * include op_cuda.hpp outside of the #if condition
      
      * refactoring, fixes and many optimizations
      
      * drop DNN_TARGET_CUDA_FP64
      
      * fix gcc errors
      
      * increase max. tensor rank limit to six
      
      * add Interp layer
      
      * drop custom layers; use BackendNode
      
      * vectorize activation kernels
      
      * fixes for gcc
      
      * remove wrong assertion
      
      * fix broken assertion in unpooling primitive
      
      * fix build errors in non-CUDA build
      
      * completely remove workspace from public API
      
      * fix permute layer
      
      * enable accuracy and perf. tests for DNN_TARGET_CUDA
      
      * add asynchronous forward
      
      * vectorize eltwise ops
      
      * vectorize fill kernel
      
      * fixes for gcc
      
      * remove CSL headers from public API
      
      * remove csl header source group from cmake
      
      * update min. cudnn version in cmake
      
      * add numerically stable FP32 log1pexp
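
      For reference, one common stable formulation of log1pexp (the thresholds here are illustrative, not necessarily the commit's exact values): naive log(1 + exp(x)) overflows for large x and rounds 1 + exp(x) to 1 for very negative x.

          #include <cmath>

          // Sketch: stable log(1 + exp(x)) for FP32.
          inline float log1pexp(float x) {
              if (x > 20.0f)  return x;            // 1 + e^x ~ e^x, so result ~ x
              if (x < -20.0f) return std::exp(x);  // log1p(t) ~ t for tiny t = e^x
              return std::log1p(std::exp(x));      // safe in the middle range
          }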
      
      * refactor code
      
      * add FP16 specialization to cudnn based tensor addition
      
      * vectorize scale1 and bias1 + minor refactoring
      
      * fix doxygen build
      
      * fix invalid alignment assertion
      
      * clear backend wrappers before allocateLayers
      
      * ignore memory lock failures
      
      * do not allocate internal blobs
      
      * integrate NVTX
      
      * add numerically stable half precision log1pexp
      
      * fix indentation, follow coding style, improve docs
      
      * remove accidental modification of IE code
      
      * Revert "add asynchronous forward"
      
      This reverts commit 1154b9da9da07e9b52f8a81bdcea48cf31c56f70.
      
      * [cmake] throw error for unsupported CC versions
      
      * fix rebase issues
      
      * add more docs, refactor code, fix bugs
      
      * minor refactoring and fixes
      
      * resolve warnings/errors from clang
      
      * remove haveCUDA() checks from supportBackend()
      
      * remove NVTX integration
      
      * changes based on review comments
      
      * avoid exception when no CUDA device is present
      
      * add color code for CUDA in Net::dump
  23. 29 August 2019 · 1 commit
  24. 14 June 2019 · 1 commit
  25. 19 February 2019 · 1 commit
  26. 17 January 2019 · 1 commit
  27. 16 November 2018 · 1 commit
  28. 15 November 2018 · 1 commit
  29. 05 October 2018 · 1 commit
  30. 26 September 2018 · 1 commit
  31. 07 September 2018 · 1 commit
  32. 06 September 2018 · 1 commit
  33. 30 August 2018 · 1 commit
  34. 02 August 2018 · 1 commit
  35. 05 July 2018 · 1 commit
  36. 25 June 2018 · 1 commit
  37. 04 June 2018 · 1 commit
  38. 23 May 2018 · 1 commit
  39. 16 May 2018 · 1 commit