- 20 Mar 2021, 1 commit

Committed by Liubov Batanina

Added OpenVINO ARM target
* Added IE ARM target
* Added OpenVINO ARM target
* Delete ARM target
* Detect ARM platform
* Changed device name in ArmPlugin
* Change ARM detection
- 04 Dec 2020, 1 commit

Committed by Wenqing Zhang

[GSoC] High Level API and Samples for Scene Text Detection and Recognition
* APIs and samples for scene text detection and recognition
* update APIs and tutorial for Text Detection and Recognition
* API updates: (1) put decodeType into struct Voc (2) optimize the post-processing of DB
* sample update: (1) add transformation into scene_text_spotting.cpp (2) modify text_detection.cpp with API update
* update tutorial
* simplify text recognition API; update tutorial
* update impl usage in recognize() and detect()
* dnn: refactoring public API of TextRecognitionModel/TextDetectionModel
* update provided models; update opencv.bib
* dnn: adjust text rectangle angle
* remove points ordering operation in model.cpp
* update gts of DB test in test_model.cpp
* dnn: ensure to keep text rectangle angle - avoid 90/180 degree turns
* dnn(text): use quadrangle result in TextDetectionModel API
* dnn: update Text Detection API (1) keep points' order consistent with (bl, tl, tr, br) in unclip (2) update contourScore with boundingRect
- 02 Dec 2019, 1 commit

Committed by Lubov Batanina

* Support nGraph
* Fix resize
- 21 Oct 2019, 1 commit

Committed by Yashas Samaga B L

CUDA backend for the DNN module
* stub cuda4dnn design
* minor fixes for tests and doxygen
* add csl public api directory to module headers
* add low-level CSL components
* add high-level CSL components
* integrate csl::Tensor into backbone code
* switch to CPU iff unsupported; otherwise, fail on error
* add fully connected layer
* add softmax layer
* add activation layers
* support arbitrary rank TensorDescriptor
* pass input wrappers to `initCUDA()`
* add 1d/2d/3d-convolution
* add pooling layer
* reorganize and refactor code
* fixes for gcc, clang and doxygen; remove cxx14/17 code
* add blank_layer
* add LRN layer
* add rounding modes for pooling layer
* split tensor.hpp into tensor.hpp and tensor_ops.hpp
* add concat layer
* add scale layer
* add batch normalization layer
* split math.cu into activations.cu and math.hpp
* add eltwise layer
* add flatten layer
* add tensor transform api
* add asymmetric padding support for convolution layer
* add reshape layer
* fix rebase issues
* add permute layer
* add padding support for concat layer
* refactor and reorganize code
* add normalize layer
* optimize bias addition in scale layer
* add prior box layer
* fix and optimize normalize layer
* add asymmetric padding support for pooling layer
* add event API
* improve pooling performance for some padding scenarios
* avoid over-allocation of compute resources to kernels
* improve prior box performance
* enable layer fusion
* add const layer
* add resize layer
* add slice layer
* add padding layer
* add deconvolution layer
* fix channelwise ReLU initialization
* add vector traits
* add vectorized versions of relu, clipped_relu, power
* add vectorized concat kernels
* improve concat_with_offsets performance
* vectorize scale and bias kernels
* add support for multi-billion element tensors
* vectorize prior box kernels
* fix address alignment check
* improve bias addition performance of conv/deconv/fc layers
* restructure code for supporting multiple targets
* add DNN_TARGET_CUDA_FP64
* add DNN_TARGET_FP16
* improve vectorization
* add region layer
* improve tensor API, add dynamic ranks:
  1. use ManagedPtr instead of a Tensor in backend wrapper
  2. add new methods to tensor classes:
     - size_range: computes the combined size for a given axis range
     - tensor span/view can be constructed from a raw pointer and shape
  3. the tensor classes can change their rank at runtime (previously rank was fixed at compile-time)
  4. remove device code from tensor classes (as they are unused)
  5. enforce strict conditions on tensor class APIs to improve debugging ability
* fix parametric relu activation
* add squeeze/unsqueeze tensor API
* add reorg layer
* optimize permute and enable 2d permute
* enable 1d and 2d slice
* add split layer
* add shuffle channel layer
* allow tensors of different ranks in reshape primitive
* patch SliceOp to allow Crop Layer
* allow extra shape inputs in reshape layer
* use `std::move_backward` instead of `std::move` for insert in resizable_static_array
* improve workspace management
* add spatial LRN
* add nms (cpu) to region layer
* add max pooling with argmax (and a fix to limits.hpp)
* add max unpooling layer
* rename DNN_TARGET_CUDA_FP32 to DNN_TARGET_CUDA
* update supportBackend to be more rigorous
* remove stray include from preventing non-cuda build
* include op_cuda.hpp outside condition #if
* refactoring, fixes and many optimizations
* drop DNN_TARGET_CUDA_FP64
* fix gcc errors
* increase max. tensor rank limit to six
* add Interp layer
* drop custom layers; use BackendNode
* vectorize activation kernels
* fixes for gcc
* remove wrong assertion
* fix broken assertion in unpooling primitive
* fix build errors in non-CUDA build
* completely remove workspace from public API
* fix permute layer
* enable accuracy and perf. tests for DNN_TARGET_CUDA
* add asynchronous forward
* vectorize eltwise ops
* vectorize fill kernel
* fixes for gcc
* remove CSL headers from public API
* remove csl header source group from cmake
* update min. cudnn version in cmake
* add numerically stable FP32 log1pexp
* refactor code
* add FP16 specialization to cudnn based tensor addition
* vectorize scale1 and bias1 + minor refactoring
* fix doxygen build
* fix invalid alignment assertion
* clear backend wrappers before allocateLayers
* ignore memory lock failures
* do not allocate internal blobs
* integrate NVTX
* add numerically stable half precision log1pexp
* fix indentation, following coding style, improve docs
* remove accidental modification of IE code
* Revert "add asynchronous forward" (reverts commit 1154b9da9da07e9b52f8a81bdcea48cf31c56f70)
* [cmake] throw error for unsupported CC versions
* fix rebase issues
* add more docs, refactor code, fix bugs
* minor refactoring and fixes
* resolve warnings/errors from clang
* remove haveCUDA() checks from supportBackend()
* remove NVTX integration
* changes based on review comments
* avoid exception when no CUDA device is present
* add color code for CUDA in Net::dump
- 04 Oct 2019, 1 commit

Committed by Alexander Alekhin
- 26 Jul 2019, 1 commit

Committed by Alexander Alekhin
- 23 Jun 2019, 1 commit

Committed by Alexander Alekhin
- 30 May 2019, 1 commit

Committed by Alexander Alekhin
- 25 Apr 2019, 1 commit

Committed by Dmitry Kurtaev
- 29 Mar 2019, 1 commit

Committed by Lubov Batanina

* Fix precision in tests for MyriadX
* Fix ONNX tests
* Add output range in ONNX tests
* Skip tests on Myriad OpenVINO 2018R5
* Add detect MyriadX
* Add detect MyriadX on OpenVINO R5
* Skip tests on Myriad next version of OpenVINO
* dnn(ie): VPU type from environment variable
* dnn(test): validate VPU type
* dnn(test): update DLIE test skip conditions
- 21 Feb 2019, 1 commit

Committed by Dmitry Kurtaev
- 24 Dec 2018, 1 commit

Committed by Wu Zhiwen
- 06 Dec 2018, 1 commit

Committed by Alexander Alekhin
- 05 Dec 2018, 1 commit

Committed by Maksim Shabunin

DNN backends registry (#13332)
* Added dnn backends registry
* dnn: process DLIE/FPGA target
- 19 Nov 2018, 1 commit

Committed by Dmitry Kurtaev
- 15 Nov 2018, 1 commit

Committed by Alexander Alekhin
- 29 Oct 2018, 1 commit

Committed by WuZhiwen

* dnn: Add a Vulkan based backend
  This commit adds a new backend "DNN_BACKEND_VKCOM" and a new target "DNN_TARGET_VULKAN". VKCOM means Vulkan-based computation library. This backend uses the Vulkan API and SPIR-V shaders to do the inference computation for layers. The layer types implemented in DNN_BACKEND_VKCOM include: Conv, Concat, ReLU, LRN, PriorBox, Softmax, MaxPooling, AvePooling, Permute. This is just a beginning for Vulkan in OpenCV DNN; more layer types will be supported, and performance tuning is on the way.
  Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/vulkan: Add FindVulkan.cmake to detect the Vulkan SDK
  To build dnn with Vulkan support, install the Vulkan SDK, set the environment variable "VULKAN_SDK", and add "-DWITH_VULKAN=ON" to the cmake command. You can download the Vulkan SDK from: https://vulkan.lunarg.com/sdk/home#linux
  For installation instructions, see:
  https://vulkan.lunarg.com/doc/sdk/latest/linux/getting_started.html
  https://vulkan.lunarg.com/doc/sdk/latest/windows/getting_started.html
  https://vulkan.lunarg.com/doc/sdk/latest/mac/getting_started.html
  for Linux, Windows, and Mac respectively.
  To run the Vulkan backend, also install the Mesa driver. On Ubuntu, use 'sudo apt-get install mesa-vulkan-drivers'.
  To test, use '$BUILD_DIR/bin/opencv_test_dnn --gtest_filter=*VkCom*'.
  Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/Vulkan: dynamically load the Vulkan runtime
  No compile-time dependency on the Vulkan library. If the Vulkan runtime is unavailable, fall back to the CPU path. Use the environment variable "OPENCL_VULKAN_RUNTIME" to specify the path to your own Vulkan runtime library.
  Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/Vulkan: Add a python script to compile GLSL shaders to SPIR-V shaders
  The SPIR-V shaders are in the format of text-based 32-bit hexadecimal numbers, and are inserted into .cpp files as unsigned int32 arrays.
* dnn/Vulkan: Put Vulkan headers into the 3rdparty directory, and some other fixes
  Vulkan header files are copied from https://github.com/KhronosGroup/Vulkan-Docs/tree/master/include/vulkan to 3rdparty/include. Fix the copyright declaration issue. Refine OpenCVDetectVulkan.cmake.
* dnn/Vulkan: Add Vulkan backend tests into existing ones; also fixed some test failures
  - Don't use a bool variable as a uniform for a shader
  - Fix dispatched group number beyond max issue
  - Bypass "group > 1" convolution (this should be supported in the future)
* dnn/Vulkan: Fix multiple initialization in one thread
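The build and test steps scattered through the commit message above can be collected into one sequence. This is a build-configuration sketch assembled from the commit text; the SDK path and build directory are placeholders you must adapt.

```shell
# Install the Vulkan SDK (https://vulkan.lunarg.com/sdk/home#linux) and point
# the build at it; the path below is an assumed example.
export VULKAN_SDK=/path/to/vulkan-sdk

# Configure OpenCV with the Vulkan backend enabled.
cmake -DWITH_VULKAN=ON ..
make -j"$(nproc)"

# At runtime the Mesa Vulkan driver is needed; on Ubuntu:
sudo apt-get install mesa-vulkan-drivers

# Run only the Vulkan backend tests.
"$BUILD_DIR"/bin/opencv_test_dnn --gtest_filter='*VkCom*'
```

Since the runtime is loaded dynamically, a build with `WITH_VULKAN=ON` still runs on machines without Vulkan; the backend simply falls back to the CPU path.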
- 19 Sep 2018, 1 commit

Committed by Alexander Alekhin
- 13 Sep 2018, 1 commit

Committed by Alexander Alekhin
- 04 Sep 2018, 1 commit

Committed by Alexander Alekhin
- 31 Aug 2018, 2 commits

Committed by Alexander Alekhin

Committed by Alexander Alekhin

- OpenCL tests didn't run any OpenCL kernels
- use real configuration from existing models (the first 100 cases)
- batch size = 1
- 26 Jun 2018, 1 commit

Committed by Dmitry Kurtaev
- 05 Jun 2018, 1 commit

Committed by Alexander Alekhin
- 31 May 2018, 1 commit

Committed by Dmitry Kurtaev

* Update Intel's Inference Engine deep learning backend
* Remove cpu_extension dependency
* Update Darknet accuracy tests
- 19 Apr 2018, 1 commit

Committed by Dmitry Kurtaev
- 20 Nov 2017, 1 commit

Committed by David Geldreich

add a corresponding test
- 26 Jun 2017, 1 commit
- 22 May 2017, 1 commit

Committed by Amro

plus minor edits and fixes
- 18 Jan 2014, 1 commit

Committed by Ilya Lavrenov
- 02 Sep 2013, 1 commit

Committed by Fedor Morozov
- 26 Aug 2013, 1 commit

Committed by Fedor Morozov
- 05 Aug 2013, 1 commit

Committed by Fedor Morozov
- 03 Aug 2013, 1 commit

Committed by Alexander Shishkov
- 01 Aug 2013, 1 commit

Committed by Fedor Morozov
- 12 May 2010, 1 commit

Committed by Vadim Pisarevsky