1. 12 December 2019 (1 commit)
    • Merge pull request #16088 from alalek:dnn_eltwise_layer_different_src_channels · 5ee7abbe
      Committed by Alexander Alekhin
      dnn(eltwise): fix handling of different number of channels (a minimal illustrative sketch follows this entry)
      
      * dnn(test): reproducer for Eltwise layer issue from PR16063
      
      * dnn(eltwise): rework support for inputs with different channels
      
      * dnn(eltwise): get rid of finalize(), variableChannels
      
      * dnn(eltwise): update input sorting by number of channels
      
      - do not swap inputs if the number of channels is the same after truncation
      
      * dnn(test): skip "shortcut" with batch size 2 on MYRIAD targets
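To make the channel-mismatch case concrete, here is a minimal scalar sketch of a Darknet-style "shortcut" elementwise sum whose inputs may have different channel counts: the common channels are summed and any extra channels of the larger input pass through unchanged. This is only an illustration of the behaviour the fix targets; the function name and layout are hypothetical, not the actual cv::dnn::EltwiseLayer code.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative only: element-wise sum of two CHW tensors whose channel
// counts may differ. The overlapping channels are summed; any extra
// channels of the larger input are copied through unchanged.
static std::vector<float> eltwiseSumCHW(const std::vector<float>& a, int ca,
                                        const std::vector<float>& b, int cb,
                                        int h, int w)
{
    const int plane = h * w;
    const int cmin = std::min(ca, cb);
    const int cmax = std::max(ca, cb);
    const std::vector<float>& big   = (ca >= cb) ? a : b;
    const std::vector<float>& small = (ca >= cb) ? b : a;

    std::vector<float> out(static_cast<size_t>(cmax) * plane);
    for (int c = 0; c < cmax; ++c)
    {
        for (int i = 0; i < plane; ++i)
        {
            float v = big[static_cast<size_t>(c) * plane + i];
            if (c < cmin)  // both inputs contribute on the common channels
                v += small[static_cast<size_t>(c) * plane + i];
            out[static_cast<size_t>(c) * plane + i] = v;
        }
    }
    return out;
}
```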
  2. 11 December 2019 (10 commits)
  3. 10 December 2019 (12 commits)
  4. 09 December 2019 (5 commits)
    • Added Xperience.AI to copyright file. · 766465ce
      Committed by Alexander Smorkalov
    • Merge pull request #15257 from pmur:resize · a011035e
      Committed by Paul Murphy
      * resize: HResizeLinear reduce duplicate work
      
There appears to be a 2x unroll of HResizeLinear against k; however,
k is only incremented by 1 during the unroll, which results in k - 1
duplicate passes when k > 1 (see the sketch below).
      
      Likewise, the final pass may not respect the work done by the vector
      loop. Start it with the offset returned by the vector op if
implemented. Note that no vector ops are implemented today.
      
The performance impact is most noticeable on a linear downscale. A set of
performance tests is added to characterize this. The improvement is
10-50% depending on the scaling.
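The following sketch illustrates the fix described above. It is not the actual OpenCV HResizeLinear code; the xofs/alpha naming just follows the usual linear-resize convention. The point is that a loop processing two rows per iteration must advance k by 2, with a scalar tail picking up any odd row.

```cpp
// Illustrative 2x row unroll for a horizontal linear resize:
// dst[k][x] = S[xofs[x]] * alpha[2x] + S[xofs[x]+1] * alpha[2x+1].
// Stepping k by 1 here would recompute every row after the first.
static void hresizeLinear2x(const float* const* src, float* const* dst,
                            int count, int dwidth,
                            const int* xofs, const float* alpha)
{
    int k = 0;
    for (; k <= count - 2; k += 2)          // 2x unroll over rows k, k+1
    {
        const float* S0 = src[k];
        const float* S1 = src[k + 1];
        for (int x = 0; x < dwidth; ++x)
        {
            int sx = xofs[x];
            float a0 = alpha[2 * x], a1 = alpha[2 * x + 1];
            dst[k][x]     = S0[sx] * a0 + S0[sx + 1] * a1;
            dst[k + 1][x] = S1[sx] * a0 + S1[sx + 1] * a1;
        }
    }
    for (; k < count; ++k)                  // scalar tail: remaining odd row
    {
        const float* S = src[k];
        for (int x = 0; x < dwidth; ++x)
            dst[k][x] = S[xofs[x]] * alpha[2 * x]
                      + S[xofs[x] + 1] * alpha[2 * x + 1];
    }
}
```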
      
      * imgproc: vectorize HResizeLinear
      
      Performance is mostly gated by the gather operations
      for x inputs.
      
Likewise, provide a 2x unroll against k; this halves the number of
alpha gathers for larger k.
      
While not a 4x speedup, it still performs substantially better
on P9, giving a 1.4x improvement. The P8 baseline is
1.05-1.10x due to its reduced VSX instruction set.
      
      For float types, this results in a more modest
      1.2x improvement.
      
      * Update U8 processing for non-bitexact linear resize
      
      * core: hal: vsx: improve v_load_expand_q
      
With a little help, we can do this quickly without GPRs on
all VSX-enabled targets (see the note below).
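For context, v_load_expand_q in OpenCV's universal intrinsics loads four packed 8-bit values and widens each to a 32-bit lane (in the 128-bit register case). The scalar sketch below is purely a semantic reference for readers, not the VSX implementation this commit improves.

```cpp
#include <cstdint>

// Scalar equivalent of what a v_load_expand_q-style intrinsic computes:
// read four packed uint8 values and zero-extend each to a 32-bit lane.
// The VSX version in the commit keeps the data in vector registers rather
// than going through GPRs; this is only a reference for the semantics.
static inline void load_expand_q_scalar(const uint8_t* ptr, uint32_t out[4])
{
    for (int i = 0; i < 4; ++i)
        out[i] = static_cast<uint32_t>(ptr[i]);  // zero-extend u8 -> u32
}
```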
      
      * resize: Fix cn == 3 step per feedback
      
      Per feedback, ensure we don't overrun. This was caught via the
      failure observed in Test_TensorFlow.inception_accuracy.
    • Merge pull request #16085 from alalek:imgproc_threshold_to_zero_ipp_bug · 734de34b
      Committed by Alexander Alekhin
      * imgproc(IPP): wrong result from threshold(THRESH_TOZERO)
      
      * imgproc(IPP): disable IPP code to pass the THRESH_TOZERO test (the expected semantics are sketched below)
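For reference, the THRESH_TOZERO behaviour that the disabled IPP path must match: pixels above the threshold are kept as-is, everything else becomes zero, and maxval is ignored for this threshold type. The small self-contained check below is illustrative and is not the regression test added in the PR.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cassert>

// THRESH_TOZERO semantics: dst(x) = src(x) if src(x) > thresh, else 0.
int main()
{
    cv::Mat src = (cv::Mat_<uchar>(1, 4) << 10, 100, 200, 50);
    cv::Mat dst;
    cv::threshold(src, dst, 127, 255, cv::THRESH_TOZERO);

    assert(dst.at<uchar>(0, 0) == 0);    // 10  <= 127 -> 0
    assert(dst.at<uchar>(0, 1) == 0);    // 100 <= 127 -> 0
    assert(dst.at<uchar>(0, 2) == 200);  // 200 >  127 -> kept
    assert(dst.at<uchar>(0, 3) == 0);    // 50  <= 127 -> 0
    return 0;
}
```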
    • Remove Dummy layer · 883c4c60
      Committed by Dmitry Kurtaev
    • dnn: clarify error message from getMemoryShapes() · b1b505f7
      Committed by Alexander Alekhin
  5. 08 December 2019 (1 commit)
  6. 07 December 2019 (2 commits)
  7. 06 December 2019 (9 commits)