1. 27 May 2019, 2 commits
  2. 25 May 2019, 3 commits
  3. 24 May 2019, 9 commits
    • M
      [MKL-DNN] Add Fully Connected Op for inference only (#15226) · 0c39b97b
      Committed by Michał Gallus
      * fuse mul and elementwise add to fc
      
      * Reimplement the FC forward operator
      
      * Fix FC MKLDNN integration by transposing weights
      
      * Add FC MKLDNN Pass
      
      test=develop
      
      * FC MKLDNN Pass: change memcpy to std::copy
      
      * Fix MKLDNN FC handling of mismatched input and weight dims
      
      * Lower tolerance for MKL-DNN in resnet50 test
      
      test=develop
      
      * Adjust FC to support MKLDNN Op placement
      
      test=develop
      
      * Adjust Placement Op to set use_mkldnn attribute for graph
      
      test=develop
      
      * MKLDNN FC: fix weights format so that gemm version is called
      
      test=develop
      
      * FC MKLDNN: Remove tolerance decrease from tester_helper
      
      * FC MKL-DNN: Refactor the code, change input reorder to weight reorder
      
      * MKL-DNN FC: Introduce operator caching
      
      test=develop
      
      * FC MKL-DNN: Fix the tensor type in ExpectedKernelType
      
      test=develop
      
      * FC MKL-DNN: fix style changes
      
      test=develop
      
      * FC MKL-DNN: fallback to native on non-supported dim sizes
      
      test=develop
      
      * FC MKLDNN: fix CMake paths
      
      test=develop
      
      * FC MKLDNN: Refine placement pass graph mkldnn attribute
      
      test=develop
      
      * Fix Transpiler error for fuse_conv_eltwise
      
      test=develop
      
      * Fix missing STL includes in files
      
      test=develop
      
      * FC MKL-DNN: Enable new output size computation
      
      Also, refine pass to comply with newest interface.
      test=develop
      
      * FC MKL-DNN: enable only when fc_mkldnn_pass is enabled
      
      * FC MKL-DNN: Allow Weights to use oi or io format
      
      * FC MKL-DNN: Adjust UT to work with correct dims
      
      test=develop
      
      * Enable MKL DEBUG for resnet50 analyzer
      
      test=develop
      
      * FC MKL-DNN: Improve Hashing function
      
      test=develop
      
      * FC MKL-DNN: Fix shape for fc weights in transpiler
      
      * FC MKL-DNN: Update input pointer in re-used fc primitive
      
      * Add log for not handling fc fuse for unsupported dims
      
      test=develop
      
      * FC MKL-DNN: Move transpose from pass to Op Kernel
      
      test=develop
      
      * FC MKL-DNN: Disable transpose in unit test
      
      test=develop
      
      * FC MKL-DNN: Remove fc_mkldnn_pass from default list
      
      * Correct Flag for fake data analyzer tests
      
      test=develop
      
      * FC MKL-DNN: Add comment about fc mkldnn pass disablement
      
      test=develop
      
      * FC MKL-DNN: Disable fc in int8 tests
      
      test=develop
      0c39b97b
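The fusion in the commits above replaces a mul (matmul) op followed by elementwise_add with a single fully connected op, and the MKL-DNN kernel must accept weights in either io or oi layout. A minimal NumPy sketch of that equivalence (helper names are illustrative, not Paddle's actual pass or kernel code):

```python
import numpy as np

def mul_add(x, w, b):
    # Pattern before fusion: a matmul op followed by an elementwise add
    return np.matmul(x, w) + b

def fc(x, w, b, layout="io"):
    # Pattern after fusion: a single fully connected (gemm + bias) op.
    # Weights in "oi" (output x input) layout are transposed first,
    # which is what the weight reorder in the MKL-DNN kernel handles.
    if layout == "oi":
        w = w.T
    return np.matmul(x, w) + b

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))      # batch of 4, input dim 8
w_io = rng.standard_normal((8, 16))  # "io" layout: (input, output)
b = rng.standard_normal(16)

assert np.allclose(mul_add(x, w_io, b), fc(x, w_io, b))
assert np.allclose(fc(x, w_io, b), fc(x, w_io.T, b, layout="oi"))
```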
    • K
      Enable logical operators for the nGraph Bridge. (#17543) · e9216d06
      Committed by Krzysztof Binias
      test=develop
      e9216d06
    • Y
      Fix trust ratio in lamb (#17614) · e8990e64
      Committed by Yibing Liu
      test=develop
      e8990e64
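For context on the trust-ratio fix: LAMB scales each layer's update by a layer-wise trust ratio, the ratio of the parameter norm to the update norm (with weight decay already folded into the update). A rough sketch of that quantity under those assumptions (the function name is mine, not Paddle's kernel code):

```python
import numpy as np

def lamb_trust_ratio(param, update):
    # Layer-wise trust ratio from the LAMB optimizer:
    # ||w|| / ||update||, falling back to 1.0 when either norm is zero.
    w_norm = np.linalg.norm(param)
    u_norm = np.linalg.norm(update)
    return w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0

# A 3-4-5 triangle makes the ratio easy to check by hand
assert lamb_trust_ratio(np.array([3.0, 4.0]), np.array([0.0, 1.0])) == 5.0
assert lamb_trust_ratio(np.zeros(2), np.array([1.0, 0.0])) == 1.0
```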
    • C
      Add broadcast operators (#17503) · b5f4d5ed
      Committed by chengduo
      * This PR adds broadcast for multi-process training; it can be used in dynamic graphs to broadcast parameters.
      b5f4d5ed
    • S
      fix quantize_squash_pass segfault when no tensor linked to Bias (#17292) · bccb0ba4
      Committed by Sylwester Fraczek
      * fix quantize_squash_pass segfault when there is no tensor linked to the Bias input
      
      test=develop
      
      * add googlenet test
      
      test=develop
      
      * fix concat CreateKey not using input format
      
      test=develop
      bccb0ba4
    • M
      [NGraph] Enable elementwise mul operator (#17552) · 0d4cbdad
      Committed by mozga-intel
      0d4cbdad
    • M
      [NGraph] Enable assign operator for nGraph, test=develop (#17437) · f2694e12
      Committed by mozga-intel
      * Enable assign operator for nGraph, test=develop
      
      * Cross_entropy operator needs to be updated
      f2694e12
    • M
      Enable elementwise sub operator for ngraph (#17527) · cf02cb5e
      Committed by mozga-intel
      cf02cb5e
    • T
      [CPU] refine cpu softmax bwd (#17534) · 7ae461eb
      Committed by tensor-tang
      * refine softmax fwd
      
      test=develop
      
      * refine cpu softmax bwd
      
      test=develop
      
      * fix batch size
      
      test=develop
      
      * fix compile issue with gpu
      
      test=develop
      
      * add value clip
      7ae461eb
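The softmax commits above touch both passes; the standard formulas being refined look roughly like this in NumPy (the max subtraction is the value clip that keeps exp() from overflowing; the function names are mine, not Paddle's):

```python
import numpy as np

def softmax_fwd(x):
    # Subtract the per-row max before exp() so large logits cannot overflow
    shifted = x - x.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

def softmax_bwd(y, dy):
    # From the softmax Jacobian: dx_i = y_i * (dy_i - sum_j dy_j * y_j)
    return y * (dy - (dy * y).sum(axis=-1, keepdims=True))

x = np.array([[1.0, 2.0, 3.0], [1000.0, 1000.0, 1000.0]])
y = softmax_fwd(x)                    # second row stays finite despite huge logits
dx = softmax_bwd(y, np.ones_like(y))  # gradient of sum(y) is zero, since rows sum to 1
```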
  4. 23 May 2019, 5 commits
  5. 22 May 2019, 4 commits
  6. 21 May 2019, 6 commits
  7. 20 May 2019, 2 commits
    • Q
      Optimize communicator flags (#17494) · 287de41c
      Committed by Qiao Longfei
      * optimize communicator flag
      
      * change flags in init py test=develop
      287de41c
    • L
      Double backward elementwise div (#17416) · 10b23a72
      Committed by lvmengsi
      * double backward, elementwise_div
      
      * fix dx empty. test=develop
      
      * bug fix (#17392)
      
      fix a security bug
      
      * Enable stack operator for nGraph, test=develop (#17406)
      
      * fix sqrt_grad_grad unittest. test=develop (#17410)
      
      * fix sqrt_grad_grad unittest. test=develop
      
      * disable sqrt_grad_grad unittest. test=develop
      
      * test=develop, fix unittest
      
      * test=develop, fix unittest
      
      * test=develop, fix unittest
      
      * test=develop, fix bug
      
      * fix unittest. test=develop
      
      * fix unittest dx. test=develop
      
      * tmp fix! for test... test=develop
      
      * reduce tmp, test=develop
      
      * test=develop, reduce tmp
      
      * fix broadcast unittest. test=develop
      
      * fix format. test=develop
      
      * refine code. test=develop
      
      * refine code. test=develop
      
      * refine GetDoubleGradSafeTensor. test=develop
      
      * fix format. test=develop
      10b23a72
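The double-backward commit differentiates the first-order gradients of z = x / y a second time. Those first-order expressions, sketched in NumPy with my own helper name (Paddle's kernel also handles broadcasting, which this sketch skips):

```python
import numpy as np

def div_grads(x, y, dz):
    # First-order gradients of z = x / y (element-wise):
    #   dz/dx = 1 / y       ->  dx = dz / y
    #   dz/dy = -x / y**2   ->  dy = -dz * x / y**2
    # Double backward differentiates these expressions once more,
    # again with respect to x, y, and dz.
    dx = dz / y
    dy = -dz * x / (y * y)
    return dx, dy

# Central-difference check of dx and dy for the scalar loss sum(x / y)
x = np.array([1.0, -2.0, 3.0])
y = np.array([2.0, 4.0, -5.0])
dx, dy = div_grads(x, y, np.ones_like(x))
h = 1e-6
num_dx = ((x + h) / y - (x - h) / y) / (2 * h)
num_dy = (x / (y + h) - x / (y - h)) / (2 * h)
```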
  8. 19 May 2019, 1 commit
  9. 18 May 2019, 1 commit
  10. 17 May 2019, 3 commits
  11. 16 May 2019, 2 commits
  12. 15 May 2019, 2 commits