- 29 May 2019, 4 commits
-
-
Submitted by baojun
* Add depthwise_conv2d. test=develop
* Use CPU for nGraph. test=develop
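For readers unfamiliar with the op, the sketch below shows in plain NumPy what a depthwise 2-D convolution computes (each input channel is convolved with its own filter, with no cross-channel mixing). It is illustrative only; it is not the nGraph or Paddle kernel, and the shapes are made up.

```python
# Minimal NumPy sketch of depthwise 2-D convolution (valid padding, stride 1).
import numpy as np

def depthwise_conv2d(x, w):
    # x: (C, H, W) single image, w: (C, kh, kw) one filter per channel
    C, H, W = x.shape
    _, kh, kw = w.shape
    out = np.zeros((C, H - kh + 1, W - kw + 1))
    for c in range(C):                       # channels never mix
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[c, i, j] = np.sum(x[c, i:i + kh, j:j + kw] * w[c])
    return out

x = np.random.rand(3, 8, 8)
w = np.random.rand(3, 3, 3)
print(depthwise_conv2d(x, w).shape)   # (3, 6, 6)
```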
-
Submitted by gongweibao
-
Submitted by mozga-intel
-
Submitted by Yiqun Liu
Optimize the concat and split kernels for the special case where the number of inputs/outputs is 2 (#17415)
* Optimize the concat and split kernels for the special case where the number of inputs/outputs is 2. test=develop
* Refine the code. test=develop
* Correct the condition. test=develop
* Move the definition of tmp_data outside the if statement.
* Print the cuDNN minor version. test=develop
* Fix the case when in_num/o_num is 1 in the concat/split op. test=develop
* Remove const_cast. test=develop
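The gist of the optimization is a dedicated fast path when exactly two tensors are concatenated (or a tensor is split in two), so the kernel can work with two fixed pointers instead of looping over an array of input pointers. The NumPy sketch below only illustrates that branching idea; the actual change is in the CUDA kernels, and the function name here is made up.

```python
# Illustrative fast path for concatenating exactly two inputs along axis 1.
import numpy as np

def concat_rows(inputs, axis=1):
    if len(inputs) == 2:            # fast path: exactly two inputs (axis=1 assumed)
        a, b = inputs
        out = np.empty((a.shape[0], a.shape[1] + b.shape[1]), dtype=a.dtype)
        out[:, :a.shape[1]] = a     # two direct copies, no per-input indirection
        out[:, a.shape[1]:] = b
        return out
    return np.concatenate(inputs, axis=axis)   # general path

a, b = np.ones((4, 3)), np.zeros((4, 5))
print(concat_rows([a, b]).shape)    # (4, 8)
```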
-
- 28 May 2019, 2 commits
-
-
Submitted by lidanqing
* Add INT8 conv+relu6 fuse and enable the MobileNetV2 INT8 test. test=develop
* Change false and 0.0 to fuse_brelu and brelu_threshold. test=develop
  Change "fuse_relu || fuse_brelu" to "unsigned_output". test=develop
* Use relu instead of brelu as the INT8 post-op because INT8 brelu is not enabled in MKL-DNN v0.18. test=develop
* Continuous-integration fix. test=develop
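For context, brelu (bounded ReLU) with a threshold of 6 is the activation commonly called relu6. The snippet below is a minimal NumPy sketch of the post-op that the fused conv+relu6 kernel applies to the convolution output; it shows the math only, not MKL-DNN's implementation.

```python
# Bounded ReLU: clamp values into [0, threshold]; threshold=6 gives relu6.
import numpy as np

def brelu(x, threshold=6.0):
    return np.clip(x, 0.0, threshold)

x = np.array([-2.0, 0.5, 3.0, 7.5])
print(brelu(x))   # [0.  0.5 3.  6. ]
```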
-
- 27 May 2019, 7 commits
-
-
Submitted by Krzysztof Binias
* Enable the sqrt operator for the nGraph Bridge. test=develop
* Update activation_op.h
-
Submitted by Sylwester Fraczek
* Add concat quantization:
  add a unit test for quantizing concat;
  fix a wrong value when the input is not in the map of calculated scales;
  add use_quantizer to concat_op.cc;
  add scale_algo rules for concat. test=develop
* Missing fix for the multiple-input quantize-squash
* wojtuss review fix: add a comment. test=develop
-
Submitted by gongweibao
-
Submitted by Krzysztof Binias
test=develop
-
Submitted by chengduo
* enhance print
-
Submitted by Zeng Jinle
* Revert "Revert "Fix allocator bug"". This reverts commit 174d0d0b.
* Revert "fix travis ci". This reverts commit 5656fa9f. test=develop
* Add inlined_vector.h. test=develop
* Add inlined_vector_test. test=develop
* Clean up allocator code. test=develop
* Delete zero_size_allocator.h. test=develop
* Fix a failed unittest. test=develop
-
Submitted by Guo Sheng
test=develop
-
- 25 May 2019, 3 commits
-
-
Submitted by hutuxian
* gather_op supports an int64_t index by adding a template typename
* Add a UT and rename the typename. test=develop
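Below is a small NumPy sketch of the gather semantics with an int64 index tensor, which is what the templated kernel now accepts; the shapes and values are illustrative, not those of the new unit test.

```python
# Gather rows of x selected by an int64 index tensor.
import numpy as np

x = np.arange(12, dtype=np.float32).reshape(4, 3)   # (4, 3) source
index = np.array([3, 0, 2], dtype=np.int64)          # int64_t index
out = x[index]                                        # rows 3, 0, 2
print(out.shape)   # (3, 3)
```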
-
Submitted by mozga-intel
-
Submitted by Zhaolong Xing
* Align fluid INT8 training and TRT INT8 prediction; initial op converter for TRT INT8 prediction.
* Align fluid INT8 training and TRT INT8 inference: enhance the quant-dequant fuse pass, the op converters, the TRT engine, the TRT engine op, and the TRT subgraph pass.
* Add delete_quant_dequant_pass for TRT. test=develop
* Add the missing file. test=develop
* Modified the C++ interface but forgot to modify the pybind code; fix the IS_TRT_VERSION_GE bug and the elementwise op converter. test=develop
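As background, this kind of INT8 train/inference alignment hinges on the fake quantize-dequantize round trip: training pushes values through the same rounding that INT8 inference will apply, so the two see matching numerics. The NumPy sketch below shows that round trip under a single per-tensor scale, which is an assumption for illustration rather than the op's exact scale handling.

```python
# Fake quantize to int8 range and dequantize back, as INT8 training simulates.
import numpy as np

def fake_quant_dequant(x, scale, num_bits=8):
    qmax = 2 ** (num_bits - 1) - 1                         # 127 for int8
    q = np.clip(np.round(x / scale * qmax), -qmax, qmax)   # quantize
    return q * scale / qmax                                # dequantize to float

x = np.array([-0.9, -0.1, 0.0, 0.25, 0.8])
print(fake_quant_dequant(x, scale=1.0))
```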
-
- 24 May 2019, 9 commits
-
-
Submitted by Michał Gallus
* Fuse mul and elementwise_add to fc
* Reimplement the FC forward operator
* Fix FC MKLDNN integration by transposing weights
* Add FC MKLDNN pass. test=develop
* FC MKLDNN pass: change memcpy to std::copy
* Fix MKLDNN FC handling of mismatched input and weights dims
* Lower tolerance for MKL-DNN in the resnet50 test. test=develop
* Adjust FC to support MKLDNN op placement. test=develop
* Adjust the placement op to set the use_mkldnn attribute for the graph. test=develop
* MKLDNN FC: fix weights format so that the GEMM version is called. test=develop
* FC MKLDNN: remove the tolerance decrease from tester_helper
* FC MKL-DNN: refactor the code, change input reorder to weight reorder
* MKL-DNN FC: introduce operator caching. test=develop
* FC MKL-DNN: fix the tensor type in ExpectedKernelType. test=develop
* FC MKL-DNN: fix style changes. test=develop
* FC MKL-DNN: fall back to native on unsupported dim sizes. test=develop
* FC MKLDNN: fix CMake paths. test=develop
* FC MKLDNN: refine the placement pass graph mkldnn attribute. test=develop
* Fix Transpiler error for fuse_conv_eltwise. test=develop
* Fix missing STL includes in files. test=develop
* FC MKL-DNN: enable new output size computation; also refine the pass to comply with the newest interface. test=develop
* FC MKL-DNN: enable only when fc_mkldnn_pass is enabled
* FC MKL-DNN: allow weights to use oi or io format
* FC MKL-DNN: adjust the UT to work with correct dims. test=develop
* Enable MKL DEBUG for the resnet50 analyzer test. test=develop
* FC MKL-DNN: improve the hashing function. test=develop
* FC MKL-DNN: fix the shape for fc weights in the transpiler
* FC MKL-DNN: update the input pointer in the re-used fc primitive
* Add a log for not handling fc fuse for unsupported dims. test=develop
* FC MKL-DNN: move transpose from the pass to the op kernel. test=develop
* FC MKL-DNN: disable transpose in the unit test. test=develop
* FC MKL-DNN: remove fc_mkldnn_pass from the default list
* Correct the flag for fake data analyzer tests. test=develop
* FC MKL-DNN: add a comment about fc mkldnn pass disablement. test=develop
* FC MKL-DNN: disable fc in int8 tests. test=develop
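The fusion target is the common pattern where a mul (matrix multiply) is followed by an elementwise_add of a bias; the pass collapses both into a single fc computation. A minimal NumPy sketch of the equivalence (names are illustrative, not the pass's code):

```python
# mul + elementwise_add produces exactly what a fused fc computes.
import numpy as np

x = np.random.rand(8, 16)    # input  (batch, in_features)
w = np.random.rand(16, 4)    # weights
b = np.random.rand(4)        # bias

mul_out = x @ w              # "mul" op
add_out = mul_out + b        # "elementwise_add" op
fc_out = x @ w + b           # fused "fc" op

print(np.allclose(add_out, fc_out))   # True
```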
-
Submitted by Krzysztof Binias
test=develop
-
Submitted by Yibing Liu
test=develop
-
Submitted by chengduo
* This PR adds broadcast for multi-process; it can be used in dynamic graph mode to broadcast parameters.
-
Submitted by Sylwester Fraczek
* Fix a quantize_squash_pass segfault when no tensor is linked to the Bias input. test=develop
* Add a GoogLeNet test. test=develop
* Fix concat CreateKey not using the input format. test=develop
-
Submitted by mozga-intel
-
Submitted by mozga-intel
* Enable the assign operator for nGraph. test=develop
* The cross_entropy operators need to be updated
-
Submitted by mozga-intel
-
Submitted by tensor-tang
* Refine softmax forward. test=develop
* Refine CPU softmax backward. test=develop
* Fix batch size. test=develop
* Fix compile issue with GPU. test=develop
* Add value clip
-
- 23 May 2019, 5 commits
-
-
Submitted by tensor-tang
* Refine softmax forward. test=develop
* Fix compile issue with GPU. test=develop
* Add value clip to avoid exp overflow
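The usual way to keep exp from overflowing in softmax is to shift the logits by their row maximum and, optionally, clamp the shifted values. The sketch below shows that idea in NumPy; the exact clip bound used in the kernel is an assumption here, not taken from the commit.

```python
# Numerically stable softmax with a value clip on the shifted logits.
import numpy as np

def stable_softmax(x, clip_min=-64.0, axis=-1):
    shifted = x - np.max(x, axis=axis, keepdims=True)   # largest value becomes 0
    shifted = np.maximum(shifted, clip_min)             # clip very negative values
    e = np.exp(shifted)
    return e / np.sum(e, axis=axis, keepdims=True)

logits = np.array([[1000.0, 1001.0, 999.0]])            # naive exp would overflow
print(stable_softmax(logits))
```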
-
Submitted by mozga-intel
-
Submitted by jerrywgz
* Fix the concat op vartype check. test=develop
-
Submitted by Qiao Longfei
Async executor supports the communicator
-
Submitted by mozga-intel
-
- 22 May 2019, 4 commits
-
-
Submitted by Krzysztof Binias
test=develop
-
Submitted by Sevin F. Varoglu
* Add the increment op to the nGraph engine. test=develop
* Fix style errors. test=develop
-
Submitted by baojun
-
Submitted by guomingz
* Relu6 is the bottleneck op for MobileNet-v2. Since MKL-DNN supports conv/relu6 fusion, we implement this fusion via a graph pass. INT8 support for this fusion will arrive in MKL-DNN v0.20, so this PR focuses on the FP32 optimization. The table below shows the benchmark (FPS) measured on SKX-8180 (28 cores):

  | Batch size | With fusion | Without fusion |
  | -- | -- | -- |
  | 1 | 214.7 | 53.4 |
  | 50 | 1219.727 | 137.280 |

  test=develop
* Fix the format issue. test=develop
* Add the missing nolint comments. test=develop
* Fix the typos. test=develop
* Register the conv_brelu_mkldnn_fuse_pass for the MKLDNN engine. test=develop
* Adjust the indentation. test=develop
* Add the test_conv_brelu_mkldnn_fuse_pass case. test=develop
* Slightly update the code per Baidu comments: embed the parameter definition in the code to make it easier to understand. test=develop
-
- 21 May 2019, 6 commits
-
-
Submitted by Yibing Liu
* Add the LAMB optimizer
* Expose the LAMB optimizer's APIs. test=develop, test=document_preview
* Clean up code & docs. test=develop, test=document_preview
* Update the LAMB optimizer's formula. test=develop
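For reference, this is the LAMB update rule as given in the original paper (You et al., 2019); the exact variant implemented in this commit (e.g. bias correction or epsilon placement) may differ.

```latex
% LAMB update from the original paper; details of the implemented variant may differ.
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\,g_t \\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2 \\
r_t &= \frac{m_t}{\sqrt{v_t} + \epsilon} \\
w_{t+1} &= w_t - \eta_t\,\frac{\lVert w_t \rVert}{\lVert r_t + \lambda w_t \rVert}\,\bigl(r_t + \lambda w_t\bigr)
\end{aligned}
```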
-
Submitted by mozga-intel
-
Submitted by Tao Luo
test=develop
-
Submitted by mozga-intel
-
Submitted by liuwei1031
* http://newicafe.baidu.com:80/issue/PaddleSec-33/show?from=page
* http://newicafe.baidu.com:80/issue/PaddleSec-28/show?from=page
* http://newicafe.baidu.com:80/issue/PaddleSec-25/show?from=page
* http://newicafe.baidu.com:80/issue/PaddleSec-24/show?from=page
* http://newicafe.baidu.com:80/issue/PaddleSec-21/show?from=page
* http://newicafe.baidu.com:80/issue/PaddleSec-20/show?from=page
test=develop
-
Submitted by Zhaolong Xing
* Add the quant_dequant_moving_avg_max_abs op. test=develop
* Add more notes for the quant-dequant op. test=develop
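Conceptually, this op maintains a quantization scale that tracks a moving average of each batch's maximum absolute value. The NumPy sketch below shows one simple exponential-moving-average formulation; the decay value and the exact accumulation scheme used by the op are assumptions for illustration.

```python
# Moving-average max-abs scale: an EMA of per-batch max(|x|) used as the quant scale.
import numpy as np

def update_scale(prev_scale, batch, decay=0.9):
    cur_max = np.max(np.abs(batch))
    return decay * prev_scale + (1.0 - decay) * cur_max

scale = 1.0
for _ in range(5):
    scale = update_scale(scale, np.random.randn(32, 16))
print(scale)
```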
-