- 25 May 2019 (1 commit)

Submitted by Zhaolong Xing

* Align fluid int8 training with TRT int8 prediction; add the initial op converters for TRT int8 prediction (a configuration sketch follows this entry).
* Align fluid int8 training and TRT int8 inference: enhance the quant-dequant fuse pass, the op converters, the TRT engine, the TRT engine op, and the TRT subgraph pass.
* Add delete_quant_dequant_pass for TRT. test=develop
* Add the missing file. test=develop
* Update the pybind code after changing the C++ interface; fix the IS_TRT_VERSION_GE bug and the elementwise op converter. test=develop
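
For orientation, the sketch below shows how a TensorRT int8 subgraph run is typically switched on from the Python inference bindings of that period. This is a minimal sketch, not taken from the commit: the model directory is a placeholder, and the exact signature and availability of enable_tensorrt_engine in paddle.fluid.core may differ between Paddle versions.

```python
# Hedged sketch: enabling the TensorRT subgraph engine with int8 precision.
# The model directory is a placeholder; binding names may vary across versions.
from paddle.fluid.core import AnalysisConfig, create_paddle_predictor

config = AnalysisConfig("./int8_model_dir")           # placeholder model directory
config.enable_use_gpu(100, 0)                         # ~100 MB initial GPU memory on device 0
config.enable_tensorrt_engine(1 << 20,                # workspace size in bytes
                              1,                      # max batch size
                              3,                      # min subgraph size handed to TRT
                              AnalysisConfig.Precision.Int8)
predictor = create_paddle_predictor(config)
```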

- 24 May 2019 (2 commits)

Submitted by Michał Gallus

* Fuse mul and elementwise_add into fc.
* Reimplement the FC forward operator.
* Fix the FC MKL-DNN integration by transposing the weights.
* Add an FC MKL-DNN pass. test=develop
* FC MKL-DNN pass: change memcpy to std::copy.
* Fix MKL-DNN FC handling of mismatched input and weights dims.
* Lower the MKL-DNN tolerance in the resnet50 test. test=develop
* Adjust FC to support MKL-DNN op placement. test=develop
* Adjust the placement op to set the use_mkldnn attribute for the graph. test=develop
* MKL-DNN FC: fix the weights format so that the gemm version is called. test=develop
* FC MKL-DNN: remove the tolerance decrease from tester_helper.
* FC MKL-DNN: refactor the code; change the input reorder to a weight reorder.
* MKL-DNN FC: introduce operator caching. test=develop
* FC MKL-DNN: fix the tensor type in ExpectedKernelType. test=develop
* FC MKL-DNN: fix style changes. test=develop
* FC MKL-DNN: fall back to the native kernel on unsupported dim sizes. test=develop
* FC MKL-DNN: fix CMake paths. test=develop
* FC MKL-DNN: refine the placement pass graph mkldnn attribute. test=develop
* Fix the transpiler error for fuse_conv_eltwise. test=develop
* Fix missing STL includes in files. test=develop
* FC MKL-DNN: enable the new output size computation; also refine the pass to comply with the newest interface. test=develop
* FC MKL-DNN: enable only when fc_mkldnn_pass is enabled (see the sketch after this entry).
* FC MKL-DNN: allow weights to use the oi or io format.
* FC MKL-DNN: adjust the UT to work with correct dims. test=develop
* Enable MKL DEBUG for the resnet50 analyzer. test=develop
* FC MKL-DNN: improve the hashing function. test=develop
* FC MKL-DNN: fix the shape of fc weights in the transpiler.
* FC MKL-DNN: update the input pointer in the re-used fc primitive.
* Add a log for not handling the fc fuse for unsupported dims. test=develop
* FC MKL-DNN: move the transpose from the pass to the op kernel. test=develop
* FC MKL-DNN: disable the transpose in the unit test. test=develop
* FC MKL-DNN: remove fc_mkldnn_pass from the default pass list.
* Correct the flag for the fake-data analyzer tests. test=develop
* FC MKL-DNN: add a comment about the fc_mkldnn_pass disablement. test=develop
* FC MKL-DNN: disable fc in the int8 tests. test=develop
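
Because the entry notes that fc_mkldnn_pass was taken off the default pass list and the MKL-DNN FC kernel is only used when that pass is enabled, here is a minimal sketch of opting in through the inference config. The model directory is a placeholder and the Python pass_builder()/append_pass binding names are assumptions; the intent matches the C++ side call config.pass_builder()->AppendPass("fc_mkldnn_pass").

```python
# Hedged sketch: opting in to the MKL-DNN FC fuse pass, which is not enabled by default.
# The model directory and the exact Python binding names are assumptions.
from paddle.fluid.core import AnalysisConfig, create_paddle_predictor

config = AnalysisConfig("./fc_model_dir")              # placeholder model directory
config.enable_mkldnn()                                 # run MKL-DNN kernels on CPU
config.pass_builder().append_pass("fc_mkldnn_pass")    # fuse mul + elementwise_add into MKL-DNN fc
predictor = create_paddle_predictor(config)
```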

Submitted by Sylwester Fraczek

* Add a conv_concat_relu fuse pass. test=develop
* Add test code. test=develop
* Add the missing unordered_map include. test=develop
* Review fixes for wojtuss. test=develop
* Remove the 'should (not) be fused' comment statements; one of them was invalid anyway. test=develop

- 22 May 2019 (1 commit)

Submitted by guomingz

* Relu6 is the bottleneck op for MobileNet-v2. Since MKL-DNN supports conv/relu6 fusion, implement the fusion via a graph pass. Int8 support for this fusion arrives with MKL-DNN v0.20, so this PR focuses on the fp32 optimization (the targeted conv/relu6 pattern is sketched after this entry). The benchmark (FPS) below was measured on SKX-8180 (28 cores).

  | Batch size | With fusion | Without fusion |
  | -- | -- | -- |
  | 1 | 214.7 | 53.4 |
  | 50 | 1219.727 | 137.280 |

* Fix the format issue. test=develop
* Add the missing nolint comments. test=develop
* Fix the typos. test=develop
* Register the conv_brelu_mkldnn_fuse_pass for the MKLDNN engine. test=develop
* Adjust the indentation. test=develop
* Add the test_conv_brelu_mkldnn_fuse_pass case. test=develop
* Slightly update the code per Baidu comments: embed the parameter definition into the code to make it easier to understand. test=develop
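
As a reference for the pattern this pass targets, the sketch below builds a conv2d followed by relu6 with the fluid 1.x layers API. The shapes and filter counts are illustrative assumptions; it is the resulting conv + bounded-relu pair that conv_brelu_mkldnn_fuse_pass is meant to fold into a single MKL-DNN convolution.

```python
# Sketch of the conv2d + relu6 pattern the fuse pass looks for (shapes are illustrative).
import paddle.fluid as fluid

image = fluid.layers.data(name='image', shape=[3, 224, 224], dtype='float32')
conv = fluid.layers.conv2d(input=image, num_filters=32, filter_size=3, padding=1)
out = fluid.layers.relu6(conv)   # bounded relu with threshold 6, fused into the conv at inference
```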

- 20 May 2019 (1 commit)

Submitted by Tao Luo

test=develop

- 08 May 2019 (1 commit)

Submitted by chengduo

* Move the pass to ir.
* Polish the code. test=develop
* Fix a dependency. test=develop

- 07 May 2019 (1 commit)

Submitted by 石晓伟

* Cherry-pick commit 88770542.
* Cherry-pick commit 3f0b97df.
* Cherry-pick from 16691: Anakin subgraph support for yolo_v3 and faster-rcnn (cherry picked from commit 8643dbc2).
* Cherry-pick from 16662: Anakin subgraph CPU support (cherry picked from commit 7ad182e1).
* Cherry-pick from 1662, 16797: add Anakin int8 support (cherry picked from commit e14ab180).
* Cherry-pick from 16813: change the singleton to graph RegistBlock. test=release/1.4 (cherry picked from commit 4b9fa423)
* Cherry-pick 16837: support ShuffleNet and MobileNet-v2. test=release/1.4 (cherry picked from commit a6fb066f)
* Cherry-pick: Anakin subgraph, add an opt config layout argument (#16846). test=release/1.4 (cherry picked from commit 8121b3ec)
* Add shuffle_channel_detect (cherry picked from commit 6efdea89).
* Update the shuffle_channel op convert. test=release/1.4 (cherry picked from commit e4726a06)
* Modify the symbol export rules. test=develop

- 28 March 2019 (1 commit)

Submitted by nhzlx

Refine the TRT first run; add a quant-dequant fuse pass; omit the simplify_anakin_priorbox_detection template; omit the transpose_flatten_concat_fuse template. test=develop

- 25 March 2019 (1 commit)

Submitted by Wojciech Uss

test=develop

- 21 March 2019 (1 commit)

Submitted by luotao1

test=develop

- 20 March 2019 (3 commits)
- 19 March 2019 (4 commits)

Submitted by luotao1

test=develop

Submitted by zhhsplendid

test=develop

Submitted by Tao Luo

Submitted by Wojciech Uss

* Add cpu_quantize_placement_pass for C-API quantization. test=develop
* Add a comment on the required pass attributes. test=develop

- 18 March 2019 (1 commit)

Submitted by Wojciech Uss

* Add cpu_quantize_pass for C-API quantization. test=develop
* Add a cpu_quantize_pass test.
* Fix lint: add includes for memory, unordered_map and unordered_set. test=develop
* fuse_relu 1. test=develop
* Tuned 2, without squash.
* Fixes. test=develop
* Remove unused vars. test=develop
* Refactored. test=develop
* Fix lint: C-style cast -> C++ style cast. test=develop
* Remove QuantMax and C-style casts. test=develop
* Remove the last usage of QuantMax. test=develop
* Fix the Analysis Predictor UT: check whether memory_optimize_pass has already been added to the analysis config before adding a new one, so that it is not added multiple times. test=develop
* Change map to unordered_map; fix the forgotten part of cpu_quantize_pass_tester.cc. test=develop
* Remove the quantized attribute.
* Fix cpu_quantize_pass_tester and the op attr comments. test=develop
* Remove a redundant line. test=debug
* Remove gmock. test=develop
* Fix after merge.

- 16 March 2019 (1 commit)

Submitted by qingqing01

test=develop

- 15 March 2019 (1 commit)

Submitted by qingqing01

* Support sync batch norm.
* Note: do not enable it on a single device. Usage (a fuller runnable sketch follows this entry):

      build_strategy = fluid.BuildStrategy()
      build_strategy.sync_batch_norm = True
      binary = fluid.compiler.CompiledProgram(tp).with_data_parallel(
          loss_name=loss_mean.name, build_strategy=build_strategy)
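
Expanding the usage above into a self-contained sketch: the toy network, the optimizer, and the name tp are illustrative assumptions; only the BuildStrategy/CompiledProgram lines come from the commit message, and sync_batch_norm only matters when the compiled program actually runs on more than one device.

```python
# Sketch of enabling synchronized batch norm across devices with the fluid 1.x API.
# The toy network and optimizer are assumptions, not part of the commit.
import paddle.fluid as fluid

image = fluid.layers.data(name='image', shape=[3, 32, 32], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
conv = fluid.layers.conv2d(input=image, num_filters=16, filter_size=3)
bn = fluid.layers.batch_norm(input=conv, act='relu')   # becomes sync batch norm at runtime
logits = fluid.layers.fc(input=bn, size=10, act='softmax')
loss = fluid.layers.cross_entropy(input=logits, label=label)
loss_mean = fluid.layers.mean(loss)
fluid.optimizer.SGD(learning_rate=0.01).minimize(loss_mean)

tp = fluid.default_main_program()                      # the program to compile
build_strategy = fluid.BuildStrategy()
build_strategy.sync_batch_norm = True                  # average BN statistics across devices
binary = fluid.compiler.CompiledProgram(tp).with_data_parallel(
    loss_name=loss_mean.name, build_strategy=build_strategy)
```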

- 14 March 2019 (1 commit)

Submitted by Wojciech Uss

* Add cpu_quantize_squash_pass for C-API quantization. test=develop
* Add a cpu_quantize_squash_pass test.
* Fix lint: add includes for memory, unordered_map and unordered_set. test=develop
* Lint fix 2.
* Fixes. test=develop
* Refactored. test=develop
* Fix the Windows CI. test=develop

- 13 March 2019 (1 commit)

Submitted by luotao1

test=develop

- 26 February 2019 (1 commit)

Submitted by Krzysztof Binias

test=develop

- 22 February 2019 (1 commit)

Submitted by Michał Gallus

* MKL-DNN: add a test for the conv bias fuse pass. test=develop
* Remove the const cast from the conv bias pass test.
* Add a conv-with-bias test case for the conv+bias fuse UT. test=develop

- 31 January 2019 (1 commit)

Submitted by Yan Chunwei

- 29 January 2019 (1 commit)

Submitted by Krzysztof Binias

test=develop

- 21 January 2019 (1 commit)

Submitted by Dun

* mem opt
* test=develop (repeated seven times)
* Refine the code. test=develop (repeated four times)
* Refine with cub. test=develop
* Fix the mkldnn test and remove comments. test=develop
* Polish the code. test=develop
* Add an only_forward test. test=develop

- 14 January 2019 (1 commit)

Submitted by tensor-tang

- 13 January 2019 (1 commit)

Submitted by tensor-tang

- 11 January 2019 (1 commit)

Submitted by Zhaolong Xing

- 10 January 2019 (1 commit)

Submitted by tensor-tang

test=develop

- 08 January 2019 (1 commit)

Submitted by tensor-tang

test=develop

- 07 January 2019 (1 commit)

Submitted by minqiyang

test=develop

- 25 December 2018 (1 commit)

Submitted by nhzlx

Fix the conv+elementwise fuse bug.

- 16 December 2018 (1 commit)

Submitted by nhzlx

test=develop

- 14 December 2018 (1 commit)

Submitted by Yan Chunwei

- 07 December 2018 (1 commit)

Submitted by Yihua Xu

test=develop

- 03 December 2018 (1 commit)

Submitted by Yihua Xu

(test=develop)

- 15 November 2018 (1 commit)

Submitted by Sylwester Fraczek

* Add is_test to pooling and activations; add prop_kind support for activation, conv and pooling layers; add a pass that sets is_test to true; add a transpiler version of the is_test pass (a small layer-level sketch follows this entry). test=develop
* Patch the test and the pass. test=develop
* Add the pass to analyzer.h. test=develop
* Add the is_test attr description, and apply the pass only for MKL-DNN, in: activation_op.cc, batch_norm_op.cc, conv_op.cc, dropout_op.cc, lrn_op.cc, pool_op.cc, sequence_pool_op.cc, softmax_op.cc.
* Fix is_test handling for activation, pool and conv.
* Change the description of is_test for all layers again.
* Remove GetAttr(use_mkldnn) from the pass.
* Rename correct_mkldnn_test_phase to is_test and remove the dependency on MKLDNN. test=develop
* Review fix: magic number.
* Merge two if(..)s into one.
* Check is_test once and pass the MKL-DNN forward prop kind.
* Dereference the shared_ptr with * (without get()). test=develop
* Add is_test_pass back. test=develop
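
For reference, the is_test attribute this pass sets is also visible at the Python layer level. The sketch below is illustrative (shapes and layer choices are assumptions); in practice the pass flips the attribute on the inference graph so kernels can take the forward-only path, rather than relying on these arguments being set by hand.

```python
# Sketch of layers that carry an is_test attribute; is_test_pass sets it to true
# on an inference graph so kernels pick the forward-only (inference) code path.
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[64], dtype='float32')
h = fluid.layers.fc(input=x, size=64, act='relu')
h = fluid.layers.dropout(h, dropout_prob=0.5, is_test=True)   # no random masking at inference
out = fluid.layers.batch_norm(input=h, is_test=True)          # use global (moving) statistics
```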

- 14 November 2018 (1 commit)

Submitted by Yan Chunwei