1. 24 Oct 2019 · 1 commit
  2. 13 Oct 2019 · 1 commit
    • Add Multihead matmul fuse pass (#20167) · b8333ede
      Committed by zhaoyuchen2018
      * Add multihead fuse pass for Ernie optimization
      
      * Refine softmax
      
      test=develop
      
      * Refine cuda kernel
      
      * Refine cuda version
      
      * Refine cmake
      
      test=develop
      
      * Refine header file

      * Refine test case and pass
      * Refine comments
      b8333ede
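
      A minimal numpy sketch of the algebra this fuse relies on (shapes are assumed;
      the real pass emits a fused CUDA kernel): the Q/K/V projections of multihead
      attention share one input, so their weight matrices can be concatenated and
      computed as a single matmul.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal((4, 64))             # (tokens, hidden)
      w_q, w_k, w_v = (rng.standard_normal((64, 64)) for _ in range(3))

      # Unfused: three separate matmuls over the same input.
      q, k, v = x @ w_q, x @ w_k, x @ w_v

      # Fused: one matmul against the concatenated weights, then a split.
      qkv = x @ np.concatenate([w_q, w_k, w_v], axis=1)
      q2, k2, v2 = np.split(qkv, 3, axis=1)

      assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)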
  3. 12 Oct 2019 · 1 commit
  4. 27 Sep 2019 · 1 commit
  5. 19 Sep 2019 · 1 commit
    • Add a pass to fuse fc+elementwise_add+layernorm (#19776) · 3cd985a6
      Committed by Yiqun Liu
      * Add fc_elementwise_layernorm_fuse pass and unittest.
      
      * Add fused_fc_elementwise_layernorm op and its GPU kernel.
      test=develop
      
      * Apply fc_elementwise_layernorm_fuse_pass to GPU inference.
      
      * Add the setting of attrs in the definition of binary_op.
      test=develop
      
      * Add comment.
      
      * Implement the unittest.
      test=develop
      
      * Change the unittest name of layer_norm.
      test=develop
      3cd985a6
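
      A minimal numpy sketch, with assumed shapes, of the three-op subgraph this
      pass matches (fc -> elementwise_add -> layer_norm); the fused op must
      reproduce exactly this result.

      import numpy as np

      def layer_norm(x, gamma, beta, eps=1e-5):
          mean = x.mean(axis=-1, keepdims=True)
          var = x.var(axis=-1, keepdims=True)
          return gamma * (x - mean) / np.sqrt(var + eps) + beta

      rng = np.random.default_rng(0)
      x = rng.standard_normal((8, 32))
      w, b = rng.standard_normal((32, 16)), rng.standard_normal(16)
      residual = rng.standard_normal((8, 16))
      gamma, beta = np.ones(16), np.zeros(16)

      # fc (mul + bias), then elementwise_add of a residual, then layer_norm.
      out = layer_norm((x @ w + b) + residual, gamma, beta)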
  6. 16 Sep 2019 · 1 commit
    • Enhance fc_fuse_pass to enable fusing relu into fc_op (#19733) · c67c8758
      Committed by Yiqun Liu
      * Refine the codes related to fc op.
      
      * Add GPU implementation for fc functor.
      
      * Apply fc_fuse_pass in GPU inference.
      test=develop
      
      * Change the cmake for fc op.
      
      * Change PADDLE_ENFORCE to PADDLE_ENFORCE_EQ.
      
      * Add an attribute to set the activation type in fc_op.
      
      * Enhance the unittest of fc_op.
      test=develop
      
      * Remove the declaration of FCOpGrad back to the header file.
      test=develop
      
      * Set default value for newly added arguments in test_fc_op.
      test=develop
      
      * Enhance fc_fuse_pass to enable fusing relu.
      
      * Allow printing the shapes of var_desc in the graph.
      test=develop
      
      * Enhance fc_fuse_pass_tester.
      
      * Remove the use of PADDLE_ENFORCE.
      test=develop
      
      * Correct the number of ops after fusing.
      test=develop
      
      * Fix a typo.
      test=develop
      
      * Set activation_type to null when there is no relu in fc.
      test=develop
      
      * Refine fc_fuse_pass's codes.
      
      * Enable setting the shape of a tensor.
      
      * Refine repeated_fc_relu_pass and add unittest.
      test=develop
      c67c8758
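
      A minimal numpy sketch of the subgraph fc_fuse_pass collapses: mul +
      elementwise_add, plus an optional trailing relu that this PR folds into the
      fc via the new activation_type attribute (shapes assumed).

      import numpy as np

      def fc(x, w, b, activation_type=""):
          out = x @ w + b                    # mul + elementwise_add
          if activation_type == "relu":
              out = np.maximum(out, 0.0)     # relu applied inside the fused op
          return out

      rng = np.random.default_rng(0)
      x, w, b = rng.standard_normal((4, 8)), rng.standard_normal((8, 3)), rng.standard_normal(3)

      unfused = np.maximum(x @ w + b, 0.0)   # mul -> elementwise_add -> relu
      assert np.allclose(fc(x, w, b, activation_type="relu"), unfused)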
  7. 06 Sep 2019 · 1 commit
  8. 03 Sep 2019 · 1 commit
    • Add a pass to enable the use of cuDNN (#19346) · c5548178
      Committed by Yiqun Liu
      * Add an interface to enable cuDNN for inference.
      
      * Add cudnn_placement_pass.
      test=develop
      
      * Set the default value of cudnn_enabled_op_types to null.
      test=develop
      
      * Write the common basic class, placement_pass_base, to refine the codes.
      test=develop
      
      * Call EnableCUDNN in unittest.
      test=develop
      
      * Refine cudnn_placement_pass tester.
      
      * Enable the testing of cudnn_placement_pass in inference's unittest.
      test=develop
      
      * Add the check of op kernels.
      test=develop
      c5548178
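
      A hypothetical sketch of the placement-pass idea described above: walk the
      graph and flip a per-op flag for ops whose kernels support the backend. The
      op representation and the whitelist here are illustrative, not Paddle's real
      classes.

      CUDNN_SUPPORTED = {"conv2d", "pool2d", "softmax"}   # assumed whitelist

      def cudnn_placement(ops, cudnn_enabled_op_types=None):
          # A null/empty op-type list means "all supported ops", mirroring the
          # default described in the commit message.
          targets = set(cudnn_enabled_op_types) if cudnn_enabled_op_types else CUDNN_SUPPORTED
          for op in ops:
              if op["type"] in targets and op["type"] in CUDNN_SUPPORTED:
                  op["attrs"]["use_cudnn"] = True    # kernel check elided

      ops = [{"type": "conv2d", "attrs": {}}, {"type": "relu", "attrs": {}}]
      cudnn_placement(ops)
      assert ops[0]["attrs"].get("use_cudnn") and "use_cudnn" not in ops[1]["attrs"]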
  9. 30 Aug 2019 · 1 commit
    • Add a pass to replace dropout_op with scale_op when is_test is true (#19297) · fcec365d
      Committed by Yiqun Liu
      * Add simplify_with_basic_ops_pass to replace dropout_op with scale_op when is_test is true.
      test=develop
      
      * Delete dropout_op directly when upscale_in_train is true.
      test=develop
      
      * Improve the debug string, adding the print of op_desc information.
      
      * Fix the case when dropout's input x is reused as the next op's output.
      
      * Add the pass to inference.
      test=develop
      
      * Change the log level.
      test=develop
      
      * Add unittest for inplace case.
      
      * Add comment to explain the pass.
      
      * Apply the pass for CPU inference.
      test=develop
      
      * Fix the typo.
      test=develop
      
      * Add the check of AttrType.
      test=develop
      fcec365d
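
      A numpy sketch of why the rewrite is sound, assuming Paddle's two dropout
      implementation modes: with "downgrade_in_infer", test-time dropout is a plain
      multiplication and becomes a scale op; with "upscale_in_train" it is the
      identity and the op is deleted outright.

      import numpy as np

      p = 0.3
      x = np.random.default_rng(0).standard_normal((2, 4))

      def dropout_infer(x, p, mode):
          # Inference-time (is_test=true) semantics of dropout in each mode.
          return x * (1.0 - p) if mode == "downgrade_in_infer" else x

      # downgrade_in_infer: replace dropout with scale(x, scale=1 - p).
      assert np.allclose(dropout_infer(x, p, "downgrade_in_infer"), x * (1.0 - p))
      # upscale_in_train: dropout is the identity at test time; remove the op.
      assert np.allclose(dropout_infer(x, p, "upscale_in_train"), x)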
  10. 19 Aug 2019 · 1 commit
  11. 15 Aug 2019 · 1 commit
  12. 02 Aug 2019 · 1 commit
  13. 23 Jul 2019 · 1 commit
  14. 11 Jun 2019 · 1 commit
    • Update the Anakin interfaces for content-dnn and MLU (#17890) · bce259e5
      Committed by 石晓伟
      * update anakin-engine interfaces for content-dnn
      
      test=develop
      
      * support only-gpu mode of Anakin
      
      modify eltwise parse
      
      test=develop
      
      * Modifications for thread safety
      
      test=develop
      
      * Integrated template instance
      
      test=develop
      
      * increase template parameters
      
      test=develop
      
      * support MLU predictor
      
      test=develop
      
      * update anakin cmake files
      
      test=develop
      
      * update TargetWrapper::set_device
      
      * update the initialization of anakin subgraph
      
      test=develop
      
      * use the default constructor of base class
      
      test=develop
      bce259e5
  15. 29 May 2019 · 1 commit
  16. 25 May 2019 · 1 commit
    • TRT: Support setting the dynamic range in int8 mode. (#17524) · 61221ebc
      Committed by Zhaolong Xing
      * 1. Align fluid int8 training and TRT int8 prediction:
      TRT int8 predict init,
      op converter.
      
      * 2. Align fluid int8 training and TRT int8 inference:
      enhance the quant-dequant fuse pass;
      enhance the op converter, TRT engine, TRT engine op, and TRT subgraph pass.
      
      * 3. Add delete_quant_dequant_pass for TRT
      
      test=develop
      
      * 4. Add the missing file
      test=develop
      
      * 5. I modified the C++ interface but forgot to modify the pybind code;
      fix the IS_TRT_VERSION_GE bug and the elementwise op converter.
      test=develop
      61221ebc
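
      A hedged numpy sketch of the alignment idea, assuming symmetric per-tensor
      fake quantization: the scale learned by fluid's quant/dequant ops during int8
      training determines the dynamic range handed to TensorRT, so training and
      inference quantize with the same thresholds. Paddle's exact scale bookkeeping
      may differ.

      import numpy as np

      def fake_quant_dequant(x, scale, bits=8):
          # Training-side simulated quantization (symmetric, per-tensor).
          qmax = 2 ** (bits - 1) - 1                        # 127 for int8
          q = np.clip(np.round(x / scale * qmax), -qmax, qmax)
          return q * scale / qmax

      scale = 2.5                                           # abs-max learned in training
      x = np.random.default_rng(0).uniform(-3, 3, 16)
      simulated = fake_quant_dequant(x, scale)

      # After the quant-dequant fuse pass strips these ops, the tensor's TRT
      # dynamic range would be set from the same scale, i.e. [-scale, scale].
      dynamic_range = scale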
  17. 24 May 2019 · 2 commits
    • [MKL-DNN] Add Fully Connected Op for inference only (#15226) · 0c39b97b
      Committed by Michał Gallus
      * fuse mul and elementwise add to fc
      
      * Reimplement the FC forward operator
      
      * Fix FC MKLDNN integration by transposing weights
      
      * Add FC MKLDNN Pass
      
      test=develop
      
      * FC MKLDNN Pass: change memcpy to std::copy
      
      * Fix MKLDNN FC handling of mismatched input and weight dims
      
      * Lower tolerance for MKL-DNN in resnet50 test
      
      test=develop
      
      * Adjust FC to support MKLDNN Op placement
      
      test=develop
      
      * Adjust Placement Op to set use_mkldnn attribute for graph
      
      test=develop
      
      * MKLDNN FC: fix weights format so that gemm version is called
      
      test=develop
      
      * FC MKLDNN: Remove tolerance decrease from tester_helper
      
      * FC MKL-DNN: Refactor the code, change input reorder to weight reorder
      
      * MKL-DNN FC: Introduce operator caching
      
      test=develop
      
      * FC MKL-DNN: Fix the tensor type in ExpectedKernelType
      
      test=develop
      
      * FC MKL-DNN: fix style changes
      
      test=develop
      
      * FC MKL-DNN: fall back to native on unsupported dim sizes
      
      test=develop
      
      * FC MKLDNN: fix CMake paths
      
      test=develop
      
      * FC MKLDNN: Refine placement pass graph mkldnn attribute
      
      test=develop
      
      * Fix Transpiler error for fuse_conv_eltwise
      
      test=develop
      
      * Fix missing STL includes in files
      
      test=develop
      
      * FC MKL-DNN: Enable new output size computation
      
      Also, refine pass to comply with newest interface.
      test=develop
      
      * FC MKL-DNN: enable only when fc_mkldnn_pass is enabled
      
      * FC MKL-DNN: Allow Weights to use oi or io format
      
      * FC MKL-DNN: Adjust UT to work with correct dims
      
      test=develop
      
      * Enable MKL DEBUG for resnet50 analyzer
      
      test=develop
      
      * FC MKL-DNN: Improve Hashing function
      
      test=develop
      
      * FC MKL-DNN: Fix shape for fc weights in transpiler
      
      * FC MKL-DNN: Update input pointer in re-used fc primitive
      
      * Add log for not handling fc fuse for unsupported dims
      
      test=develop
      
      * FC MKL-DNN: Move transpose from pass to Op Kernel
      
      test=develop
      
      * FC MKL-DNN: Disable transpose in unit test
      
      test=develop
      
      * FC MKL-DNN: Remove fc_mkldnn_pass from default list
      
      * Correct Flag for fake data analyzer tests
      
      test=develop
      
      * FC MKL-DNN: Add comment about fc mkldnn pass disablement
      
      test=develop
      
      * FC MKL-DNN: Disable fc in int8 tests
      
      test=develop
      0c39b97b
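
      A minimal numpy sketch of two details from this commit, with assumed shapes:
      mul + elementwise_add collapse into one fc, and weights may arrive in "io"
      (in x out) or "oi" (out x in) format, so "oi" weights are reordered before
      the gemm.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal((4, 8))
      w_io = rng.standard_normal((8, 3))    # io: (in_features, out_features)
      w_oi = w_io.T.copy()                  # oi: (out_features, in_features)
      b = rng.standard_normal(3)

      def fc(x, w, b, fmt="io"):
          w = w.T if fmt == "oi" else w     # reorder oi -> io for the gemm
          return x @ w + b                  # fused mul + elementwise_add

      assert np.allclose(fc(x, w_io, b), fc(x, w_oi, b, fmt="oi"))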
    • Conv concat relu quantization (#17466) · 5b2a3c4b
      Committed by Sylwester Fraczek
      * add conv_concat_relu fuse
      
      test=develop
      
      * add test code
      
      test=develop
      
      * added missing include with unordered_map
      
      test=develop
      
      * review fixes for wojtuss
      
      test=develop
      
      * remove 'should (not) be fused' comment statements
      
      one of them was invalid anyway
      
      test=develop
      5b2a3c4b
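
      The fuse is legal because relu commutes with concat, so the trailing relu can
      be folded into each preceding conv. A numpy check of the identity (shapes
      assumed):

      import numpy as np

      rng = np.random.default_rng(0)
      a, b = rng.standard_normal((2, 3)), rng.standard_normal((2, 5))
      relu = lambda t: np.maximum(t, 0.0)

      # relu(concat(a, b)) == concat(relu(a), relu(b))
      lhs = relu(np.concatenate([a, b], axis=1))
      rhs = np.concatenate([relu(a), relu(b)], axis=1)
      assert np.allclose(lhs, rhs)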
  18. 22 May 2019 · 1 commit
    • Enable the convolution/relu6 (bounded_relu) fusion for FP32 on the Intel platform. (#17130) · 2281ebf0
      Committed by guomingz
      * Relu6 is the bottleneck op for MobileNet-v2. Since MKL-DNN supports the conv/relu6 fusion, we implement it as a graph pass. Int8 support for this fusion will only land in MKL-DNN v0.20, so this PR focuses on the FP32 optimization.
      
      The table below shows the benchmark (FPS) measured on SKX-8180 (28 cores):

      Batch size | with fusion | without fusion
      ---------- | ----------- | --------------
      1          | 214.7       | 53.4
      50         | 1219.727    | 137.280
      
      test=develop
      
      * Fix the format issue
      
      test=develop
      
      * Add the missing nolint comments.
      
      test=develop
      
      * Fix the typos.
      
      test=develop
      
      * Register the conv_brelu_mkldnn_fuse_pass for the MKLDNN engine.
      
      test=develop
      
      * Adjust the indentation.
      
      test=develop
      
      * Add the test_conv_brelu_mkldnn_fuse_pass case.
      
      test=develop
      
      * Slightly update the code per Baidu's review comments:
      embed the parameter definitions into the code,
      which makes it easier to understand.
      
      test=develop
      2281ebf0
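
      A numpy sketch of the fused activation, assuming the usual relu6 (bounded
      relu) definition: the conv output is clamped to [0, 6] as an MKL-DNN post-op
      instead of running a separate relu6 op.

      import numpy as np

      def relu6(x, threshold=6.0):
          return np.minimum(np.maximum(x, 0.0), threshold)

      conv_out = np.array([-2.0, 0.5, 3.0, 7.5])   # stand-in for a conv result
      assert np.allclose(relu6(conv_out), [0.0, 0.5, 3.0, 6.0])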
  19. 20 May 2019 · 1 commit
  20. 08 May 2019 · 1 commit
  21. 07 May 2019 · 1 commit
    • Cherry-pick benchmark-related changes from release/1.4 (#17156) · a72dbe9a
      Committed by 石晓伟
      * cherry-pick commit from 88770542
      
      * cherry-pick commit from 3f0b97df
      
      * cherry-pick from 16691:Anakin subgraph support yolo_v3 and faster-rcnn
      
      (cherry picked from commit 8643dbc2)
      
      * Cherry-Pick from 16662 : Anakin subgraph cpu support
      
      (cherry picked from commit 7ad182e1)
      
      * Cherry-pick from 1662, 16797..: add anakin int8 support
      
      (cherry picked from commit e14ab180)
      
      * Cherry-pick from 16813 : change singleton to graph RegistBlock
      test=release/1.4
      
      (cherry picked from commit 4b9fa423)
      
      * Cherry Pick : 16837 Support ShuffleNet and MobileNet-v2
      
      Support ShuffleNet and MobileNet-v2, test=release/1.4
      
      (cherry picked from commit a6fb066f)
      
      * Cherry-pick : anakin subgraph add opt config layout argument #16846
      test=release/1.4
      
      (cherry picked from commit 8121b3ec)
      
      * 1. add shuffle_channel_detect
      
      (cherry picked from commit 6efdea89)
      
      * update shuffle_channel op convert, test=release/1.4
      
      (cherry picked from commit e4726a06)
      
      * Modify symbol export rules
      
      test=develop
      a72dbe9a
  22. 28 Mar 2019 · 1 commit
    • Anakin SSD support · d065b5bf
      Committed by nhzlx
      refine trt first run
      add quant dequant fuse pass
      omit simplify_anakin_priorbox_detection template
      omit transpose_flatten_concat_fuse template
      test=develop
      d065b5bf
  23. 25 Mar 2019 · 1 commit
  24. 21 Mar 2019 · 1 commit
  25. 20 Mar 2019 · 3 commits
  26. 19 Mar 2019 · 4 commits
  27. 18 Mar 2019 · 1 commit
    • Add cpu_quantize_pass for C-API quantization (#16127) · 2579ade4
      Committed by Wojciech Uss
      * Add cpu_quantize_pass for C-API quantization
      
      test=develop
      
      * add cpu_quantize_pass test
      
      * fix lint: add includes for memory, unordered_map and unordered_set
      
      test=develop
      
      * fuse_relu 1
      
      test=develop
      
      * tuned 2 without squash
      
      * fixes
      
      test=develop
      
      * remove unused vars
      
      test=develop
      
      * refactored
      
      test=develop
      
      * fix lint c-style cast -> C++ style cast
      
      test=develop
      
      * remove QuantMax and c style casts
      
      test=develop
      
      * last usage of QuantMax removed
      
      test=develop
      
      * Fix Analysis Predictor UT
      
      Check if memory_optimize_pass has already been added
      to the analysis config before adding a new one, so
      that it is not added multiple times.
      test=develop
      
      * change map to unordered_map
      
      fix the forgotten part of cpu_quantize_pass_tester.cc
      
      test=develop
      
      * removed quantized attribute
      
      * fixed cpu_quantize_pass_tester and op attr comments
      
      test=develop
      
      * removed redundant line
      
      test=develop
      
      * removed gmock
      
      test=develop
      
      * fix after merge
      2579ade4
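
      A hedged numpy sketch of the quantize/dequantize pair this pass inserts
      around int8-capable ops; the scale rule shown (symmetric abs-max) is
      illustrative, not necessarily Paddle's exact formula.

      import numpy as np

      def quantize(x, scale, bits=8):
          qmax = 2 ** (bits - 1) - 1
          return np.clip(np.round(x / scale * qmax), -qmax, qmax).astype(np.int8)

      def dequantize(q, scale, bits=8):
          qmax = 2 ** (bits - 1) - 1
          return q.astype(np.float32) * scale / qmax

      x = np.random.default_rng(0).uniform(-1, 1, 8).astype(np.float32)
      scale = float(np.abs(x).max())
      roundtrip = dequantize(quantize(x, scale), scale)
      assert np.max(np.abs(roundtrip - x)) <= scale / 254 + 1e-6   # half a quant step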
  28. 16 Mar 2019 · 1 commit
  29. 15 Mar 2019 · 1 commit
    • Support sync batch norm. (#16121) · 8ad672a2
      Committed by qingqing01
      * Support Sync Batch Norm.
      * Note: do not enable it on a single device.
      
      Usage:
      
      build_strategy = fluid.BuildStrategy()
      build_strategy.sync_batch_norm = True
      binary = fluid.compiler.CompiledProgram(tp).with_data_parallel(
              loss_name=loss_mean.name,
              build_strategy=build_strategy)
      8ad672a2
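
      A numpy sketch of what the synchronization buys, assuming a two-device data
      split: every device normalizes with the same global statistics, obtained by
      all-reducing each device's count, sum, and sum of squares.

      import numpy as np

      rng = np.random.default_rng(0)
      dev0, dev1 = rng.standard_normal((8, 4)), rng.standard_normal((8, 4))

      # All-reduce of (count, sum, sum of squares) across devices.
      n = dev0.shape[0] + dev1.shape[0]
      s = dev0.sum(axis=0) + dev1.sum(axis=0)
      ss = (dev0 ** 2).sum(axis=0) + (dev1 ** 2).sum(axis=0)

      mean = s / n
      var = ss / n - mean ** 2          # E[x^2] - E[x]^2

      full = np.concatenate([dev0, dev1])
      assert np.allclose(mean, full.mean(axis=0)) and np.allclose(var, full.var(axis=0))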
  30. 14 Mar 2019 · 1 commit
    • Add cpu_quantize_squash_pass for C-API quantization (#16128) · b9252f3d
      Committed by Wojciech Uss
      * Add cpu_quantize_squash_pass for C-API quantization
      
      test=develop
      
      * add cpu_quantize_squash_pass test
      
      * fix lint: add includes for memory, unordered_map and unordered_set
      
      test=develop
      
      * lint fix 2
      
      * fixes
      
      test=develop
      
      * refactored
      
      test=develop
      
      * fix windows ci
      
      test=develop
      b9252f3d
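
      A hypothetical sketch of the squash rule: a dequantize immediately followed
      by a quantize with the same scale is a no-op on the int8 tensor, so the pair
      can be removed and the neighboring int8 ops connected directly. The op
      representation is illustrative.

      def squash(ops):
          out, i = [], 0
          while i < len(ops):
              a = ops[i]
              b = ops[i + 1] if i + 1 < len(ops) else None
              if b and a[0] == "dequantize" and b[0] == "quantize" and a[1] == b[1]:
                  i += 2                 # drop the redundant dequantize/quantize pair
              else:
                  out.append(a)
                  i += 1
          return out

      chain = [("conv2d_int8", None), ("dequantize", 0.5),
               ("quantize", 0.5), ("conv2d_int8", None)]
      assert squash(chain) == [("conv2d_int8", None), ("conv2d_int8", None)]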
  31. 13 Mar 2019 · 1 commit
  32. 26 Feb 2019 · 1 commit
  33. 22 Feb 2019 · 1 commit
  34. 31 Jan 2019 · 1 commit