1. 16 Apr 2019, 1 commit
  2. 27 Mar 2019, 1 commit
    • Memory optimize (#16410) · 8d22bc17
      Committed by liuwei1031
      * fix cdn issue, test=develop
      
      * fix memory optimize bugs, test=develop
      
      * fix memory optimize bugs, test=develop
      
      * remove add/sub_2 op, test=develop
      
      * disable memory_optimize by default, test=develop
      
      * disable inplace activation in python, test=develop
      
      * fix unittests, test=develop
      
      * fix unittests, test=develop
      
      * bug-fix, test=develop
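      Since this change turns memory_optimize off by default, a minimal sketch of opting back in through BuildStrategy might look as follows. This is a hedged example: the memory_optimize and enable_inplace flags and the "loss" variable name are assumptions about the fluid API of this release, not part of the commit.
      
      import paddle.fluid as fluid
      
      # Hedged sketch: re-enable the optimizations this commit turns off by default.
      build_strategy = fluid.BuildStrategy()
      build_strategy.memory_optimize = True   # cross-op memory reuse
      build_strategy.enable_inplace = True    # in-place execution where safe
      
      compiled = fluid.compiler.CompiledProgram(
          fluid.default_main_program()).with_data_parallel(
              loss_name="loss",  # placeholder: name of your loss variable
              build_strategy=build_strategy)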
  3. 26 Mar 2019, 2 commits
  4. 15 Mar 2019, 1 commit
    • Support sync batch norm. (#16121) · 8ad672a2
      Committed by qingqing01
      * Support Sync Batch Norm.
      * Note: do not enable it when running on a single device.
      
      Usage:
      
      import paddle.fluid as fluid
      
      # Enable synchronized batch norm across devices via the build strategy;
      # tp is the training program and loss_mean its loss variable.
      build_strategy = fluid.BuildStrategy()
      build_strategy.sync_batch_norm = True
      binary = fluid.compiler.CompiledProgram(tp).with_data_parallel(
              loss_name=loss_mean.name,
              build_strategy=build_strategy)
  5. 31 Jan 2019, 1 commit
  6. 21 Jan 2019, 1 commit
  7. 12 Dec 2018, 1 commit
  8. 29 Nov 2018, 1 commit
  9. 15 Nov 2018, 1 commit
    • add mkldnn prop_kind phase for inference-only case to pooling and activations (#14278) · 8a1eeec5
      Committed by Sylwester Fraczek
      * add is_test to pooling and activations
      
      add prop_kind support for activation, conv and pooling layers
      
      add a pass that sets is_test to true
      
      add transpiler version of is_test pass
      
      test=develop
      
      * patch test and pass
      
      test=develop
      
      * add pass to analyzer.h
      
      test=develop
      
      * add is_test attr description & pass only on mkldnn
      
      in:
      activation_op.cc
      batch_norm_op.cc
      conv_op.cc
      dropout_op.cc
      lrn_op.cc
      pool_op.cc
      sequence_pool_op.cc
      softmax_op.cc
      
      * fix is_test handling for activation pool and conv
      
      * change description of is_test for all layers again
      
      * remove GetAttr(use_mkldnn) from pass
      
      * rename correct_mkldnn_test_phase to is_test
      
      and remove dependency on MKLDNN
      test=develop
      
      * review fix magic number
      
      * two if(..)s into one
      
      * Check is_test once and pass mkldnn forward prop kind
      
      * dereference shared_ptr with * (without get())
      
      test=develop
      
      * add is_test_pass back
      
      test=develop
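      The transpiler-side counterpart of the is_test pass amounts to flipping the is_test attribute on every op that exposes it. Below is a minimal, hypothetical Python sketch of that idea, not the exact pass added here; the has_attr/_set_attr helpers are assumptions about the fluid Operator API.
      
      import paddle.fluid as fluid
      
      def mark_program_for_inference(program):
          # Walk every op in every block and mark it for inference if it
          # exposes an is_test attribute (pooling, activations, dropout, ...).
          for block in program.blocks:
              for op in block.ops:
                  if op.has_attr("is_test"):
                      op._set_attr("is_test", True)
          return program
      
      inference_program = mark_program_for_inference(fluid.default_main_program())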
  10. 09 Nov 2018, 1 commit
    • Add InferVarType for some op (#14201) · 6c6e6385
      Committed by chengduo
      * add_infer_var_type
      test=develop
      
      * InferVarTypeHelper -> VarTypeInferenceHelper
      test=develop
      
      * PassInputTypeAndDTypeOnOutput
       test=develop
      
      * follow comment
      test=develop
  11. 22 Oct 2018, 1 commit
  12. 23 Aug 2018, 2 commits
  13. 22 Aug 2018, 1 commit
  14. 10 Jul 2018, 1 commit
  15. 27 Jun 2018, 1 commit
    • bnorm+relu fuse for mkldnn (inference) (#11434) · 9a15c923
      Committed by pzelazko-intel
      * bnorm+relu fuse for mkldnn
      
      * separate fuse_relu function
      
      * bug fix
      
      * proper while range in inference_transpiler
      
      * description fix
      
      * review fix
      
      * review fix
      
      * unit test for fwd batch norm+relu MKLDNN fuse
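      A hedged sketch of how the fused path is reached from Python, assuming the InferenceTranspiler API of this era; the fusion only applies to test-mode programs running with MKLDNN kernels.
      
      import paddle.fluid as fluid
      
      # Hedged sketch: run the inference transpiler over a test-mode program so
      # batch_norm followed by relu can be fused when MKLDNN is enabled.
      place = fluid.CPUPlace()
      inference_program = fluid.default_main_program().clone(for_test=True)
      t = fluid.InferenceTranspiler()
      t.transpile(inference_program, place)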
  16. 20 Jun 2018, 1 commit
  17. 11 Jun 2018, 2 commits
  18. 07 Jun 2018, 1 commit
    • Mkldnn layout (#11040) · 3ff9ba0e
      Committed by mozga-intel
      * Add MKLDNN layout support in Paddle
      
      Add MKLDNN layout in Paddle so that an MKLDNN-friendly memory layout
      can be used in MKLDNN-enabled OP kernels. Before this commit, NCHW
      was hardcoded in all MKLDNN op kernels; as a result, a non-optimized
      execution path was selected inside the MKLDNN primitives, which hurt
      performance.
      Besides the framework change, three MKLDNN OP kernels were updated
      to use the new MKLDNN layout: conv, pool2d, and batch_norm. Other
      MKLDNN OP kernels need to be updated in a similar way to achieve the
      best performance.
      
      * Add MKLDNN layout support in activation OP
      
      * Don't populate layout from input to output when kMKLDNN in
      
      * Refine pool mkldnn op kernel
      
      * MKLDNN layout
      
      * Remove the inheritance from tensor file
      
      * MKLDNN layout: refactoring
      
      * Remove additional #define to register new operator
      
      * Prepare mkldnn tests to work with layout
  19. 08 May 2018, 1 commit
    • Clean OpProtoAndCheckerMaker · 0e78cb69
      Committed by Yu Yang
      Do not use ctor
      
      * Reduce lines of code.
      * We can use virtual functions for Maker now.
      * The implementation does not care what the maker holds, so it is
      easier to refactor later.
  20. 03 May 2018, 1 commit
    • MKLDNN implementation of batch normalization (#9904) · 4a497b82
      Committed by Tomasz Patejko
      * Initial implementation of forward pass for MKLDNN batch norm
      
      * Added attributes for MKLDNN batch norm
      
      * MKLDNN batch norm forward pass passes unittest. Started working on backward
      
      * Backward pass for MKLDNN batch norm added
      
      * MKLDNN batch norm: scoring added to forward pass
      
      * MKLDNN batch norm: bias as input added; handling AnyLayout when kernel is looked up
      
      * MKLDNN batch norm: python unit tests added; mkldnn tests removed
      
      * MKLDNN batch norm: changes required by cpplint
      
      * MKLDNN batch norm: refactoring the operator
      
      * MKLDNN batch norm: saved variance inversed in backward pass for correct execution of MKLDNN unit tests
      
      * MKLDNN batch norm: refactoring, function for static/const cast to void* added
      
      * MKLDNN batch norm: remove AnyLayout from batch norm
      
      * MKLDNN batch norm: only NCHW format is supported. Unittests refactored
      
      * MKLDNN batch norm: use_mkldnn added to attributes
      
      * MKLDNN batch norm: AnyLayout removed from unittest
      
      * MKLDNN batch norm: added CUDNN defines to batch norm
      
      * MKLDNN batch norm: undefined data_format variable corrected
      
      * MKLDNN batch norm: use_cudnn added, use of setUp method for configuring attributes
      
      * MKLDNN batch norm: added use_cudnn attribute to batch norm operator
      
      * MKLDNN batch norm: correcting batch norm unit tests for MKLDNN
      
      * MKLDNN batch norm: MKLDNN tests moved to another file; reverting changes for saved variance not being inverted
      
      * Change default layout to NCHW
      
      * MKLDNN batch norm: init_kernel_type method added to unit tests
      
      * MKLDNN batch norm: style changes
      
      * MKLDNN batch norm: unit tests refactored
      
      * MKLDNN batch norm: added use_mkldnn attribute to batch norm python interface
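      A hedged illustration of the use_mkldnn attribute mentioned in the last items, toggled at the program-desc level. The layer shapes and the _set_attr helper are assumptions, and only NCHW is supported by this kernel.
      
      import paddle.fluid as fluid
      
      data = fluid.layers.data(name="image", shape=[3, 224, 224], dtype="float32")
      conv = fluid.layers.conv2d(input=data, num_filters=16, filter_size=3)
      bn = fluid.layers.batch_norm(input=conv, is_test=True)
      
      # Hedged sketch: ask the batch_norm op to pick its MKLDNN kernel.
      for op in fluid.default_main_program().global_block().ops:
          if op.type == "batch_norm":
              op._set_attr("use_mkldnn", True)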
  21. 02 May 2018, 1 commit
  22. 11 Apr 2018, 1 commit
  23. 21 Mar 2018, 1 commit
  24. 19 Mar 2018, 2 commits
  25. 12 Feb 2018, 1 commit
  26. 10 Feb 2018, 2 commits
  27. 08 Jan 2018, 1 commit
    • cpu gpu transform function (#7191) · 0f353ab4
      Committed by Qiao Longfei
      * add rename guard
      
      * add device_data_transform
      
      * add device_data_transform_test
      
      * modify GetExpectedKernelType
      
      * update operator.run
      
      * support test test_label_semantic_roles
      
      * optimize code
      
      * optimize code
      
      * rename GetActualKernelType to GetExpectedKernelType
      
      * fix chunk_eval_op and device_data_transform_test
      
      * add is_same_place to place
      
      * optimize code, refine rename_guard
      
      * refine rename guard, add GetKernelTypeForVar
      
      * optimize code
      
      * add some log
      
      * rename guard
      
      * use sub scope to create var
      
      * fix compile
      
      * add IsInitialized for Tensor
      
      * add VarIsTensor
      
      * fix op_registry_test
      
      * test
      
      * tmp disable priority
      
      * restore switch_kernel.md
      
      * code clean
  28. 04 Jan 2018, 1 commit
  29. 26 Dec 2017, 1 commit
  30. 25 Dec 2017, 1 commit
    • Impl kernel hint (#6883) · af0c4c45
      Committed by Qiao Longfei
      * init kernel hint
      
      * fix typo
      
      * rm unused code
      
      * add include in op_kernel.h
      
      * restore op_kernel since it will be moved to op_kernel_type
      
      * change force_cpu to use_cpu
      
      * fix compilation
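      The kernel hint lets an individual op pin its computation to CPU even when the rest of the program runs on GPU. A hedged example of how such a hint surfaces in the Python layers follows; the force_cpu spelling, rather than use_cpu, is what the public fill_constant API ended up with, so treat the exact name as an assumption relative to this commit.
      
      import paddle.fluid as fluid
      
      # Hedged sketch: keep this counter on CPU regardless of the executor place.
      step = fluid.layers.fill_constant(
          shape=[1], dtype="int64", value=0, force_cpu=True)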
  31. 22 Dec 2017, 1 commit
  32. 20 Dec 2017, 1 commit
  33. 12 Dec 2017, 1 commit
    • Refine device context (#6433) · 61ec0b95
      Committed by QI JUN
      The main fixes are as follows:
      
      - take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
      - remove `eigen_device` interface in base class  `DeviceContext`
      - remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
      - remove unused `platform::EigenDeviceConverter`
      - rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
      - rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
  34. 28 Nov 2017, 1 commit
  35. 08 Nov 2017, 1 commit