1. 07 Feb 2020, 1 commit
    • Enable the detection of subgraph composed of grad ops (#21223) · dcfb6038
      Committed by Yiqun Liu
      * Add the first implementation of fusion_group op #19621 (#3)
      
      * Add the dynamic load of nvrtc, and support runtime compiling of CUDA kernel using nvrtc.
      test=develop
      
      * Call CUDA driver api to launch the kernel compiled by nvrtc.
      test=develop
      
      * Disable for mac and windows.
      test=develop
      
      * Refine the code to support manually specified num_threads and workload_per_thread.
      test=develop
      
      * Refine the CUDA kernel to support large dims.
      test=develop
      
      * Add DeviceCodePool to manage all device codes.
      
      * Add the first implementation of fusion_group op.
      
      * Add unit-test for fusion_group op.
      
      * Add the check of result.
      
      * Add the check of nvrtc in unit-test.
      test=develop
      
      * Add comment to explain the inputs, outputs and features of fusion_group op.
      test=develop
      
      * Disable fusion_group op for mac and windows.
      test=develop
      
      * Make the compiling of device code return status instead of hanging up.
      test=develop
      
      * Add a check for the presence of the CUDA driver library, and do not core dump when a call to the CUDA driver API fails.
      
      * Unify fusion_group_op's input and output names.
      test=develop
      
      * Add the check of CUDA driver library in unittest.
      test=develop
      
      * Enable generating code for a given subgraph. #21126 (#4)
      
      * Enable generating code for a given subgraph.
      
      * Support sorting the subgraph.
      
      * Remove the rearrangement of expressions because we use the sorted subgraph directly.
      
      * Enable generating code for a subgraph which is composed of grad ops.
      
      * Use expression information to check the accuracy in unittest.
      
      * Separate load and store from computation expressions.
      test=develop
      
      * Improve the loading statements in the generated code.
      test=develop
      
      * Remove unused arguments from formal list.
      test=develop
      
      * Enable the detection of subgraph of grad ops.
      
      * Generate code for detected subgraph in fusion_group_pass.
      
      * Add an option in BuildStrategy to enable fusion_group_pass and add unittest.
      test=develop
      
      * Fix a bug when checking whether all inputs have the same shape.
      
      * Add debug information.
      
      * Remove subgraph_detector from inference/analysis to the common framework/ir directory. (#5)
      
      test=develop
      
      * Call subgraph_detector in fusion_group pass.
      test=develop
      
      * Disable fusion_group when WITH_GPU is OFF.
      test=develop
      
      * Refine all PADDLE_ENFORCE messages.
      test=develop
      
      * Fix the case that some inputs are not defined in grad ops, and set op_role for fused op.
      test=develop
      
      * Follow review comments.
      test=develop
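      A usage sketch for the BuildStrategy option mentioned in this entry. The commit only says "an option in BuildStrategy to enable fusion_group_pass" without naming it; enable_auto_fusion is assumed here, and main_program/loss stand for the user's own program, so treat this as a sketch rather than the exact API of this revision.

      import paddle.fluid as fluid

      build_strategy = fluid.BuildStrategy()
      # Assumed option name for switching on fusion_group_pass; requires a CUDA
      # build with the NVRTC and CUDA driver libraries available at runtime.
      build_strategy.enable_auto_fusion = True

      binary = fluid.compiler.CompiledProgram(main_program).with_data_parallel(
              loss_name=loss.name,
              build_strategy=build_strategy)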
  2. 10 Jan 2020, 1 commit
    • Add bn and relu fuse pass (#22048) · 46189b16
      Committed by Zhen Wang
      * add bn and relu fuse pass
      
      * add op attr assert and dtype assert
      
      * fix some input and output bugs for the fused op and pattern.
      
      * add the unittest for fuse_bn_act_pass. test=develop
      
      * use normative enforce statements. test=develop
      
      * add the cpu test. test=develop
      
      * add the support of batch_size=1 for the bn with relu op. test=develop
      
      * add the error type for paddle throws. test=develop
      
      * add fused_batch_norm_act and fused_batch_norm_act_grad to op_has_unsed_vars_white_list. test=develop
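      A minimal sketch of enabling the new fuse_bn_act_pass from Python. The BuildStrategy attribute name fuse_bn_act_ops is an assumption (the commit does not name the switch); main_program/loss are placeholders for the user's program.

      import paddle.fluid as fluid

      build_strategy = fluid.BuildStrategy()
      # Assumed attribute: fuses batch_norm followed by an activation (e.g. relu)
      # into fused_batch_norm_act / fused_batch_norm_act_grad on GPU.
      build_strategy.fuse_bn_act_ops = True

      binary = fluid.compiler.CompiledProgram(main_program).with_data_parallel(
              loss_name=loss.name,
              build_strategy=build_strategy)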
  3. 13 Nov 2019, 1 commit
  4. 16 Sep 2019, 1 commit
  5. 13 Sep 2019, 1 commit
    • Open fuse all reduce option (#19765) · 056fdedd
      Committed by chengduo
      * Open fuse all reduce op
      test=develop
      
      * Add Fuse optimization op log
      
      * Add log in fuse_optimizer op pass and fuse all_reduce op pass
      
      * replace with boost::optional<bool>
      test=develop
      
      * Polish code
      test=develop
      
      * fix code coverage
      test=develop
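      How the opened option is typically used from Python, assuming the BuildStrategy attribute is fuse_all_reduce_ops (the commit refers only to the fuse all_reduce op pass); main_program/loss are placeholders.

      import paddle.fluid as fluid

      build_strategy = fluid.BuildStrategy()
      # Assumed attribute: fuse many small gradient all_reduce ops into a few
      # large ones to reduce launch and communication overhead on multi-GPU runs.
      build_strategy.fuse_all_reduce_ops = True

      binary = fluid.compiler.CompiledProgram(main_program).with_data_parallel(
              loss_name=loss.name,
              build_strategy=build_strategy)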
  6. 11 Sep 2019, 2 commits
  7. 04 Sep 2019, 1 commit
    • Enable ngraph through build_strategy (#19266) · a3a4b6e5
      Committed by baojun
      * enable ngraph through build_strategy test=develop
      
      * add unittest test=develop
      
      * make use_ngraph unconditional test=develop
      
      * remove paddle_enforce test=develop
      
      * remove paddle_enforce test=develop
      
      * fix copyright test=develop
      
      * limit for ngraph only test=develop
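      A rough sketch of turning nGraph on for such a build. The user-facing switch assumed here is the FLAGS_use_ngraph environment flag on a Paddle compiled with WITH_NGRAPH=ON; both names are assumptions based on the use_ngraph wording above, not confirmed by this commit. main_program/loss are placeholders.

      import os

      # Assumed gflag name; set it before importing paddle.fluid so the flag is
      # picked up when the core library initializes.
      os.environ['FLAGS_use_ngraph'] = 'true'

      import paddle.fluid as fluid

      build_strategy = fluid.BuildStrategy()
      binary = fluid.compiler.CompiledProgram(main_program).with_data_parallel(
              loss_name=loss.name,
              build_strategy=build_strategy)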
  8. 02 Aug 2019, 1 commit
  9. 29 Jul 2019, 1 commit
  10. 27 Jul 2019, 1 commit
  11. 26 Jul 2019, 1 commit
    • Feature/mem opt pass refactor (#18735) · a802da65
      Committed by Zeng Jinle
      * first version memory optimize pass, test=develop
      
      * remove move_tensor_sharing_pass, test=develop
      
      * refine code comments, add unittests, test=develop
      
      * turn off memory_optimize by default, test=develop
      
      * follow huihuang's comments, test=develop
      
      * follow chengduoZH's comments, test=develop
      
      * fix grammar error, add const qualifier, fix pass_test exception message, test=develop
      
      * follow chengduoZH's comments 2nd, test=develop
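      Since the refactored pass is now off by default, re-enabling it is done per program through BuildStrategy. The attribute name memory_optimize is an assumption here (the commit does not spell it out); main_program/loss are placeholders.

      import paddle.fluid as fluid

      build_strategy = fluid.BuildStrategy()
      # Assumed attribute: turn the cross-op memory reuse pass back on; it is
      # disabled by default after this refactor.
      build_strategy.memory_optimize = True

      binary = fluid.compiler.CompiledProgram(main_program).with_data_parallel(
              loss_name=loss.name,
              build_strategy=build_strategy)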
  12. 23 Jul 2019, 1 commit
  13. 11 Jul 2019, 2 commits
    • Feature/buffer_shared_inplace (#17911) · d3003a16
      Committed by Zeng Jinle
      * feature/buffer_shared_inplace, test=develop
      
      * refine code, test=develop
      
      * fix elementwise_add op cpu inplace and sum inplace bug, test=develop
      
      * add unittest and debug log, test=develop
      
      * fix parallel_executor scope bug, polish code, test=develop
      
      * fix sum op, activation op, single_in_place_inference bug, test=develop
      
      * remove kLocalExecScopeName, test=develop
      
      * fix unittest,test=develop
      
      * fix out_var first version bug, test=develop
      
      * follow comments,test=develop
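      A minimal sketch of switching the buffer-sharing in-place pass on from Python. The attribute name enable_inplace is assumed (the commit only names the feature); main_program/loss are placeholders.

      import paddle.fluid as fluid

      build_strategy = fluid.BuildStrategy()
      # Assumed attribute: let an op's output reuse an input buffer when the pass
      # proves the input is no longer needed afterwards, lowering peak memory.
      build_strategy.enable_inplace = True

      binary = fluid.compiler.CompiledProgram(main_program).with_data_parallel(
              loss_name=loss.name,
              build_strategy=build_strategy)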
  14. 14 Jun 2019, 1 commit
  15. 06 Jun 2019, 1 commit
  16. 27 May 2019, 1 commit
  17. 20 May 2019, 1 commit
  18. 14 May 2019, 1 commit
  19. 08 May 2019, 1 commit
  20. 06 May 2019, 1 commit
  21. 23 Apr 2019, 1 commit
  22. 21 Apr 2019, 1 commit
    • Refine model gpu memory (#16993) · 1202d3fc
      Committed by Zeng Jinle
      * speedup gc and inplace softmax_with_cross_entropy_grad
      test=develop
      
      * refine models gpu mem
      Merge skip vars and warning messages of mem opt
      remove relu mem opt
      test=develop
      
      * follow comments
      test=develop
  23. 12 Apr 2019, 1 commit
  24. 11 Apr 2019, 1 commit
  25. 08 Apr 2019, 2 commits
  26. 03 Apr 2019, 1 commit
  27. 28 Mar 2019, 2 commits
  28. 22 Mar 2019, 1 commit
    • [Speed] Refine ParallelExecutor (#16190) · a6a3b2fb
      Committed by chengduo
      * refine parallelExecutor
      test=develop
      
      * Polish op_handle
      test=develop
      
      * Remove unnecessary op_handle
      test=develop
      
      * Fix Travis CI
      test=develop
      
      * Fix fetch bug
      test=develop
      
      * Remove WaitInputVarGenerated
      
      * Fix OpHandleBase::Run
      test=develop
      
      * debug
      test=develop
      
      * use origin fetch_op_handle
      test=develop
      
      * Revert op_handle_base.cc
      test=develop
      
      * Polish code
      test=develop
      
      * Fix OpHandleBase::Run
      test=develop
      
      * code refine
      
      * test CI and CE
      test=develop
      
      * fix OpHandle::Run
      test=develop
      
      * refine AllReduceOpHandle
      test=develop
      
      * Polish code
      test=develop
  29. 20 Mar 2019, 1 commit
    • Fuse AllReduce (#15921) · f26ba5bd
      Committed by chengduo
      * fuse all_reduce
      test=develop
      
      * add fuse_parameter_groups_size
      test=develop
      
      * Polish code
      test=develop
      
      * Fix travis-ci
      test=develop
      
      * Add SetGroupAccordingToLayers and SetGroupAccordingToGroupSize
      test=develop
      
      * Add SetGroupAccordingToMemorySize
      test=develop
      
      * fix multi_devices_graph
      test=develop
      
      * reset params_grads
      test=develop
      
      * Polish code
      test=develop
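      Alongside the BuildStrategy switch, this commit adds fuse_parameter_groups_size to control how gradients are grouped before the fused all_reduce. Exposing it as the gflag FLAGS_fuse_parameter_groups_size set through the environment, and the attribute name fuse_all_reduce_ops, are assumptions here; main_program/loss are placeholders.

      import os

      # Assumed gflag: number of parameters/gradients packed into one fused group.
      os.environ['FLAGS_fuse_parameter_groups_size'] = '3'

      import paddle.fluid as fluid

      build_strategy = fluid.BuildStrategy()
      build_strategy.fuse_all_reduce_ops = True  # attribute name assumed

      binary = fluid.compiler.CompiledProgram(main_program).with_data_parallel(
              loss_name=loss.name,
              build_strategy=build_strategy)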
  30. 15 Mar 2019, 1 commit
    • Support sync batch norm. (#16121) · 8ad672a2
      Committed by qingqing01
      * Support Sync Batch Norm.
      * Note: do not enable it when running on a single device.
      
      Usage:
      
      build_strategy = fluid.BuildStrategy()
      build_strategy.sync_batch_norm = True
      binary = fluid.compiler.CompiledProgram(tp).with_data_parallel(
              loss_name=loss_mean.name,
              build_strategy=build_strategy)
  31. 07 Mar 2019, 1 commit
  32. 05 Mar 2019, 1 commit
  33. 23 Feb 2019, 1 commit
  34. 22 Feb 2019, 2 commits
  35. 21 Feb 2019, 1 commit