1. 23 Jul 2019, 1 commit
  2. 27 Jun 2019, 1 commit
    • supports collective communicated training (#18175) · b7128bac
      Committed by HaoRen
      * fix redundant code in prepare context, optimize executor by caching create_variables
      test=develop
      
      * supports collective training in executor
      
      * make fetch_list runnable with variables, add more unittests for use_program_cache
      test=develop
      
      * fix comment
      test=develop
      
      * use unique name for nccl_id
      
      * supports output to stream in program_to_code
      
      * insert sync_comm_stream before regularization; add skip_op_callstack capability in program_to_code
      
      * set op role in collective training
      
      * add collective op role
      
      * remove orig file
      
      * add build optimizer by strategy
      
      * add collective strategy
      
      * refine collective strategy
      
      * add multi-process role maker
      
      * refine strategy building factory so that we can easily plugin more strategy
      
      * scale loss grad in collective sgd transpiler (see the sketch after this entry)
      
      * add support for distributed fc
      
      * code format
      
      * revert some features for dist fc
      
      * add support for distributed fc training
      
      * test=develop
      add collective op unittest standard
      
      * test=develop
      remove the test_collective directory
      
      * remove slicegather test
      
      * code format for reducescatter
      
      * update attr of shard_index_op
      
      * Modify macro nccl_helper
      
      * remove test without distribute
      
      * macro collective_helper
      
      * macro update
      
      * test=develop
      update to support python3.5
      
      * test=develop change gpu memory use to 0.1 when testing
      
      * test=develop
      update ut equal func
      
      * test=develop
      set flags to 1.5
      
      * test=develop fix pickle dump for py35
      
      * test=develop
      fix divide in slice and add sync_comm_stream
      update atol and rtol to 1e-05
      rm shard_index op and test
      modify read input from file to read from memory
      remove origin_program in framework and add i/o in c_sync_calc_stream
      
      * test=develop update unittest sync operator I/O
      b7128bac
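      A minimal, framework-independent numpy sketch of the "scale loss grad in collective sgd transpiler" item above: scaling each rank's gradient by 1/nranks before an allreduce-sum makes the reduced result the mean of the per-rank gradients, so every rank applies the same averaged update. Illustrative only, not the transpiler's code; all names are invented for the example.

      import numpy as np

      nranks = 4                                    # number of collective workers
      w = np.array([0.5, -1.0])                     # parameter replicated on every rank
      local_grads = [np.random.randn(2) for _ in range(nranks)]   # per-rank gradients

      # scale by 1/nranks first, so a sum-allreduce yields the mean gradient
      scaled = [g / nranks for g in local_grads]
      allreduced = np.sum(scaled, axis=0)           # simulated allreduce-sum result
      w_new = w - 0.1 * allreduced                  # identical update on every rank

      assert np.allclose(allreduced, np.mean(local_grads, axis=0))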
  3. 11 Jun 2019, 1 commit
    • Update the Anakin interfaces for content-dnn and MLU (#17890) · bce259e5
      Committed by 石晓伟
      * update anakin-engine interfaces for content-dnn
      
      test=develop
      
      * support only-gpu mode of Anakin
      
      modify eltwise parse
      
      test=develop
      
      * modifications for thread safety
      
      test=develop
      
      * Integrated template instance
      
      test=develop
      
      * increase template parameters
      
      test=develop
      
      * support MLU predictor
      
      test=develop
      
      * update anakin cmake files
      
      test=develop
      
      * update TargetWrapper::set_device
      
      * update the initialization of anakin subgraph
      
      test=develop
      
      * use the default constructor of base class
      
      test=develop
      bce259e5
  4. 30 May 2019, 1 commit
  5. 17 May 2019, 1 commit
  6. 18 Apr 2019, 1 commit
  7. 28 Mar 2019, 1 commit
  8. 22 Mar 2019, 1 commit
  9. 20 Mar 2019, 1 commit
  10. 19 Mar 2019, 1 commit
  11. 16 Mar 2019, 1 commit
  12. 15 Mar 2019, 1 commit
    • Support sync batch norm. (#16121) · 8ad672a2
      Committed by qingqing01
      * Support Sync Batch Norm.
      * Note: do not enable it when running on a single device.
      
      Usage (a numerical sketch of the effect follows this entry):
      
      import paddle.fluid as fluid
      
      # tp is the (user-defined) training fluid.Program; loss_mean is its mean loss variable
      build_strategy = fluid.BuildStrategy()
      build_strategy.sync_batch_norm = True
      binary = fluid.compiler.CompiledProgram(tp).with_data_parallel(
              loss_name=loss_mean.name,
              build_strategy=build_strategy)
      8ad672a2
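      For intuition, a minimal numpy sketch (not Paddle code) of what sync batch norm changes: the normalization statistics are computed over the combined batch across devices instead of each device's own shard (done with a collective reduction in practice). Array names are invented for the example.

      import numpy as np

      shards = [np.random.randn(8, 4) for _ in range(2)]   # per-device mini-batches
      eps = 1e-5

      # plain batch norm: each device normalizes with its local statistics
      local_norm = [(x - x.mean(0)) / np.sqrt(x.var(0) + eps) for x in shards]

      # sync batch norm: statistics come from the global batch
      full = np.concatenate(shards, axis=0)
      mu, var = full.mean(0), full.var(0)
      sync_norm = [(x - mu) / np.sqrt(var + eps) for x in shards]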
  13. 22 Feb 2019, 1 commit
  14. 30 Jan 2019, 1 commit
  15. 25 Jan 2019, 1 commit
    • Adding ngraph_engine_op (#14948) · efce2567
      Committed by baojun
      * enable ngraph_engine_op
      test=develop
      
      * merge develop test=develop
      
      * avoid const_cast test=develop
      
      * rm ngraph_operator test=develop
      
      * Added TODO to move EnableNgraph test=develop
      
      * Add TODO to remove const_cast test=develop
      efce2567
  16. 24 Jan 2019, 1 commit
    • Add the CUDA kernel for beam_search op (#15020) · 3008fa12
      Committed by Yiqun Liu
      * Refine the beam_search op and test.
      
      * A basic CUDA implementation of beam_search for small batch_size.
      
      * Implement CUDA kernel for beam_search_op.
      
      * Use multiple CUDA threads in the same block to select the top beam (see the sketch after this entry).
      
      * Update the python api of beam_search op.
      
      * Enable extend function in CPU kernel of beam_search op.
      
      * Unify the CUDA codes.
      test=develop
      
      * Unify the CPU kernel of beam_search op.
      
      * Ensure the selected items of beam_search_op's CPU kernel are sorted by scores.
      
      * Update the description of beam_search in API.spec.
      
      * Enable the use of CUDA kernel in beam_search op.
      
      * Exclude beam_search's CUDA unittest when there is no CUDA GPU, and delete some debugging statements.
      test=develop
      
      * Follow comments.
      test=develop
      
      * Call the CPU kernel for beam_search op when batch_size > 4.
      test=develop
      
      * Remove the exception for the is_empty op in PrepareData.
      test=develop
      3008fa12
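      An illustrative numpy sketch of the per-step selection the beam_search op performs (the "select the top beam" item above): expand each live hypothesis over the vocabulary, then keep the beam_size best accumulated scores, sorted by score. This mirrors the algorithm only, not the op's CPU/CUDA kernels; sizes and names are invented for the example.

      import numpy as np

      beam_size, vocab_size = 4, 10
      prev_scores = np.random.randn(beam_size)                # accumulated log-probs of live hypotheses
      step_logprobs = np.random.randn(beam_size, vocab_size)  # next-token log-probs per hypothesis

      total = prev_scores[:, None] + step_logprobs            # (beam_size, vocab_size) candidate scores
      flat = total.ravel()
      top = np.argpartition(-flat, beam_size)[:beam_size]     # unordered top-k, akin to a block-level selection
      top = top[np.argsort(-flat[top])]                       # sort the selected items by score
      parent_beam, next_token = np.unravel_index(top, total.shape)  # which hypothesis each winner extends
      selected_scores = flat[top]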
  17. 18 Jan 2019, 1 commit
    • Tree conv op (#15217) · e2ba9668
      Committed by zhaozhehao
      * refactor tree2col operator with new memory mechanism test=develop
      
      * test=develop
      
      * test=develop
      
      * Modified API according to panyx0718 test=develop
      
      * fix API change according to heavengate test=develop
      
      * Modify API comment test=develop
      e2ba9668
  18. 29 Dec 2018, 1 commit
  19. 26 Dec 2018, 1 commit
  20. 18 Dec 2018, 3 commits
  21. 13 Dec 2018, 1 commit
    • fix cmake · deb0d41c
      Committed by sneaxiy
      fix cmake again
      test=develop
      deb0d41c
  22. 10 Dec 2018, 2 commits
  23. 05 Dec 2018, 1 commit
  24. 03 Dec 2018, 1 commit
  25. 28 Nov 2018, 1 commit
  26. 25 Nov 2018, 1 commit
  27. 22 Nov 2018, 1 commit
    • Windows/online (#14474) · d9a1f3e5
      Committed by wopeizl
      * add recordio support
      
      * disable openblas multi-threading on windows since it is not supported;
      adjust the python script
      
      * code style
      
      * code style
      test=develop
      
      * add create_recordio_file_reader back
      
      * fix code style
      test=develop
      
      * fix the gtest.cmake on windows
      
      * fix cc_test on windows
      
      * fix the win build
      test=develop
      
      * remove fused compile support on windows
      test=develop
      
      * add the jit support
      test=develop
      
      * add the jit support, test=develop
      
      * add the jit support, test=develop
      
      * add the jit back
      fix compile error on windows
      
      * rollback test=develop
      
      * test case fix
      
      * disable DSO by default on windows
      
      * exclude warpctc_op on windows
      
      * exclude dynload_warpctc on windows
      test=develop
      
      * fix the scripts error
      test=develop
      
      * disable avx on windows by default
      test=develop
      
      * re-organize the cmake file
      
      * disable mkl on windows by default
      
      * add warp_ctc back
      
      * fix the dependency
      
      * fix the dependency
      
      * fix the build issue on windows
      
      * remove unsupported flag on windows
      
      * code style
      
      * code style
      test=develop
      
      * fix issue
      
      * add profiler, parallel_executor back
      
      * clean up the pre-definitions on windows
      
      * fix build issue
      
      * test=develop
      d9a1f3e5
  28. 21 Nov 2018, 2 commits
  29. 19 Nov 2018, 7 commits
  30. 18 Nov 2018, 1 commit