1. 18 July 2019 (2 commits)
    • Feature/auto_growth_allocator (#18561) · ae58afc5
      Committed by Zeng Jinle
      * feature/auto_growth_allocator, test=develop
      
      * add unittest of AlignedAllocator, test=develop
      
      * try to turn on auto_growth to test on CI, test=develop
      
      * fix segmentation fault in mixed_vector.h, test=develop
      
      * add unittests, test=develop
      ae58afc5
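      The commits above add an auto-growth allocation strategy and an AlignedAllocator. As a rough idea of what those two pieces do (a hypothetical Python sketch, not Paddle's C++ implementation; all class names below are illustrative):

      ```python
      # Hypothetical sketch: an alignment-aware wrapper plus an auto-growth
      # pool that caches freed chunks and grows only when nothing fits.

      class AlignedAllocator:
          """Rounds every request up to a multiple of `alignment` bytes."""

          def __init__(self, underlying, alignment=256):
              self.underlying = underlying
              self.alignment = alignment

          def allocate(self, size):
              aligned = (size + self.alignment - 1) // self.alignment * self.alignment
              return self.underlying.allocate(aligned)


      class AutoGrowthAllocator:
          """Reuses cached chunks when possible, otherwise grows the pool."""

          def __init__(self):
              self.free_chunks = []        # cached (size, chunk_id) pairs
              self.next_chunk_id = 0

          def allocate(self, size):
              candidates = [c for c in self.free_chunks if c[0] >= size]
              if candidates:
                  chunk = min(candidates)      # smallest cached chunk that fits
                  self.free_chunks.remove(chunk)
                  return chunk
              self.next_chunk_id += 1          # grow: hand out a new chunk
              return (size, self.next_chunk_id)

          def release(self, chunk):
              self.free_chunks.append(chunk)


      pool = AlignedAllocator(AutoGrowthAllocator(), alignment=256)
      chunk = pool.allocate(1000)              # rounded up to 1024 bytes
      pool.underlying.release(chunk)
      print(pool.allocate(500))                # reuses the cached 1024-byte chunk
      ```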
    • hash_op supports int64 hash_size (#18674) · bb2f5d24
      Committed by hutuxian
      * hash_op support int64 hash_size
      * add corresponding UT
      bb2f5d24
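      The point of an int64 hash_size is that hashed ids are reduced modulo hash_size, so the attribute must not be truncated to int32. A small illustrative Python stand-in (hashlib here replaces whatever hash function the real op uses):

      ```python
      import hashlib

      def hash_ids(ids, hash_size):
          """Map integer ids into [0, hash_size) via a 64-bit hash value.

          hash_size may exceed 2**31 - 1, which is why an int64-typed
          attribute matters. Illustrative only, not the hash_op kernel.
          """
          out = []
          for i in ids:
              digest = hashlib.md5(str(i).encode("utf-8")).digest()
              h = int.from_bytes(digest[:8], "little")    # 64-bit value
              out.append(h % hash_size)
          return out

      print(hash_ids([1, 2, 3], hash_size=10_000_000_019))  # size > int32 max
      ```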
  2. 15 July 2019 (2 commits)
  3. 12 July 2019 (2 commits)
  4. 11 July 2019 (2 commits)
    • Feature/buffer_shared_inplace (#17911) · d3003a16
      Committed by Zeng Jinle
      * feature/buffer_shared_inplace, test=develop
      
      * refine code, test=develop
      
      * fix elementwise_add op cpu inplace and sum inplace bug, test=develop
      
      * add unittest and debug log, test=develop
      
      * fix parallel_executor scope bug, polish code, test=develop
      
      * fix sum op, activation op, single_in_place_inference bug, test=develop
      
      * remove kLocalExecScopeName, test=develop
      
      * fix unittest, test=develop
      
      * fix out_var first version bug, test=develop
      
      * follow comments, test=develop
      d3003a16
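      The idea behind buffer sharing in place: when an op's input buffer has no later readers, the output can reuse that buffer instead of allocating a new one. A tiny conceptual sketch with made-up names (the real pass analyzes the graph; here the analysis result is passed in as a flag):

      ```python
      import numpy as np

      def relu_maybe_inplace(x, input_has_other_readers):
          """Reuse x's buffer for the output only when it is safe to do so."""
          if input_has_other_readers:
              return np.maximum(x, 0)     # must allocate a fresh buffer
          np.maximum(x, 0, out=x)         # output shares the input buffer
          return x

      x = np.array([-1.0, 2.0, -3.0])
      y = relu_maybe_inplace(x, input_has_other_readers=False)
      assert y is x                       # buffer was shared in place
      ```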
  5. 10 July 2019 (1 commit)
  6. 09 July 2019 (2 commits)
  7. 05 July 2019 (3 commits)
    • Fix topk cannot handle 1D vector bug (#18466) · 832d8191
      Committed by zhaoyuchen2018
      * Fix topk cannot handle 1D vector bug
      
      Add a path to handle 1D vectors
      
      test=develop
      Signed-off-by: zhaoyuchen <zhaoyuchen01@baidu.com>
      
      * refine code
      
      test=develop
      Signed-off-by: zhaoyuchen <zhaoyuchen01@baidu.com>
      832d8191
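      The fix above boils down to giving top-k a path for rank-1 input. A minimal numpy sketch of that idea (illustrative; not the op's CPU/CUDA kernel):

      ```python
      import numpy as np

      def topk(x, k):
          """Return the k largest values and their indices.

          A 1D vector is treated as a single row and squeezed back at the
          end, mirroring the "add a path to handle 1D vectors" fix.
          """
          x = np.asarray(x)
          squeeze = (x.ndim == 1)
          if squeeze:
              x = x[np.newaxis, :]                 # promote to [1, n]
          idx = np.argsort(-x, axis=1)[:, :k]
          vals = np.take_along_axis(x, idx, axis=1)
          if squeeze:
              return vals[0], idx[0]               # back to 1D outputs
          return vals, idx

      print(topk([3.0, 1.0, 2.0], k=2))            # (array([3., 2.]), array([0, 2]))
      ```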
    • Hide no support (#18515) · 7586cdd5
      Committed by Jiabin Yang
      * test=develop, fix Docker problem with Paddle NCCL
      
      * test=develop, hide no_support API and add a UT for it
      7586cdd5
    • Add distributions of normal and uniform (#18023) · 43e17c79
      Committed by LielinJiang
      * add_distributions_of_normal_and_uniform
      
      * paddle/fluid/API.spec
      
      * modify API.spec
      
      * modified paddle/fluid/API.spec, test=develop
      
      * modify paddle/fluid/API.spec, test=develop
      
      * modify paddle/fluid/API.spec, test=develop
      
      * fix some comments, test=develop
      
      * modify API.spec, test=develop
      
      * add comments for the init function, modify hard-coded values, test=develop
      
      * modify API.spec, test=develop
      
      * modify API.spec, test=develop
      
      * make unit test function shorter, test=develop
      
      * modify paddle/fluid/API.spec
      43e17c79
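      To make the shape of such a distributions API concrete, here is a minimal, hypothetical sketch of Normal and Uniform classes with sample and log_prob methods; the classes added by the PR live in Paddle's Python API and differ in signature and backend:

      ```python
      import math
      import random

      class Uniform:
          """Uniform(low, high): illustrative stand-in, not Paddle's class."""

          def __init__(self, low, high):
              self.low, self.high = low, high

          def sample(self):
              return random.uniform(self.low, self.high)

          def log_prob(self, value):
              if self.low <= value <= self.high:
                  return -math.log(self.high - self.low)
              return float("-inf")


      class Normal:
          """Normal(loc, scale): illustrative stand-in, not Paddle's class."""

          def __init__(self, loc, scale):
              self.loc, self.scale = loc, scale

          def sample(self):
              return random.gauss(self.loc, self.scale)

          def log_prob(self, value):
              var = self.scale ** 2
              return (-((value - self.loc) ** 2) / (2 * var)
                      - math.log(self.scale) - 0.5 * math.log(2 * math.pi))

      print(Normal(0.0, 1.0).log_prob(0.0))   # -0.9189... = log(1/sqrt(2*pi))
      ```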
  8. 04 July 2019 (2 commits)
  9. 03 July 2019 (7 commits)
  10. 02 July 2019 (2 commits)
    • supports collective training with programs (#18392) · a873fa84
      Committed by Yi Liu
      1. Since the allreduce op has four reduce types, we split these four reduce types into four separate ops.
      2. We also refined the collective op code, e.g. we separated the collective op kernel into a CPUKernel and a CUDAKernel, and removed the device-specific DeviceContext template parameter since the target DeviceContext is already known.
      3. We removed the newly added Collective op role to reduce the complexity of program and graph analysis.
      a873fa84
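      One way to picture point 1 is a registry in which each reduce type is its own op rather than one op carrying a reduce-type attribute. The sketch below is purely illustrative; the op names follow the naming pattern suggested by the PR but are assumptions here:

      ```python
      from functools import reduce as _reduce

      # One op per reduce type instead of a single allreduce op with a
      # reduce_type attribute (names assumed for illustration).
      ALLREDUCE_OPS = {
          "c_allreduce_sum":  sum,
          "c_allreduce_prod": lambda vals: _reduce(lambda a, b: a * b, vals),
          "c_allreduce_max":  max,
          "c_allreduce_min":  min,
      }

      def run_allreduce(op_type, per_rank_values):
          """Simulate the result every rank would see after the allreduce."""
          return ALLREDUCE_OPS[op_type](per_rank_values)

      print(run_allreduce("c_allreduce_sum", [1, 2, 3]))   # 6
      print(run_allreduce("c_allreduce_max", [1, 2, 3]))   # 3
      ```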
    • Add find_no_grad_vars in backward.py (#17942) · e0d8c6ac
      Committed by chengduo
      * add not_been_used_vars to no_grad_set
      test=develop
      e0d8c6ac
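      Conceptually, the pass walks the ops between the targets and their inputs and collects variables that never contribute to the target gradients so they can be added to no_grad_set. A simplified, hypothetical sketch of that reachability idea (backward.py itself works on a Program/Block, not plain tuples):

      ```python
      def find_no_grad_vars(ops, target_vars):
          """Return vars that never reach any target var through `ops`.

          `ops` is a list of (inputs, outputs) tuples; the data structures
          and names here are hypothetical.
          """
          needed = set(target_vars)
          # Walk ops in reverse: an op's inputs are needed only if one of
          # its outputs is needed.
          for inputs, outputs in reversed(ops):
              if any(o in needed for o in outputs):
                  needed.update(inputs)
          all_vars = {v for ins, outs in ops for v in ins + outs}
          return all_vars - needed             # candidates for no_grad_set

      ops = [(["x", "w"], ["h"]), (["h", "unused"], ["h2"]), (["h"], ["loss"])]
      print(find_no_grad_vars(ops, ["loss"]))  # {'h2', 'unused'}
      ```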
  11. 01 July 2019 (1 commit)
  12. 27 June 2019 (2 commits)
    • add WITH_COVERAGE option, default OFF (#17872) · 27fb9cad
      Committed by kh2se2013
      * add WITH_COVERAGE option, default OFF
      
      test=develop
      
      * add coverage for python sdk
      
      test=develop
      
      * fix code style
      
      * fix COVERAGE_FILE path
      
      test=develop
      
      * remove coverage package
      
      test=develop
      
      * test=develop, run coverage as a module
      27fb9cad
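      The last bullet ("run coverage as a module") refers to invoking coverage through the interpreter rather than the coverage console script. A minimal illustration; the test target and paths below are placeholders:

      ```python
      import subprocess
      import sys

      # Running coverage as a module only requires the interpreter that has
      # the package installed, not a `coverage` script on PATH.
      subprocess.check_call([
          sys.executable, "-m", "coverage", "run",
          "-m", "pytest", "tests/",            # placeholder test target
      ])
      subprocess.check_call([sys.executable, "-m", "coverage", "report"])
      ```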
    • supports collective communication training (#18175) · b7128bac
      Committed by HaoRen
      * fix redundant code problem in prepare context, optimize executor by caching create_variables
      test=develop
      
      * supports collective training in executor
      
      * make fetch_list runnable with variables, add more unittests for use_program_cache
      test=develop
      
      * fix comment
      test=develop
      
      * use unique name for nccl_id
      
      * supports output to stream in program_to_code
      
      * insert sync_comm_stream before regularization; add skip_op_callstack capability in program_to_code
      
      * set op role in collective training
      
      * add collective op role
      
      * remove orig file
      
      * add build optimizer by strategy
      
      * add collective strategy
      
      * refine collective strategy
      
      * add multi-process role maker
      
      * refine the strategy building factory so that we can easily plug in more strategies
      
      * scale loss grad in collective sgd transpiler
      
      * add support for distributed fc
      
      * code format
      
      * revert some features for dist fc
      
      * add support for distributed fc training
      
      * test=develop
      add collective op unittest standard
      
      * test=develop
      remove the test_collective directory
      
      * test=develop
      remove the test_collective directory
      
      * remove slicegather test
      
      * code format for reducescatter
      
      * update attr of shard_index_op
      
      * Modify macro nccl_helper
      
      * remove test without distribute
      
      * macro collective_helper
      
      * macro update
      
      * test=develop
      update to support Python 3.5
      
      * test=develop, change GPU memory use to 0.1 when testing
      
      * test=develop
      update ut equal func
      
      * test=develop
      set flags to 1.5
      
      * test=develop, fix pickle dump on py35
      
      * test=develop
      fix divide in slice and add sync_comm_stream
      update atol and rtol to 1e-05
      rm shard_index op and test
      read input from memory instead of from a file
      remove origin_program in framework and add i/o in c_sync_calc_stream
      
      * test=develop update unittest sync operator I/O
      b7128bac
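      Among the bullets above, "refine the strategy building factory so that we can easily plug in more strategies" describes a registry/factory pattern for building training strategies. A small hypothetical sketch of such a pluggable factory (class names and fields are invented for illustration):

      ```python
      class CollectiveStrategy:
          """Placeholder for a collective-training configuration."""

          def __init__(self, nccl_comm_num=1, use_hierarchical_allreduce=False):
              self.nccl_comm_num = nccl_comm_num
              self.use_hierarchical_allreduce = use_hierarchical_allreduce


      class StrategyFactory:
          """Registry-based factory: new strategies plug in via register()."""

          _builders = {}

          @classmethod
          def register(cls, name, builder):
              cls._builders[name] = builder

          @classmethod
          def create(cls, name, **kwargs):
              return cls._builders[name](**kwargs)


      StrategyFactory.register("collective", CollectiveStrategy)
      StrategyFactory.register(
          "hierarchical",
          lambda **kw: CollectiveStrategy(use_hierarchical_allreduce=True, **kw),
      )

      s = StrategyFactory.create("hierarchical", nccl_comm_num=2)
      print(s.nccl_comm_num, s.use_hierarchical_allreduce)   # 2 True
      ```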
  13. 26 June 2019 (4 commits)
  14. 25 June 2019 (1 commit)
    • Sequence mask support tensor (#18249) · df2eee71
      Committed by Hongyu Liu
      * sequence_mask supports max-length tensor input; test=develop
      
      * add rnn_impl.py; test=develop
      
      * add basic gru lstm unittest; test=develop
      
      * fix api spec; test=develop
      
      * fix sequence_mask op bug;
      test=develop
      test=document_preview
      
      * change +-*x to elementwise_op; test=develop
      
      * add mkl flag; test=develop
      
      * fix rnn impl bug; test=develop
      
      * update api spec; test=develop
      
      * fix doc bug; test=develop
      
      * fix lstm bugs; test=develop
      df2eee71
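      What "max length tensor input" means for sequence_mask: the maximum length may come from a tensor computed at run time instead of a compile-time constant. An illustrative numpy sketch:

      ```python
      import numpy as np

      def sequence_mask(lengths, maxlen=None):
          """Build a [batch, maxlen] boolean mask from per-sequence lengths.

          `maxlen` may be a Python int or a 0-d array standing in for a
          runtime tensor; this sketch is illustrative only.
          """
          lengths = np.asarray(lengths)
          if maxlen is None:
              maxlen = int(lengths.max())
          else:
              maxlen = int(np.asarray(maxlen))   # accept tensor-like maxlen
          steps = np.arange(maxlen)              # [maxlen]
          return steps[None, :] < lengths[:, None]

      max_len_tensor = np.array(4)               # "tensor" holding the max length
      print(sequence_mask([2, 4, 1], maxlen=max_len_tensor).astype(int))
      ```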
  15. 23 June 2019 (1 commit)
  16. 21 June 2019 (2 commits)
  17. 20 June 2019 (2 commits)
  18. 19 June 2019 (2 commits)