1. 31 October 2019, 2 commits
    • GradMaker for dygraph (#19706) · 8c4573a3
      Committed by hong
      * refactor dygraph, test=develop
      
      * fix failed unittest, test=develop
      
      * polish code, test=develop
      
      * check Windows CI error, test=develop
        try to fix Windows CI error with np.allclose, test=develop (see the sketch after this commit)
      
      * polish vlog and profiler, test=develop
      
      * try to fix preceding ops order, test=develop
      
      * test transformer in Windows CI, test=develop
      
      * use the Python C-API to speed up tracer.trace, test=develop
      
      * test=develop, fix docker with paddle nccl problem
      
      * test=develop, add ut for debug string and gradient_accumulator
      
      * test=develop, add tests for layer/gradient_accumulator/prepared_op
      
      * test=develop, fix compile error for test_prepared_op
      
      * test=develop, add more ut for dygraph
      
      * test=develop, create API.spec for dygraph api change
      
      * optimize grad maker; test=develop
      
      * optimize grad maker
      
      * test
      
      * grad maker optimization; test=develop
      
      * fix unittest bugs; test=develop
      
      * add dygraph grad op maker and split_op
      
      * grad op maker refactor; test=develop
      
      * add dygraph grad maker; test=develop
      
      * fix deformable_conv_v1_op bug; test=develop
      
      * fix deformable_conv prroi pool bugs;
      
      * fix new op grad op maker bug; test=develop
      
      * fix split by ref bug; test=develop
      
      * fix dygraph auto prune bug; test=develop
      
      * fix test_trace bug; test=develop
      
      * fix fused emb seq pool bug; test=develop
      
      * remove useless code in op_desc file; test=develop
      
      * remove useless code, StrVarBaseNode; test=develop
      
      * fix review issues; test=develop
      
      * fix rank_loss grad maker; test=develop
      
      * remove flag in VarBase; test=develop
      
      * fix distributed_notify_op compile bug; test=develop
      
      * fix reshape op double grad; test=develop
      
      * fix expand as op; test=develop
      
      * add imperative type_defs.h for demo_train; test=develop
      
      * fix inference lib cmake; test=develop
      
      * fix inference lib; test=develop
      
      * fix inference_lib; test=develop
      
      * fix inference cmake; test=develop
      
      * fix inference lib; test=develop
      
      * fix inference lib; test=develop
      
      * remove condition dygraph grad maker, modify local name; test=develop
      
      * fix split grad maker bug; test=develop
      
      * fix pyramid_op bug; test=develop
      
      * change travis time out limit; test=develop
      
      * restore travis; test=develop
      
      * change timeout limit; test=develop
      8c4573a3
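      One bullet above switches a flaky exact comparison to np.allclose to stabilize Windows CI. Below is a minimal Python sketch of that testing pattern (not taken from the PR), assuming the 1.x fluid dygraph API; the op and tolerances are illustrative:

        import numpy as np
        import paddle.fluid as fluid

        # Reference result computed in NumPy. Exact equality (==) can fail on
        # Windows CI due to tiny floating-point differences, so the check uses
        # np.allclose with a tolerance instead.
        x = np.random.random((4, 8)).astype('float32')
        expected = np.maximum(x, 0.0)  # NumPy reference for relu

        with fluid.dygraph.guard(fluid.CPUPlace()):
            var = fluid.dygraph.to_variable(x)
            out = fluid.layers.relu(var).numpy()

        assert np.allclose(out, expected, rtol=1e-5, atol=1e-8)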
    • Refine the cache of program, context and scope in executor. (#18483) · 16e4d026
      Committed by Yiqun Liu
      * Refine the cache of program, context and scope in executor.
      test=develop
      
      * Refine the unittest test_executor_and_use_program_cache.
      
      * Add a test of the PaddingRNN model with use_program_cache=True.
      test=develop
      
      * Remove a check.
      test=develop
      
      * Refine the unittest to check whether the result is correct when use_program_cache=True is set (see the sketch after this commit).
      test=develop
      16e4d026
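      This commit touches the use_program_cache flag of Executor.run, which lets repeated runs of the same program reuse the cached execution context and scope. Below is a minimal Python sketch of how a caller enables it, assuming the 1.x fluid API; the network and shapes are illustrative:

        import numpy as np
        import paddle.fluid as fluid

        main_prog, startup_prog = fluid.Program(), fluid.Program()
        with fluid.program_guard(main_prog, startup_prog):
            x = fluid.layers.data(name='x', shape=[4], dtype='float32')
            y = fluid.layers.fc(input=x, size=2)

        exe = fluid.Executor(fluid.CPUPlace())
        exe.run(startup_prog)

        feed = {'x': np.random.random((8, 4)).astype('float32')}
        # use_program_cache=True reuses the prepared context and scope across
        # identical runs instead of rebuilding them each time.
        for _ in range(3):
            out, = exe.run(main_prog, feed=feed, fetch_list=[y],
                           use_program_cache=True)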
  2. 30 October 2019, 1 commit
  3. 29 October 2019, 8 commits
  4. 28 October 2019, 2 commits
  5. 25 October 2019, 1 commit
    • fix several sparse table issues (#20686) · 48669aa8
      Committed by xujiaqi01
      * no longer need to define every slot's embedding layer in each program (previously none could be omitted); make trainer_param repeated in ps.proto
      * add find_distributed_lookup_table_grads instead of hard-coding GRAD
      * support embedding stop gradient; push sparse raised an error before this fix (see the sketch after this commit)
      * fix fill sparse: skip slots which have no embedding; before this fix, each slot's embedding in a sparse table had to be used in all training programs
      * fix pull sparse: skip slots which have no embedding
      * fix collecting feasign label info: skip slots which have no embedding
      * support multiple sparse tables in one or more training programs: each program can pull/push only its own related sparse tables instead of all of them
      * test=develop
      48669aa8
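      The stop-gradient bullet above concerns skipping the sparse push for embeddings whose gradient is stopped. Below is a minimal, non-distributed Python sketch of the user-facing idea, assuming the 1.x fluid API; the slot name and table size are made up, and a real parameter-server job would wire this through fleet:

        import paddle.fluid as fluid

        # Hypothetical sparse feature slot (lod_level=1 marks a variable-length slot).
        slot = fluid.layers.data(name='slot_0', shape=[1], dtype='int64', lod_level=1)

        # Sparse embedding lookup; in a PS job this maps to a distributed sparse table.
        emb = fluid.layers.embedding(input=slot, size=[100000, 8], is_sparse=True)

        # Stopping the gradient means this slot's embedding produces no sparse push
        # (gradient update) during training; the fix makes that case work.
        emb.stop_gradient = True

        pooled = fluid.layers.sequence_pool(input=emb, pool_type='sum')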
  6. 24 October 2019, 3 commits
  7. 23 October 2019, 2 commits
  8. 22 October 2019, 3 commits
  9. 21 October 2019, 1 commit
  10. 20 October 2019, 2 commits
  11. 18 October 2019, 3 commits
  12. 17 October 2019, 1 commit
  13. 16 October 2019, 2 commits
  14. 15 October 2019, 7 commits
  15. 14 October 2019, 2 commits