  1. 30 July 2019 (1 commit)
  2. 29 July 2019 (2 commits)
    • Remove legacy C++ memory optimization codes (#18834) · 8008ab4e
      Committed by Zeng Jinle
      * remove legacy memory optimization codes, test=develop
      
      * follow huihuang's comments,test=develop
      
      * follow luotao's comments, test=develop
      8008ab4e
    • add clear_model interface in fleetwrapper (#18815) · 52c1431e
      Committed by Thunderbrook
      * dump slot
      
      * test
      
      * proto
      
      * dump slot
      
      * test
      
      * proto
      
      * code style
      
      * code style
      
      * code style
      
      * style
      
      * add delete after unseen days
      
      * add unseen days
      
      * code style
      
      * conflict solve
      test=develop
      
      * add clear model
      
      * code style
      test=develop
      
      * code style
      test=develop
      52c1431e
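      A hedged usage sketch for the new interface named in the commit title. The Python entry point (fleet.clear_model() on the PSLib fleet instance) and the import path are assumptions inferred from the title, not confirmed by this log.

      ```python
      # Hypothetical sketch: drop the model currently held by the parameter servers
      # between training passes. fleet.clear_model() is assumed to be the Python
      # wrapper over the FleetWrapper clear_model interface added in this commit.
      from paddle.fluid.incubate.fleet.parameter_server.pslib import fleet

      def restart_training(train_one_pass):
          fleet.clear_model()   # assumed API: clear sparse/dense tables in memory
          train_one_pass()      # user-defined training pass
      ```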
  3. 28 July 2019 (2 commits)
  4. 27 July 2019 (2 commits)
  5. 26 July 2019 (2 commits)
    • Add LeakyReLU MKLDNN support (#18762) · ee022279
      Committed by Adam
      ee022279
    • Feature/mem opt pass refactor (#18735) · a802da65
      Committed by Zeng Jinle
      * first version memory optimize pass, test=develop
      
      * remove move_tensor_sharing_pass, test=develop
      
      * refine code comments, add unittests, test=develop
      
      * turn off memory_optimize by default, test=develop
      
      * follow huihuang's comments, test=develop
      
      * follow chengduoZH's comments, test=develop
      
      * fix grammar error, add const qualifier, fix pass_test exception message, test=develop
      
      * follow chengduoZH's comments 2nd, test=develop
      a802da65
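      Since the commit notes the refactored pass is off by default, a user would opt in explicitly. A minimal sketch, assuming the pass is toggled through BuildStrategy.memory_optimize in the fluid 1.x compiled-program API:

      ```python
      import paddle.fluid as fluid

      # Tiny regression network, only here to make the example self-contained.
      x = fluid.layers.data(name="x", shape=[13], dtype="float32")
      y = fluid.layers.data(name="y", shape=[1], dtype="float32")
      pred = fluid.layers.fc(input=x, size=1)
      loss = fluid.layers.mean(fluid.layers.square_error_cost(input=pred, label=y))
      fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)

      # Assumption: the refactored memory-reuse pass is exposed as
      # BuildStrategy.memory_optimize and must be enabled explicitly.
      build_strategy = fluid.BuildStrategy()
      build_strategy.memory_optimize = True

      compiled_prog = fluid.CompiledProgram(
          fluid.default_main_program()).with_data_parallel(
              loss_name=loss.name, build_strategy=build_strategy)
      ```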
  6. 25 July 2019 (4 commits)
  7. 24 July 2019 (5 commits)
    • Extend Matmul to support matrix multiplication with multiple heads (#18570) · 220eef60
      Committed by Bob Zhu
      * extend matmul op to support multiple head multiplication
      
      With multiple-head support, the multiplication of two big matrices is
      split into the multiplication of several (head_number) small matrices. E.g. if
      Mat A is [3, 24] and Mat B is [24, 4], when multiplying A and B with head_number
      set to 4, Mat A is split into 4 matrices of [3, 6] and Mat B into 4 matrices of
      [6, 4]. The final result is 4 matrices of [3, 4], i.e. a [3, 16] matrix (see the
      sketch after this entry).
      220eef60
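      The shape bookkeeping described above can be checked with plain NumPy. This is only an illustration of the splitting scheme from the commit message, not the operator's actual kernel:

      ```python
      import numpy as np

      head_number = 4
      A = np.random.rand(3, 24)   # Mat A: [3, 24]
      B = np.random.rand(24, 4)   # Mat B: [24, 4]

      # Split A along columns and B along rows into head_number pieces,
      # multiply piece-wise, then concatenate the per-head results.
      A_heads = np.split(A, head_number, axis=1)   # 4 matrices of [3, 6]
      B_heads = np.split(B, head_number, axis=0)   # 4 matrices of [6, 4]
      out = np.concatenate(
          [a.dot(b) for a, b in zip(A_heads, B_heads)], axis=1)

      print(out.shape)   # (3, 16): four [3, 4] results laid side by side
      ```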
    • Add python API for appending LoD level (#18702) · 075e1cf7
      Committed by whs
      * Make lod_reset op support appending a LoD level.
      
      * Fix API.spec
      test=develop
      
      * Fix unittest.
      test=develop
      
      * Add python api for lod append.
      test=develop
      
      * Fix API.spec
      test=develop
      
      * Fix format of doc.
      test=develop
      
      * Fix unittest.
      test=develop
      
      * Fix doc.
      test=develop
      075e1cf7
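      A hedged usage sketch of the Python API added here. The entry point and signature, fluid.layers.lod_append(x, level), are assumed from the PR title and may differ from the real interface.

      ```python
      import paddle.fluid as fluid

      # Illustrative only: append one more LoD level to a variable that already
      # carries one level of LoD. Function name and arguments are assumptions.
      x = fluid.layers.data(name="x", shape=[6, 10], lod_level=1)
      out = fluid.layers.lod_append(x, [1, 1, 1, 1, 1, 1])   # new innermost level
      ```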
    • Enhance backward process (#18700) · 8259f141
      Committed by chengduo
      * prune backward ops
      test=develop
      8259f141
    • Modify auc doc. Add output variable description, previously was the scalar... · 25c9b57b
      Committed by JesseyXujin
      Modify auc doc. Add output variable description: it was previously documented as a scalar type and is now documented as a tuple type. test=develop (#18771)
      
      25c9b57b
    • add slot to sparse table (#18686) · d8396281
      Committed by Thunderbrook
      The change includes 2 things:
      
      1. Saving the delta model and shrinking the table were controlled by the same parameter before; now delete_after_unseen_days is added to control table shrinking.
      2. The value in the sparse table had no slot before; now a slot is added to the sparse table, and DownpourCtrAccessor is added to support the new meta (a config sketch follows this entry).
      test=develop
      d8396281
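      A loose, purely illustrative sketch of where the two new knobs might surface in a downpour sparse-table configuration. Only delete_after_unseen_days and the accessor class name come from the commit message; the surrounding field names and layout are assumptions, not the real pslib config format.

      ```python
      # Hypothetical accessor section for a downpour sparse table; field names
      # other than the two taken from the commit message are assumptions.
      sparse_table_accessor = {
          "accessor_class": "DownpourCtrAccessor",   # accessor aware of the slot field in each value
          "delete_after_unseen_days": 30,            # shrink: drop features unseen for 30 days
      }
      ```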
  8. 23 July 2019 (3 commits)
    • support patch data, add load_one_table, fix bug (#18509) · d18aabb4
      Committed by jiaqi
      (1) support patch data (merge slots of instances with the same line id, and modify the dense layer when its size changes)
      (2) add the fleet load_one_table interface, supporting loading from a paddle model as well as from a pslib model (a usage sketch follows this entry)
      (3) fix a push sparse bug which caused push sparse to cost more time (about 10% in my test case)
      (4) when some slots are not in one of your networks (join/update, etc.), data feed, collecting label info, and push/pull sparse will skip these slots instead of throwing an error
      (5) add more debug info in TrainFilesWithProfiler
      d18aabb4
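      A hedged sketch of the load_one_table interface described in item (2). The signature (table id plus model path) is inferred from the description and may differ from the real API; the path is a placeholder.

      ```python
      from paddle.fluid.incubate.fleet.parameter_server.pslib import fleet

      # Illustrative only: load a single table from a previously saved model
      # (either paddle or pslib format, per the commit description).
      fleet.load_one_table(0, "hdfs:/path/to/saved_model")   # assumed signature
      ```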
    • Make fuse_optimizer_op_pass also work when the model contains sparse gradients. (#18664) · fd3aad6c
      Committed by chengduo
      * support sparse gradients
      test=develop
      fd3aad6c
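      A minimal sketch of exercising the pass this commit fixes, assuming optimizer-op fusion is switched on through BuildStrategy.fuse_all_optimizer_ops. The embedding layer uses is_sparse=True so the model produces the sparse gradients the fix targets.

      ```python
      import paddle.fluid as fluid

      # Tiny network whose embedding layer emits sparse gradients.
      word = fluid.layers.data(name="word", shape=[1], dtype="int64")
      label = fluid.layers.data(name="label", shape=[1], dtype="float32")
      emb = fluid.layers.embedding(input=word, size=[10000, 16], is_sparse=True)
      pred = fluid.layers.fc(input=emb, size=1)
      loss = fluid.layers.mean(fluid.layers.square_error_cost(input=pred, label=label))
      fluid.optimizer.Adam(learning_rate=0.001).minimize(loss)

      # Assumption: the fusion pass is controlled by fuse_all_optimizer_ops;
      # with this fix it no longer has to be skipped for sparse gradients.
      build_strategy = fluid.BuildStrategy()
      build_strategy.fuse_all_optimizer_ops = True
      compiled_prog = fluid.CompiledProgram(
          fluid.default_main_program()).with_data_parallel(
              loss_name=loss.name, build_strategy=build_strategy)
      ```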
    • supports distributed classification (#18690) · 157211c4
      Committed by Yi Liu
      * supports distributed classification training
      * update API.spec
      * fix evenly division in python3
      * change "index_range" to "index_num" in shard_index operator
      test=document_preview
      test=develop
      157211c4
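      A hedged example of the shard_index operator mentioned in the last bullet, using the renamed index_num argument; the concrete values and ignore_value are arbitrary, and the signature is assumed from the rename described above.

      ```python
      import paddle.fluid as fluid

      # Shard a 20-class label space across 2 shards; this worker keeps classes 0-9
      # and maps out-of-shard labels to ignore_value.
      label = fluid.layers.data(name="label", shape=[1], dtype="int64")
      shard_label = fluid.layers.shard_index(
          input=label, index_num=20, nshards=2, shard_id=0, ignore_value=-1)
      ```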
  9. 22 July 2019 (6 commits)
  10. 19 July 2019 (3 commits)
  11. 18 July 2019 (3 commits)
  12. 15 July 2019 (2 commits)
  13. 12 July 2019 (3 commits)
  14. 11 July 2019 (2 commits)
    • Feature/buffer_shared_inplace (#17911) · d3003a16
      Committed by Zeng Jinle
      * feature/buffer_shared_inplace, test=develop
      
      * refine code, test=develop
      
      * fix elementwise_add op cpu inplace and sum inplace bug, test=develop
      
      * add unittest and debug log, test=develop
      
      * fix parallel_executor scope bug, polish code, test=develop
      
      * fix sum op, activation op, single_in_place_inference bug, test=develop
      
      * remove kLocalExecScopeName, test=develop
      
      * fix unittest,test=develop
      
      * fix out_var first version bug, test=develop
      
      * follow comments,test=develop
      d3003a16
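      A minimal sketch of opting in to the buffer-sharing in-place strategy this PR introduces, assuming it is exposed through BuildStrategy.enable_inplace in the fluid 1.x API; the loss name is a placeholder for whatever loss variable the program defines.

      ```python
      import paddle.fluid as fluid

      # Assumption: the in-place buffer sharing pass is controlled by
      # BuildStrategy.enable_inplace, letting ops reuse input buffers for
      # outputs where it is safe to do so.
      build_strategy = fluid.BuildStrategy()
      build_strategy.enable_inplace = True

      compiled_prog = fluid.CompiledProgram(
          fluid.default_main_program()).with_data_parallel(
              loss_name="loss",                 # placeholder loss variable name
              build_strategy=build_strategy)
      ```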