1. 02 Aug 2019, 1 commit
    • support filelist size < trainer num && fix pull dense (#18956) · 02c370c3
      Committed by jiaqi
      * support filelist size < trainer num
      * pull dense params when training stops, to make sure local dense params are the same as on the pserver, so saving a paddle model saves the same dense model as the pserver
      * enable QueueDataset to train on the same filelist several times (sketched below)
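      A minimal sketch of the QueueDataset usage this change affects, assuming the 2019-era fluid API; the slot variables and file names are illustrative only:

      ```python
      import paddle.fluid as fluid

      # Two placeholder slots; a real job declares its own slots.
      x = fluid.layers.data(name="x", shape=[1], dtype="int64", lod_level=1)
      y = fluid.layers.data(name="y", shape=[1], dtype="int64", lod_level=1)

      dataset = fluid.DatasetFactory().create_dataset("QueueDataset")
      dataset.set_use_var([x, y])
      dataset.set_batch_size(32)
      dataset.set_thread(4)
      # Before this fix, a filelist smaller than the trainer num could fail;
      # now it is handled, and the same filelist can be trained repeatedly.
      dataset.set_filelist(["part-000", "part-001"])  # hypothetical files

      exe = fluid.Executor(fluid.CPUPlace())
      exe.run(fluid.default_startup_program())
      for epoch in range(3):  # same filelist, several passes
          exe.train_from_dataset(fluid.default_main_program(), dataset)
      ```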
  2. 31 Jul 2019, 1 commit
    • set fleet_send_batch_num a default value according to trainer num · 233746d8
      Committed by jiaqi
      (1) set fleet_send_batch_num to a default value derived from trainer num; the previous value (80000) was fixed, so if trainer num was much less or much greater than 100, global shuffle could hit a timeout error (a usage sketch follows below)

      (2) fix a load-one-table bug by adding a barrier
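      When the derived default still does not fit a job, the send batch size can be set explicitly before global shuffle. A minimal sketch, assuming the InMemoryDataset API of this period; the value and file name are illustrative:

      ```python
      import paddle.fluid as fluid
      from paddle.fluid.incubate.fleet.parameter_server.pslib import fleet

      slot = fluid.layers.data(name="slot", shape=[1], dtype="int64", lod_level=1)
      dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
      dataset.set_use_var([slot])
      dataset.set_filelist(["part-000"])  # hypothetical file name
      dataset.load_into_memory()
      # Override the trainer-num-derived default if global shuffle
      # still times out for a given cluster size.
      dataset.set_fleet_send_batch_num(20000)
      dataset.global_shuffle(fleet)
      ```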
  3. 23 Jul 2019, 1 commit
    • support patch data, add load_one_table, fix bug (#18509) · d18aabb4
      Committed by jiaqi
      (1) support patch data (merge slots of instances with the same line id; modify the dense layer, which changes its size)
      (2) add the fleet load_one_table interface, supporting loading from both paddle models and pslib models (sketched below)
      (3) fix a push sparse bug which made push sparse cost more time (about 10% in my test case)
      (4) when some slots are not in one of your networks (join/update, etc.), data feed, collect label info, and push/pull sparse will skip these slots instead of throwing an error
      (5) add more debug info in TrainFilesWithProfiler
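      A minimal sketch of the new load_one_table interface, assuming the pslib fleet API; role-maker setup is omitted and the table id and path are illustrative:

      ```python
      from paddle.fluid.incubate.fleet.parameter_server.pslib import fleet

      fleet.init()  # role-maker wiring omitted in this sketch

      # Load a single parameter table; per this change it accepts both
      # paddle-format and pslib-format models (path is hypothetical).
      fleet.load_one_table(0, "hdfs:/path/to/model")
      ```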
  4. 21 Jun 2019, 1 commit
    • dataset (#17973) · 3f8031e2
      Committed by jiaqi
      (1) use a channel instead of vector/BlockingQueue in Dataset, to stay consistent with the existing implementation and make the code more readable and flexible (a dataset can have a single output channel or multiple output channels); one earlier out-of-memory problem was caused by not releasing memory after training
      (2) add Record, because MultiSlotType costs too much memory (80B); this fixes the out-of-memory problem
      (3) add Channel and Archive in paddle/fluid/framework
      (4) change dataset from shared_ptr to unique_ptr in pybind
      (5) move create/destroy of readers from trainer to dataset
      (6) move shuffle from datafeed to dataset; dataset holds the memory, datafeed only loads data and feeds it to the network
      (7) fix a thread num bug in Dataset when filelist size < thread num
      (8) support set_queue_num in InMemoryDataset (sketched below)
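      A minimal sketch of the reworked dataset from the Python side, assuming the fluid API of this period; thread/queue counts and file names are illustrative:

      ```python
      import paddle.fluid as fluid

      slot = fluid.layers.data(name="slot", shape=[1], dtype="int64", lod_level=1)
      dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
      dataset.set_use_var([slot])
      dataset.set_thread(8)
      # New here: output channel count can be set independently of thread
      # num, which also covers the filelist size < thread num case.
      dataset.set_queue_num(4)
      dataset.set_filelist(["part-000", "part-001"])  # fewer files than threads
      dataset.load_into_memory()   # dataset, not datafeed, now holds the memory
      dataset.local_shuffle()      # shuffle moved from datafeed into dataset
      ```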
  5. 11 Jun 2019, 1 commit
    • Pipeline Concurrency (#17402) · 969e6378
      Committed by hutuxian
      Add Pipeline Concurrency Train Mode:
      - C++: pipeline_trainer & section_worker
      - Python: PipelineOptimizer (sketched below)
      - Add a new data_feed type: PrivateInstantDataFeed
      - Add a test demo of the pipeline trainer; the test model is a GNN
      - win32 is not supported yet
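      A minimal sketch of the new PipelineOptimizer wiring, following the fluid 1.5-era API; the toy network and the cut/place/concurrency choices are illustrative:

      ```python
      import paddle.fluid as fluid
      import paddle.fluid.layers as layers

      x = layers.data(name="x", shape=[1], dtype="int64", lod_level=0)
      y = layers.data(name="y", shape=[1], dtype="int64", lod_level=0)
      emb_x = layers.embedding(input=x, size=[10, 2], is_sparse=False)
      emb_y = layers.embedding(input=y, size=[10, 2], is_sparse=False)
      fc = layers.fc(input=layers.concat([emb_x, emb_y], axis=1),
                     size=1, bias_attr=False)
      loss = layers.reduce_mean(fc)

      opt = fluid.optimizer.SGD(learning_rate=0.5)
      # cut_list splits the program into three pipeline sections (embed on
      # CPU, compute on GPU, tail on CPU), each with its own concurrency.
      opt = fluid.optimizer.PipelineOptimizer(
          opt,
          cut_list=[[emb_x, emb_y], [loss]],
          place_list=[fluid.CPUPlace(), fluid.CUDAPlace(0), fluid.CPUPlace()],
          concurrency_list=[1, 1, 4],
          queue_size=2,
          sync_steps=1)
      opt.minimize(loss)
      ```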
  6. 18 May 2019, 1 commit
  7. 15 May 2019, 1 commit
    • add save/load model, shrink table, cvm, config file & fix pull dense bug (#17118) · 66d51206
      Committed by jiaqi
      * add save/load model, shrink table, cvm, config file & fix pull dense bug (sketched below)
      * fix global shuffle bug, pull dense bug, release memory bug, and shrink error; add client flush, add get data size
      * fix code style & modify pslib cmake; update pslib.cmake
      * fix error of _role_maker
      * fix windows compile error of fleet
      * add comments
      * fix fill sparse bug
      * fix push sparse bug
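      A minimal sketch of the save/shrink side of this change, assuming the pslib fleet API; the path is illustrative and the shrink call's exact signature may differ across versions:

      ```python
      import paddle.fluid as fluid
      from paddle.fluid.incubate.fleet.parameter_server.pslib import fleet

      exe = fluid.Executor(fluid.CPUPlace())
      fleet.init()  # role-maker wiring omitted in this sketch

      # Save all persistable parameters (dense and sparse tables);
      # with the pull-dense fix, local dense params match the pserver.
      fleet.save_persistables(executor=exe, dirname="hdfs:/path/to/model")

      # Shrink the sparse table: drop long-unused feature embeddings.
      fleet.shrink_sparse_table()
      ```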
  8. 25 Apr 2019, 1 commit
  9. 11 Apr 2019, 2 commits
  10. 10 Apr 2019, 2 commits
  11. 04 Apr 2019, 1 commit
  12. 03 Apr 2019, 1 commit
  13. 01 Apr 2019, 1 commit
  14. 29 Mar 2019, 12 commits