1. 16 May 2018: 3 commits
  2. 15 May 2018: 1 commit
  3. 14 May 2018: 3 commits
  4. 11 May 2018: 2 commits
    • delete checkpoint function · 2a05b3d5
      Committed by tangwei12
    • trainer.test() (#10453) · ba57348f
      Committed by fengjiayi
      * a draft of trainer.test()
      
      * polish trainer.test()
      
      * polish trainer.test()
      
      * update code format
      
      * update
      
      * polish code
      
      * polish code
      
      * polish code
      
      * Make trainer.test follow the rule of returning [loss, metric, metric, ..]
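      The last commit message above standardizes trainer.test() to return a flat list of the form [loss, metric, metric, ...]. A minimal sketch of that convention follows; run_test_pass and its averaged-loss/accuracy metrics are hypothetical stand-ins for what the real Fluid trainer computes, shown only to illustrate the return shape:

      ```python
      # Hypothetical stand-in for trainer.test(): the convention is a flat
      # list whose first element is the loss, followed by metrics in a
      # fixed order, so callers can unpack positionally.
      def run_test_pass(batches):
          total_loss, correct, seen = 0.0, 0, 0
          for loss, n_correct, n_seen in batches:
              total_loss += loss
              correct += n_correct
              seen += n_seen
          avg_loss = total_loss / len(batches)
          accuracy = correct / seen
          return [avg_loss, accuracy]  # [loss, metric, metric, ...]

      # Callers unpack positionally, relying on the fixed ordering:
      loss, acc = run_test_pass([(0.5, 8, 10), (0.3, 9, 10)])
      ```

      Returning a list with a fixed leading loss keeps the test API uniform regardless of how many metrics a particular model reports.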
  5. 10 May 2018: 3 commits
  6. 09 May 2018: 2 commits
  7. 08 May 2018: 8 commits
  8. 07 May 2018: 12 commits
  9. 05 May 2018: 1 commit
    • Trainer save load params (#10386) · bd66eed5
      Committed by Jeff Wang
      * Load/save the params from the params_path
      
      * Switch to use load_persistables and save_persistables
      
      * Instead of setting up the executor to run the program and scope, pass the program to load_persistables
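      The commit above replaces ad-hoc executor setup with a pair of save/load calls keyed on a params_path directory (in Fluid these are fluid.io.save_persistables and fluid.io.load_persistables). A library-agnostic sketch of that pattern is below; the pickle-per-parameter storage is an illustrative assumption, not the actual Fluid on-disk format:

      ```python
      import os
      import pickle

      # Illustrative sketch of the save/load-persistables pattern: write every
      # named parameter under params_path, then restore them later without
      # re-running the training program to rebuild state.
      def save_persistables(params, params_path):
          os.makedirs(params_path, exist_ok=True)
          for name, value in params.items():
              with open(os.path.join(params_path, name + ".pkl"), "wb") as f:
                  pickle.dump(value, f)

      def load_persistables(params_path):
          params = {}
          for fname in os.listdir(params_path):
              name, ext = os.path.splitext(fname)
              if ext == ".pkl":
                  with open(os.path.join(params_path, fname), "rb") as f:
                      params[name] = pickle.load(f)
          return params
      ```

      The point of the commit is the same as in this sketch: the loader only needs the directory (and, in Fluid, the program describing the variables), not a fully configured executor, program, and scope.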
  10. 04 May 2018: 5 commits
    • Added auto transform to beam_search_decode_op (#10286) · 3bb99c4f
      Committed by Qingsheng Li
      * Added auto transform to beam_search_decode_op
      
      * Added some comments
      
      * Added unittest for beam_search_decode_op on GPU
    • Correct filename (#10384) · ddf61672
      Committed by Yi Wang
    • initial commit (#10387) · 8cc91bc0
      Committed by Kexin Zhao
    • Add float16 demo code and put float16 work in contrib/float16 folder (#10331) · 7a860694
      Committed by Kexin Zhao
      * add test float16 inference accuracy example
      
      * complete the test
      
      * clean code
      
      * add argument parse and refine tests
      
      * add shell script
      
      * add float16 benchmark code
      
      * refine code
      
      * prepare for contrib/float16
      
      * put things in contrib float16 folder
      
      * update benchmark result
      
      * further update benchmark report
      
      * add float16 inference report
      
      * update report
    • Fluid new API: dist train without modifying code · 8ee23da8
      Committed by Helin Wang
      Works with 1 trainer and 1 pserver. With 2 trainers and 1 pserver it
      gets stuck at the end of the first step; still investigating.

      The user only needs to set environment variables to enable distributed
      training.
      
      run pserver:
      
      PADDLE_TRAINING_ROLE=PSERVER PADDLE_PSERVER_IPS=127.0.0.1 PADDLE_TRAINERS=2 PADDLE_CURRENT_IP=127.0.0.1 python no_test_word2vec_new_api.py
      
      run trainer:
      
      PADDLE_TRAINING_ROLE=TRAINER PADDLE_PSERVER_IPS=127.0.0.1 PADDLE_TRAINERS=2 PADDLE_TRAINER_ID=0 python no_test_word2vec_new_api.py
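      The role selection implied by the PADDLE_* environment variables above can be sketched as follows; this parsing logic is an assumption about how a launcher might consume the variables, not the actual Fluid implementation:

      ```python
      import os

      # Hypothetical launcher-side parsing of the PADDLE_* variables shown
      # in the commands above. Role decides whether this process becomes a
      # parameter server or a trainer.
      def parse_dist_env(env):
          role = env.get("PADDLE_TRAINING_ROLE", "TRAINER")
          ips = env.get("PADDLE_PSERVER_IPS", "")
          conf = {
              "role": role,
              "pserver_ips": ips.split(",") if ips else [],
              "trainers": int(env.get("PADDLE_TRAINERS", "1")),
          }
          if role == "TRAINER":
              conf["trainer_id"] = int(env.get("PADDLE_TRAINER_ID", "0"))
          else:
              conf["current_ip"] = env.get("PADDLE_CURRENT_IP", "")
          return conf

      # In a real launcher this would be parse_dist_env(os.environ).
      ```

      Driving the role entirely from the environment is what lets the same unmodified script (no_test_word2vec_new_api.py) serve as both pserver and trainer.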