1. 01 Feb, 2019 (1 commit)
  2. 10 Jan, 2019 (1 commit)
    • [Feature] support mix precision training for resnet (#14899) · fd854183
      Committed by Wu Yi
      * clip softmax for fp16
      
      * updates
      
      * fuse xent support fp16 test=develop
      
      * wip
      
      * wip
      
      * add simple row reduce
      
      * wip fp16 accurate softmax
      
      * add accurate softmax kernel for fp16 test=develop
      
      * update test=develop
      
      * fix cpu build test=develop
      
      * update api.spec test=develop
      
      * follow comments test=develop
      
      * fix build test=develop
      
      * fix trt build test=develop
      
      * fix inference build test=develop
      
      * fix merge test=develop
      
      * update test=develop
      
      * try fix build test=develop
      
      * fix build test=develop
      
      * rename real_exp test=develop
      
      * for test
      
      * remove hacky kernels test=develop
      
      * clean up test=develop
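      The bullets above describe moving the softmax for fp16 inputs onto an accurate, higher-precision
      reduction path. As a rough illustration of that idea only (the PR itself adds CUDA kernels inside
      Paddle; softmax_fp16 below is a hypothetical name, not an API from the PR), one can upcast to
      fp32, do the stable exp/sum reduction there, and cast the result back to fp16:

          import numpy as np

          def softmax_fp16(x_fp16):
              x = x_fp16.astype(np.float32)            # upcast so exp/sum are accumulated in fp32
              x = x - x.max(axis=-1, keepdims=True)    # subtract the row max for numerical stability
              e = np.exp(x)
              y = e / e.sum(axis=-1, keepdims=True)    # accurate row-wise reduction in fp32
              return y.astype(np.float16)              # cast the normalized result back to fp16

          x = np.random.randn(4, 8).astype(np.float16)
          print(softmax_fp16(x).sum(axis=-1))          # each row sums to ~1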
  3. 15 Aug, 2018 (1 commit)
  4. 12 Jul, 2018 (1 commit)
  5. 11 Jul, 2018 (1 commit)
  6. 18 Jun, 2018 (1 commit)
  7. 12 Mar, 2018 (1 commit)
  8. 26 Feb, 2018 (1 commit)
  9. 24 Feb, 2018 (2 commits)
  10. 13 Feb, 2018 (1 commit)
    • Run Python OP tests in a single Python process to improve test time. (#8362) · cde6241a
      Committed by Xin Pan
      Currently our tests run with 2 GPUs and the init time is absurdly long:
      about 4s per process. Each OP test also runs in its own process. This PR:

      1. creates a cmake function, py_test_modules, which generates the
      Makefile target that runs a list of Python unittest modules in a single
      Python process.

      2. moves all "python unittest compatible" tests (i.e., tests that use the
      unittest package rather than being plain Python scripts) from fluid/tests
      to fluid/tests/unittests.

      3. cmake now runs all OP tests in fluid/tests/unittests in a single
      process, except the time-consuming tests, which are kept in separate
      processes to exploit parallelism. Please make sure to use the unittest
      package if you put a Python test file in fluid/tests/unittests.

      4. removes all exit(0) calls from fluid/tests/unittests/*.py. exit(0) was
      used to disable a unittest, but that cannot be done when all tests run in
      a single process, since it would terminate the process before the
      remaining tests run. Instead, each such test is disabled in
      fluid/tests/unittests/CMakeLists.txt, with a FIXME added for every
      disabled item. For any Python file in fluid/tests/unittests/, please
      disable the unittest from fluid/tests/unittests/CMakeLists.txt instead of
      adding exit(0) to the Python file.

      5. adds an option WITH_FAST_BUNDLE_TEST. When OFF, the unit tests run in
      separate processes so that they can be tested individually.
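      The core trick in item 1 is simply to import several unittest modules into one interpreter and
      run them as a single suite, so the per-process framework initialization is paid once. A
      hypothetical driver along those lines (run_modules.py and its argument handling are assumptions,
      not the script the PR generates):

          # run_modules.py (hypothetical): run several unittest modules in one Python process.
          import sys
          import unittest

          if __name__ == "__main__":
              loader = unittest.defaultTestLoader
              suite = unittest.TestSuite()
              for name in sys.argv[1:]:                  # e.g. test_mul_op test_sum_op
                  module = __import__(name)              # import each test module once
                  suite.addTests(loader.loadTestsFromModule(module))
              result = unittest.TextTestRunner(verbosity=1).run(suite)
              sys.exit(0 if result.wasSuccessful() else 1)

      Invoked as "python run_modules.py test_a test_b", the heavy device initialization then happens
      once per bundle instead of once per test module.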
  11. 12 Feb, 2018 (1 commit)
  12. 08 Feb, 2018 (1 commit)
  13. 21 Jan, 2018 (1 commit)
    • "fix decode bug" (#7711) · e983cc90
      Committed by dzhwinter
      * "fix decode bug"
      
      * "follow commnet"
      
      * "fix error"
      
      * "fix hook bug"
      
      * fix based on comment
      
      * fix copyright
      
      * fix based on comment
  14. 15 Jan, 2018 (1 commit)
    • Feature/hooks (#7513) · b9b75377
      Committed by dzhwinter
      * add copyright hook
      
      * add copyright hook
      
      * refine copyright hook
      
      * "test copyright hook"
      
      * fix check style
      
      * fix ci
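      A copyright hook of this kind typically scans each staged file for a license header and fails
      the commit when one is missing. A minimal, hypothetical check in that spirit (check_copyright.py
      and the marker string are assumptions, not Paddle's actual hook, which may also insert the
      header):

          # check_copyright.py (hypothetical): exit non-zero if any given file lacks a header.
          import sys

          MARKER = "Copyright (c)"                       # assumed marker string

          def has_copyright(path):
              with open(path, "r", encoding="utf-8", errors="ignore") as f:
                  return MARKER in f.read(2048)          # only the top of the file matters

          if __name__ == "__main__":
              missing = [p for p in sys.argv[1:] if not has_copyright(p)]
              for p in missing:
                  print("missing copyright header:", p)
              sys.exit(1 if missing else 0)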
  15. 25 Dec, 2017 (1 commit)
  16. 15 Dec, 2017 (1 commit)
  17. 18 Nov, 2017 (1 commit)
  18. 15 Nov, 2017 (1 commit)
  19. 14 Nov, 2017 (1 commit)
  20. 10 Nov, 2017 (1 commit)
  21. 02 Nov, 2017 (1 commit)
    • Optimizer use init program (#5275) · f48159ad
      Committed by Qiao Longfei
      * optimizer use init_program
      
      * create persistable variable
      
      * add create_persistable_var to block
      
      * optimizer use create_persistable_var
      
      * fix prefix
      
      * move create_global_persistable_var from Block to LayerHelper
      
      * Polish Optimizer initialization code.
      
      * Using the LayerHelper to create initialize operator and variables
      
      * add_accumulator should use an independent data type
      
      * default use param data type for accumulator
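      The last two bullets concern per-parameter accumulator state (e.g. momentum buffers): an
      accumulator can be given its own data type, and falls back to the parameter's dtype when none is
      specified. A simplified sketch of that bookkeeping (SimpleOptimizer and _add_accumulator are
      illustrative names, not the fluid Optimizer implementation):

          import numpy as np

          class SimpleOptimizer(object):
              def __init__(self):
                  # accumulators[name][param_name] holds per-parameter state such as momentum
                  self._accumulators = {}

              def _add_accumulator(self, name, param_name, shape, param_dtype, dtype=None):
                  # an independent dtype may be requested (e.g. float32 state for a float16 param);
                  # otherwise default to the parameter's own data type
                  acc_dtype = dtype if dtype is not None else param_dtype
                  self._accumulators.setdefault(name, {})[param_name] = np.zeros(shape, dtype=acc_dtype)

          opt = SimpleOptimizer()
          opt._add_accumulator("velocity", "fc_w", (128, 64), np.float16, dtype=np.float32)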
  22. 28 Oct, 2017 (1 commit)
  23. 26 Oct, 2017 (2 commits)
  24. 25 Oct, 2017 (2 commits)
  25. 20 Oct, 2017 (2 commits)
  26. 18 Oct, 2017 (1 commit)
    • Impl optimizer (#4734) · df0946eb
      Committed by Qiao Longfei
      * init parameter base class
      
      * optimize the Comments of optimizer
      
      * basic implementation of optimizer
      
      * add test_optimizer
      
      * add no_grad_set to interface
      
      * update optimizer.py
      
      * python code can run
      
      * fix some problem
      
      * add sync_with_cpp to Python Program and Block
      
      * sync vars and ops in block from cpp
      
      * optimize code and add some comment
      
      * add more check for sync
      
      * update optimizer with return value of Backward
      
      * rm unused code
      
      * infer shape when creating gradient variable
      
      * update test_optimizer
      
      * update test_program.py
      
      * update backward test
      
      * follow comment