1. 15 May 2023, 1 commit
  2. 30 Dec 2022, 1 commit
  3. 09 Dec 2022, 1 commit
  4. 02 Dec 2022, 1 commit
    • clear fluid apis: square_error_cost (#48029) · 33173ab4
      Committed by yuehuayingxueluo (usage sketch after this entry)
      * clear fluid apis in fleet and passes
      
      * fix model.py
      
      * fix model.py
      
      * fix cpp_pass.py
      
      * clear loss.py
      
      * change test file
      
      * fix some test_*.py
      
      * fix adaround.py
      
      * fix evaluator.py
      
      * fix CI bug
      
      * fix CI bug
      
      * fix decode.py
      
      * fix detection.py
      
      * fix CI bug
      
      * remove test_sigmoid_cross_entropy_with_logits_op_ipu.py and fix __init__.py
      
      * fix CI bug
      
      * fix CI bug
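
      The entry above removes callers of the legacy fluid square_error_cost API. The sketch below is a hedged illustration only (not code from the PR): it assumes paddle.nn.functional.square_error_cost is the Paddle 2.x home of this elementwise squared-error cost, and the tensor shapes are placeholders.

      ```python
      # Minimal sketch: squared error cost without the fluid namespace.
      import paddle
      import paddle.nn.functional as F

      pred = paddle.rand([8, 1], dtype="float32")    # placeholder predictions
      label = paddle.rand([8, 1], dtype="float32")   # placeholder targets

      # Elementwise (pred - label) ** 2, same shape as the inputs.
      cost = F.square_error_cost(input=pred, label=label)
      avg_cost = paddle.mean(cost)                   # scalar loss
      print(float(avg_cost))
      ```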
  5. 03 Nov 2022, 1 commit
  6. 23 Oct 2022, 1 commit
  7. 14 Sep 2022, 1 commit
  8. 26 Aug 2022, 1 commit
  9. 05 Jun 2022, 1 commit
    • 【code format check upgrade】 step2:yapf (#42944) · a072fca8
      Committed by Sing_chan (usage sketch after this entry)
      * use yapf to format all Python files
      
      * exclude two unittest files from yapf because they rely on writing and reading files, and formatting would break them
      
      * disable diff_py_file because too many diff files cause the following command to fail
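
      Since the entry above reformats the whole Python tree with yapf, here is a hedged sketch (not the repo's actual pre-commit setup) of driving yapf from Python through its FormatCode helper; the 'pep8' style name is an assumption, not necessarily the style the project configures.

      ```python
      # Minimal sketch: reformat a snippet with yapf programmatically.
      # On yapf versions of this era, FormatCode returns (formatted, changed).
      from yapf.yapflib.yapf_api import FormatCode

      ugly = "def add ( a,b ):\n    return a+ b\n"
      pretty, changed = FormatCode(ugly, style_config="pep8")

      print(changed)   # True, since yapf had to rewrite the snippet
      print(pretty)    # def add(a, b):\n    return a + b
      ```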
  10. 25 Mar 2022, 1 commit
    • Refactor Dygraph Flags (#40786) · 3085d5e4
      Committed by Jiabin Yang (see the sketch after this entry)
      * refactor eager flags
      
      * fix a flags error when we switch from eager to dygraph
      
      * fix CI problem
      
      * fix CI
      
      * fix CI
      
      * merge develop and fix code style
      
      * merge develop and fix code style
      
      * fix op test error
      
      * fix op test error
      
      * fix op test error
      
      * fix op test error
      
      * fix op test error
      
      * merge develop
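
      The entry above reworks the flags that decide whether dygraph code runs through the legacy or the eager execution path, which is what the repeated op-test fixes concern. As a hedged sketch only: in Paddle 2.3-era unit tests, the internal _test_eager_guard context manager was the usual way to run the same code under the eager path; it is an internal, version-dependent helper, not a public API.

      ```python
      # Minimal sketch (internal API, Paddle ~2.3): run the same dygraph
      # code on the default path and again under the eager guard.
      import paddle
      from paddle.fluid.framework import _test_eager_guard  # internal helper

      def double(x):
          return x * 2.0

      y_default = double(paddle.ones([2, 2]))   # whatever mode the flags select

      with _test_eager_guard():                 # force the eager execution path
          y_eager = double(paddle.ones([2, 2]))

      print(float(y_default.sum()), float(y_eager.sum()))
      ```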
  11. 17 Sep 2021, 1 commit
    • [AMP] Support pure fp16 training mode for dygraph (#35521) · adaeee4d
      Committed by zhangbo9674 (usage sketch after this entry)
      * add the main pure fp16 functionality in auto_cast & the tracer
      
      * support master weights in dygraph for pure fp16
      
      * check mixed fp16 & fp32 dtypes for the check_finite_and_unscale op
      
      * change the pure fp16 function name
      
      * fix some bugs in auto_cast
      
      * refine the auto_cast interface logic
      
      * add the param _casted_by_pure_fp16 to class Layer
      
      * support a state_dict hook to save the model in a user-appointed dtype in pure_fp16_decorator
      
      * refine pure_fp16_decorator into a decorator
      
      * add unittest
      
      * add comment
      
      * add comment
      
      * support recompute
      
      * add comments for auto_cast and the decorator
      
      * support to_static_state_dict for paddle.jit.save
      
      * remove the limit on the number of models and optimizers
      
      * add lookup_table to black_list
      
      * fix momentum and layer state_dict
      
      * fix bug in layer state_dict
      
      * fix bug in layer state_dict_helper
      
      * refine unittest
      
      * refine test_momentun_op
      
      * refine interface and some code
      
      * refine amp_decorator interface
      
      * refine pure fp16 interface
      
      * refine master weight interface
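
      The entry above introduces the level='O2' (pure fp16) path in dygraph: paddle.amp.decorate casts the model to fp16 while keeping fp32 master weights, auto_cast runs the forward pass in fp16, and GradScaler applies loss scaling. A hedged usage sketch follows; the model, shapes, and hyper-parameters are placeholders, and it assumes a CUDA device since fp16 kernels are not available on CPU.

      ```python
      # Minimal sketch: one pure fp16 (O2) dygraph training step.
      import paddle

      model = paddle.nn.Linear(4, 4)                     # placeholder model
      opt = paddle.optimizer.Momentum(learning_rate=0.01,
                                      parameters=model.parameters())

      # Cast the model to fp16 and keep fp32 master weights.
      model, opt = paddle.amp.decorate(models=model, optimizers=opt, level='O2')
      scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

      data = paddle.rand([2, 4])
      with paddle.amp.auto_cast(level='O2'):             # fp16 forward pass
          loss = model(data).mean()

      scaled = scaler.scale(loss)                        # loss scaling
      scaled.backward()
      scaler.minimize(opt, scaled)                       # unscale, skip on inf/nan
      opt.clear_grad()
      ```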
  12. 15 Jul 2021, 1 commit
  13. 08 Jul 2021, 1 commit
  14. 02 Dec 2020, 1 commit
    • Add pure fp16 training with master weights. (#27712) · be3777a5
      Committed by Zhen Wang (usage sketch after this entry)
      * add the weight decay func for the momentum op
      
      * Add the multi_precision function in Momentum Optimizer.
      
      * Make sure that the initial values of the master weights are the same as the fp16 weights.
      
      * add static loss scaling.
      
      * add the rescale_grad option for pure fp16 training.
      
      * use the original momentum updating method.
      
      * Polish some code, such as variable names.
      
      * add docstrings for APIs.
      
      * update the var creation details of _create_master_weight.
      
      * do not modify code related to imperative momentum updating.
      
      * Fix the error in the test_dist_sparse_tensor_load_momentum UT.
      
      * add a unit test for multi-precision fp16 training.
      
      * add more unit tests for CI.
      
      * Use lower threshold values for allclose comparison in the test_multi_precision_fp16_train UT.
      
      * For CI Coverage Checking.
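
      The entry above targets static-graph training, but the core idea it adds is the multi_precision option on the Momentum optimizer: when the parameters are fp16, the optimizer keeps an fp32 master copy of each weight, applies the update there, and casts the result back to fp16. The dygraph sketch below only illustrates that flag; it is not the PR's static-graph unit test, the shapes and hyper-parameters are placeholders, and it assumes a CUDA device.

      ```python
      # Minimal sketch: Momentum with fp32 master weights for fp16 params.
      import paddle

      # Layer.to(dtype=...) casts the parameters; available in recent Paddle releases.
      model = paddle.nn.Linear(4, 4).to(dtype="float16")

      opt = paddle.optimizer.Momentum(
          learning_rate=0.01,
          momentum=0.9,
          parameters=model.parameters(),
          multi_precision=True,      # keep and update fp32 master weights
      )

      x = paddle.rand([2, 4]).astype("float16")
      loss = model(x).mean()
      loss.backward()
      opt.step()                     # the update runs on the fp32 masters
      opt.clear_grad()
      ```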
  15. 01 Dec 2020, 1 commit
  16. 23 Nov 2020, 1 commit