  1. 10 Dec 2021 · 2 commits
  2. 07 Dec 2021 · 1 commit
  3. 01 Dec 2021 · 2 commits
  4. 30 Nov 2021 · 1 commit
  5. 26 Nov 2021 · 1 commit
  6. 04 Nov 2021 · 1 commit
  7. 29 Oct 2021 · 1 commit
  8. 28 Oct 2021 · 1 commit
  9. 27 Oct 2021 · 1 commit
  10. 20 Oct 2021 · 1 commit
  11. 19 Oct 2021 · 1 commit
  12. 18 Oct 2021 · 1 commit
  13. 14 Oct 2021 · 2 commits
  14. 11 Oct 2021 · 1 commit
  15. 22 Sep 2021 · 1 commit
  16. 21 Sep 2021 · 1 commit
    • Reuse OneDNN handler for SGD and SUM for SelectedRows input tensors. (#35510) · 799f3861
      Committed by Adam Osewski
      * Create stateful OneDNNAXPYHandler object.
      
      This makes it possible to call it multiple times without recreating the
      oneDNN primitives every time.
      
      * Prepare SGDOpKernel to reuse its implementation from OneDNN kernel.
      
      * OneDNN SGD kernel.
      
      * Update calls to use the new OneDNNAXPYHandler object API.
      
      * Set up the seed in the proper place.
      
      * Enable the OneDNN kernel only for a single case: dense param and sparse grad (see the sketch at the end of this entry).
      
      * Small refactor.
      
      * Enable oneDNN by op attr or by cmd line flag.
      
      * Use int64_t type for number of elements.
      
      * Support dense param and grad from OneDNN kernel.
      
      * Enable the SGD OneDNN kernel when using the MP BF16 optimizer.
      
      * Force non-copyable/movable OneDNNAXPYHandler.
      
      * Reuse OneDNNAXPYHandler for sparse tensors in the SUM op.
      
      * Fix SFINAE rules.
      
      * Remove event recording inside AXPY.
      
      * Get rid of internal primitive caching.
      
      * Stop using the PP cache mechanism to store memory and primitive objects.
      * Handler object stores and reuses the needed descriptors & primitives.
      
      * Do not derive from MKLDNNHandlerT
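      The bullets above describe an internal C++ refactor; the user-visible case is an SGD update where the parameter is dense and the gradient is a sparse SelectedRows tensor, e.g. from a sparse embedding lookup. Below is a minimal dygraph sketch of that case (an illustration, not part of the commit), assuming a CPU build with oneDNN available; whether the oneDNN SGD kernel is actually selected is an internal dispatch decision.

          import paddle

          # Dense parameter updated from a sparse (SelectedRows) gradient.
          paddle.set_device('cpu')

          embedding = paddle.nn.Embedding(1000, 64, sparse=True)  # sparse=True -> SelectedRows grad
          sgd = paddle.optimizer.SGD(learning_rate=0.1, parameters=embedding.parameters())

          ids = paddle.randint(0, 1000, shape=[32, 10])
          loss = embedding(ids).mean()
          loss.backward()
          sgd.step()          # dense param, sparse grad update path
          sgd.clear_grad()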
  17. 17 Sep 2021 · 1 commit
    • [AMP] Support pure fp16 training mode for dygraph (#35521) · adaeee4d
      Committed by zhangbo9674
      * add the main pure fp16 functionality in auto_cast & tracer (a usage sketch follows at the end of this entry)
      
      * support master weight in dygraph for pure fp16
      
      * check mixed dtypes of fp16 & fp32 for the check_finite_and_unscale op
      
      * change pure fp16 function name
      
      * fix some bugs in auto_cast
      
      * refine auto_cast interface logic
      
      * add param _casted_by_pure_fp16 for class Layer
      
      * support a state_dict hook so the model can be saved in a user-appointed dtype by pure_fp16_decorator
      
      * refine pure_fp16_decorator as a decorator
      
      * add unittest
      
      * add comment
      
      * add comment
      
      * support recompute
      
      * add comment for auto_cast and decorator
      
      * support to_static_state_dict for paddle.jit.save
      
      * remove the limit on the number of models and optimizers
      
      * add lookup_table in black_list
      
      * fix momentum and layer state_dict
      
      * fix bug in layer state_dict
      
      * fix bug in layer state_dict_helper
      
      * refine unittest
      
      * refine test_momentum_op
      
      * refine interface and some code
      
      * refine amp_decorator interface
      
      * refine pure fp16 interface
      
      * refine master weight interface
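      The commit above extends the dygraph AMP stack with a pure fp16 ("O2") training mode. Below is a minimal usage sketch (an illustration, not the PR's own test code), assuming a CUDA device and the public paddle.amp APIs this mode is exposed through (decorate, auto_cast, GradScaler).

          import paddle

          # Pure fp16 ("O2") dygraph training sketch. Assumes a CUDA device.
          paddle.set_device('gpu')

          model = paddle.nn.Linear(128, 10)
          optimizer = paddle.optimizer.Momentum(learning_rate=0.01, momentum=0.9,
                                                parameters=model.parameters())

          # Cast the model to fp16 and keep fp32 master weights in the optimizer.
          model, optimizer = paddle.amp.decorate(models=model, optimizers=optimizer,
                                                 level='O2', master_weight=True)
          scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

          data = paddle.rand([4, 128])
          with paddle.amp.auto_cast(level='O2'):
              loss = model(data).mean()

          scaled = scaler.scale(loss)          # loss scaling avoids fp16 underflow
          scaled.backward()
          scaler.minimize(optimizer, scaled)   # unscale, check finite, then step
          optimizer.clear_grad()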
  18. 15 Sep 2021 · 1 commit
  19. 13 Sep 2021 · 3 commits
  20. 10 Sep 2021 · 2 commits
  21. 09 Sep 2021 · 1 commit
  22. 06 Sep 2021 · 1 commit
  23. 03 Sep 2021 · 1 commit
  24. 01 Sep 2021 · 1 commit
  25. 31 Aug 2021 · 1 commit
  26. 26 Aug 2021 · 1 commit
  27. 24 Aug 2021 · 1 commit
  28. 18 Aug 2021 · 1 commit
  29. 17 Aug 2021 · 1 commit
  30. 16 Aug 2021 · 1 commit
  31. 10 Aug 2021 · 1 commit
  32. 05 Aug 2021 · 1 commit
  33. 30 Jul 2021 · 1 commit
  34. 28 Jul 2021 · 1 commit