1. 16 Nov 2022: 2 commits
    • clean fluid elementwise_max (part2): remove API (#48034) · b68e0c47
      Committed by HongyuJia
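      Assuming the public 2.x API paddle.maximum is the intended replacement for the removed fluid elementwise_max wrapper (the commit itself only removes the old API), a minimal sketch:

        import paddle

        x = paddle.to_tensor([1.0, 5.0, 3.0])
        y = paddle.to_tensor([4.0, 2.0, 6.0])
        out = paddle.maximum(x, y)   # element-wise max -> [4., 5., 6.]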
    • [remove fluid] under fleet meta_optimizers (#47864) · a2a97cbb
      Committed by wangzhen38
      * [remove fluid] under fleet meta_optimizers
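      The PR title describes moving the fleet meta-optimizers off the legacy paddle.fluid namespace. The exact call sites are not listed here, so the following is only a hypothetical sketch of the general migration pattern (old fluid entry points replaced by their public paddle.static equivalents):

        import paddle

        # Old, fluid-based access (for comparison only):
        #   import paddle.fluid as fluid
        #   main_prog = fluid.default_main_program()
        # New, public static-graph API:
        main_prog = paddle.static.default_main_program()
        startup_prog = paddle.static.default_startup_program()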
  2. 14 Nov 2022: 2 commits
  3. 10 Nov 2022: 2 commits
  4. 09 Nov 2022: 3 commits
  5. 08 Nov 2022: 2 commits
  6. 07 Nov 2022: 1 commit
  7. 05 Nov 2022: 1 commit
  8. 04 Nov 2022: 1 commit
  9. 03 Nov 2022: 3 commits
  10. 02 Nov 2022: 2 commits
  11. 01 Nov 2022: 4 commits
    • [CodeStyle][E711] use `is`/`is not` for comparison with `None` (#47452) · a35a4a53
      Committed by Nyakku Shigure
      * [CodeStyle][E711] use `is`/`is not` for comparison with `None`
      
      * `self.assertTrue($A is None)` -> `self.assertIsNone($A)`
      
      * `self.assertTrue($A is not None)` -> `self.assertIsNotNone($A)`
      
      * `self.assertFalse($A is None)` -> `self.assertIsNotNone($A)`
      
      * `self.assertEqual($A, None)` -> `self.assertIsNone($A)`
      
      * `self.assertNotEqual($A, None)` -> `self.assertIsNotNone($A)`
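      A minimal illustration of the E711 rewrites listed above (the helper and test class are hypothetical, not code from the PR):

        import unittest

        def maybe_value():
            return None          # hypothetical helper for illustration

        x = maybe_value()
        if x == None:            # E711: flagged by flake8
            pass
        if x is None:            # preferred identity comparison
            pass

        class TestNoneChecks(unittest.TestCase):
            def test_none(self):
                self.assertIsNone(x)             # was: self.assertTrue(x is None)
                self.assertIsNotNone(object())   # was: self.assertFalse(... is None)

        if __name__ == "__main__":
            unittest.main()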
    • [CodeStyle][E712] use `if cond`/`if cond is True` for comparison with `True` (#47464) · 5a2ab683
      Committed by Nyakku Shigure
      * [CodeStyle][E712] use `if cond`/`if cond is True` for comparison with `True`
      
      * revert changes in fluid
      
      * revert unrelated file
      
      * revert changes in norm
      
      * revert changes in auto_parallel_amp
      
      * fix norm and auto_parallel_amp
      
* revert a typo fix, since it was already fixed in #47477
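      A minimal illustration of the E712 rewrites (the function is hypothetical, not code from the PR):

        def run(flag=True):
            if flag == True:     # E712: flagged by flake8
                pass
            if flag:             # preferred truthiness check
                pass
            if flag is True:     # only when identity with the True singleton matters
                pass

        run()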
    • [CodeStyle][py2] remove `six` package (part2) (#47334) · 3592ba8c
      Committed by Nyakku Shigure
      * [CodeStyle][py2] remove `six` package (part2)
      
      * six.ensure_str
      
      * remove unused `import six`
      
      * remove six from BUILTIN_LIKELY_MODULES
      
      * remove six in example code
      
      * remove some decode
      
      * try to fix example code
      
      * fix MockEtcdClient get/get_prefix returns data type
      
      * fix MockEtcdClient get_prefix returns data
      
      * fix MockEtcdClient get returns data
      
      * remove `six` in pypi and conda requirements
      
      * fix MockEtcdClient add_watch_callback/add_watch_prefix_callback returns data type
      
      * refine MockEtcdClient
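      On Python 3, six.ensure_str reduces to a plain isinstance check; a hedged sketch of an equivalent helper (this exact helper is an assumption, not necessarily what the PR inlined):

        def ensure_str(value, encoding="utf-8"):
            # bytes are decoded, str is returned unchanged
            return value.decode(encoding) if isinstance(value, bytes) else value

        assert ensure_str(b"paddle") == "paddle"
        assert ensure_str("paddle") == "paddle"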
    • add missing scale parameter (#47519) · ad251cb5
      Committed by sneaxiy
  12. 28 Oct 2022: 1 commit
  13. 24 Oct 2022: 2 commits
  14. 23 Oct 2022: 1 commit
  15. 21 Oct 2022: 2 commits
  16. 20 Oct 2022: 3 commits
  17. 19 Oct 2022: 3 commits
  18. 18 Oct 2022: 1 commit
  19. 17 Oct 2022: 2 commits
    • Add enable_partial_send_recv switch in pipeline_configs (#46992) · b9a2f29c
      Committed by Ghost Screaming
      * Fix a bug of the reduce_sum op: when input.numel() > INT32_MAX, its result is wrong.
      
      * Support an allow_partial switch, which can be configured in pipeline_configs. If the sent tensors are not the same across different hosts, they shouldn't be sent partially and then concatenated into a whole tensor.
      
      * Change name allow_partial to enable_partial_send_recv.
      
      * Add global variable _enable_partial_send_recv
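      A hedged sketch of how the new switch would be set (the key name comes from this commit; the other keys and values are illustrative pipeline_configs entries):

        import paddle.distributed.fleet as fleet

        strategy = fleet.DistributedStrategy()
        strategy.pipeline = True
        strategy.pipeline_configs = {
            "micro_batch_size": 2,           # illustrative values
            "accumulate_steps": 4,
            # Disable partial send/recv when the tensors exchanged between
            # stages differ across hosts, so they are sent as whole tensors.
            "enable_partial_send_recv": False,
        }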
    • Support BF16 training for sharding (#46846) · 0b39b244
      Committed by Ghost Screaming
      * Fix a bug of the reduce_sum op: when input.numel() > INT32_MAX, its result is wrong.
      
      * support pure bfloat16
      
      * support bf16 linear
      
      * update PR to pass CI
      
      * tiny fix where_grad_kernel.cu
      
      * Support bfloat16 type for reducer and sharding.
      
* Fix some bugs.
      
      * Polish code.
      
* Polish code.
      
      * Add bfloat16 datatype in fill_grad kernels.
Co-authored-by: sneaxiy <sneaxiy@126.com>
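      A minimal, hypothetical sketch of the kind of bf16 compute this PR enables (assumes a backend with bfloat16 kernels; the sharding and reducer integration itself is not shown):

        import paddle

        x = paddle.randn([4, 8]).astype("bfloat16")
        w = paddle.randn([8, 2]).astype("bfloat16")
        y = paddle.matmul(x, w)      # bf16 linear-style matmul
        print(y.dtype)               # paddle.bfloat16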
  20. 13 Oct 2022: 1 commit
  21. 12 Oct 2022: 1 commit