- 15 March 2022 (1 commit)

  Committed by furnace
  * [NPU] add AMP O1 support
  * [NPU] fix NOTE and warnings

- 07 March 2022 (1 commit)

  Committed by zhangbo9674
  * refine amp.decorate code example
  * refine code

- 28 February 2022 (1 commit)

  Committed by zhangbo9674
  * refine bf16 amp-o1 logic
  * refine amp GLOG
  * refine unittest
  * refine unittest

- 27 February 2022 (1 commit)

  Committed by Leo Chen
  * fix PyLayer problem with amp
  * add unit test
  * refine code

- 23 February 2022 (1 commit)

  Committed by Leo Chen
  * fix 'is with a literal'
  * fix typo

- 22 February 2022 (1 commit)

  Committed by Leo Chen

- 18 February 2022 (1 commit)

  Committed by zhangbo9674
  * support dtype param for auto_cast
  * add amp_dtype for tracer
  * add unsupported bf16 list
  * support bf16 amp for O2
  * refine python interface for bfloat16
  * refine code
  * refine code
  * refine unittest
  * refine code
  * refine code
  * add bf16 O1
  * refine code per comments
  * add gradient accumulator
  * add recompute

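The dtype parameter described above can be pictured with a small, purely hypothetical sketch in plain Python (this is not Paddle's real tracer): an auto_cast-style context records the AMP dtype, and ops on an unsupported-bf16 list fall back to fp32. All names and the op list here are illustrative.

```python
# Hypothetical sketch of a dtype-aware auto_cast context (not Paddle's
# actual implementation); op names in the list are illustrative.
from contextlib import contextmanager

_amp_state = {"enable": False, "dtype": "float32"}

# Ops assumed unsupported in bfloat16 for this sketch.
UNSUPPORTED_BF16_OPS = {"lookup_table", "scatter"}

@contextmanager
def auto_cast(enable=True, dtype="float16"):
    """Enable AMP with a configurable low-precision dtype."""
    old = dict(_amp_state)
    _amp_state.update(enable=enable, dtype=dtype)
    try:
        yield
    finally:
        _amp_state.update(old)

def op_compute_dtype(op_name):
    """Pick the dtype an op would run in under the current AMP state."""
    if not _amp_state["enable"]:
        return "float32"
    d = _amp_state["dtype"]
    if d == "bfloat16" and op_name in UNSUPPORTED_BF16_OPS:
        return "float32"
    return d

with auto_cast(dtype="bfloat16"):
    print(op_compute_dtype("matmul_v2"))   # bfloat16
    print(op_compute_dtype("scatter"))     # float32 (unsupported in bf16)
print(op_compute_dtype("matmul_v2"))       # float32 (outside the context)
```

The point of threading an `amp_dtype` through the tracer is exactly this kind of per-op decision: the same context machinery serves fp16 and bf16, differing only in which fallback list applies.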
- 11 January 2022 (1 commit)

  Committed by zhangbo9674
  * check amp.decorate and DataParallel
  * refine coverage
  * fix layer dtype
  * refine code

- 29 December 2021 (1 commit)

  Committed by zhangbo9674
  * add bn_1d_2d_3d for fp16 decorate
  * add unittest

- 28 December 2021 (1 commit)

  Committed by Li Min
  * Fix scatter_op fp16 perf problem.
  * Add scatter into black list.
  * Add scatter into black list for dygraph.

- 27 December 2021 (1 commit)

  Committed by zhangbo9674
  * fix bug
  * refine code
  * refine code
  * refine code

- 15 December 2021 (1 commit)

  Committed by zhangbo9674

- 02 December 2021 (1 commit)

  Committed by zhangbo9674

- 29 November 2021 (1 commit)

  Committed by zhangbo9674
  * allow optimizers set to None in amp.decorate
  * refine unittest
  * add unittest and refine example code
  * refine unittest

- 24 November 2021 (1 commit)

  Committed by 0x45f
  * run dy2stat pure fp16 in Linear model
  * do not use self._pure_fp16_inputs
  * add test and fix Adam error in dy2stat pure fp16 training
  * use paddle.optimizer.Adam
  * run test on GPU
  * change test time for CI
  * enlarge atol for test_resnet_pure_fp16
  * refine code and enlarge atol
  * make custom_white_list and custom_black_list take effect for AMP and pure fp16
  * check tracer is not None
  * use default atol
  * change filter_size
  * change atol and add some NOTEs

- 09 November 2021 (1 commit)

  Committed by zhangbo9674
  * refine layer `to`
  * delete comment
  * refine logic
  * refine code
  * refine pure_fp16_init
  * refine comment

- 22 October 2021 (1 commit)

  Committed by Leo Chen
  * [hapi] support dygraph amp O2
  * fix problem of static pure fp16 in hapi
  * fix bug
  * fix format
  * fix ut
  * follow comments
  * update ut
  * update amp save/load
  * fix ut
  * refine code format

- 13 October 2021 (2 commits)

  Committed by zhangbo9674
  * add attr is_distributed
  * refine code
  * refine black/white list for pure fp16

  Committed by Leo Chen
  * refine amp level
  * fix typo
  * update tracer._amp_level

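The amp-level and black/white-list refinements above rest on a simple rule that can be sketched hypothetically (the lists and logic here are illustrative, not Paddle's actual ones): under O1 only white-listed ops run in fp16 and everything else stays fp32; under O2 everything except the black list runs in fp16.

```python
# Hypothetical sketch of per-op dtype selection by AMP level.
# The op names in these lists are illustrative, not Paddle's real lists.
WHITE_LIST = {"matmul_v2", "conv2d"}
BLACK_LIST = {"reduce_sum", "softmax_with_cross_entropy"}

def amp_op_dtype(op_name, level):
    """Decide the compute dtype for an op under a given AMP level."""
    if level == "O0":
        return "float32"            # AMP disabled
    if op_name in BLACK_LIST:
        return "float32"            # numerically risky ops stay fp32
    if level == "O1":
        return "float16" if op_name in WHITE_LIST else "float32"
    if level == "O2":
        return "float16"            # pure fp16: everything not black-listed
    raise ValueError(f"unknown AMP level: {level}")

print(amp_op_dtype("matmul_v2", "O1"))   # float16
print(amp_op_dtype("relu", "O1"))        # float32
print(amp_op_dtype("relu", "O2"))        # float16
print(amp_op_dtype("reduce_sum", "O2"))  # float32
```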
- 22 September 2021 (2 commits)

  Committed by zhangbo9674
  * split minimize() into step() + update()
  * add unscale and step for grad_scaler
  * add unittest
  * refine code in minimize
  * delete step in loss_scaler
  * fix example bug
  * refine comment
  * refine unittest
  * add unittest

  Committed by zhangbo9674

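The minimize() split described above follows the usual scaler pattern: scale the loss, unscale the gradients, then step the optimizer only if no inf/nan appeared. A toy stand-in in plain Python (not Paddle's GradScaler; all names are illustrative):

```python
# Toy sketch of the scale / unscale / conditional-step pattern.
import math

class TinyScaler:
    def __init__(self, init_scale=2.0 ** 15):
        self.scale = init_scale

    def scale_loss(self, loss):
        return loss * self.scale

    def unscale(self, grads):
        return [g / self.scale for g in grads]

    def found_inf(self, grads):
        return any(math.isinf(g) or math.isnan(g) for g in grads)

def step(scaler, params, scaled_grads, lr=0.1):
    """SGD step that skips the update when unscaled grads overflowed."""
    grads = scaler.unscale(scaled_grads)
    if scaler.found_inf(grads):
        return params                       # skip: grads overflowed
    return [p - lr * g for p, g in zip(params, grads)]
```

Splitting unscale out of minimize() lets callers inspect or clip the unscaled gradients before the update, which a fused minimize() cannot offer.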
- 17 September 2021 (1 commit)

  Committed by zhangbo9674
  * add major pure fp16 functionality in auto_cast & tracer
  * support master weight in dygraph for pure fp16
  * check mixed fp16/fp32 dtypes for the check_finite_and_unscale op
  * change pure fp16 function name
  * fix some bugs in auto_cast
  * refine auto_cast interface logic
  * add param _casted_by_pure_fp16 for class Layer
  * support state_dict hook to save the model in a user-appointed dtype in pure_fp16_decorator
  * refine pure_fp16_decorator as a decorator
  * add unittest
  * add comment
  * add comment
  * support recompute
  * add comments for auto_cast and decorator
  * support to_static_state_dict for paddle.jit.save
  * remove the limit on the numbers of models and optimizers
  * add lookup_table to black_list
  * fix momentum and layer state_dict
  * fix bug in layer state_dict
  * fix bug in layer state_dict_helper
  * refine unittest
  * refine test_momentum_op
  * refine interface and some code
  * refine amp_decorator interface
  * refine pure fp16 interface
  * refine master weight interface

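The master-weight support mentioned above exists because fp16 cannot absorb small updates: near 1.0 its spacing is about 2^-10, so an update smaller than half that spacing rounds away entirely. A toy illustration with plain floats standing in for tensors and a crude round-to-fp16 helper (everything here is illustrative, not Paddle's implementation):

```python
# Why pure-fp16 training keeps an fp32 master copy of each weight.
ULP = 2 ** -10  # float16 spacing near 1.0

def to_fp16(x):
    """Crude round-to-nearest-fp16 for values near 1.0 (sketch only)."""
    return round(x / ULP) * ULP

lr_times_grad = 1e-4  # update smaller than half an fp16 ulp near 1.0

# Updating the fp16 weight directly: every step rounds back to 1.0.
w16 = 1.0
for _ in range(100):
    w16 = to_fp16(w16 - lr_times_grad)

# Master-weight scheme: accumulate updates in fp32 instead.
w32 = 1.0
for _ in range(100):
    w32 -= lr_times_grad

print(w16)            # 1.0  (all 100 updates were lost)
print(round(w32, 4))  # 0.99 (updates survive in the fp32 master copy)
```

In the real scheme the fp16 working copy is refreshed from the fp32 master after each optimizer step, so the forward pass still runs in fp16.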
- 10 September 2021 (1 commit)

  Committed by ShenLiang

- 16 August 2021 (1 commit)

  Committed by Leo Chen
  * dygraph amp support param_group
  * remove unused code
  * fix doc

- 11 August 2021 (1 commit)

  Committed by zhangbo9674
  * add state_dict, load_state_dict, and a unittest for class GradScaler
  * refine unittest for coverage of load_state_dict
  * refine comments of code-block
  * refine some comments
  * refine state_dict code and unittest
  * add `# require gpu, xpu` for GradScaler get/set example code
  * add `# require gpu, xpu` for GradScaler get/set example code
  * refine example code
  * refine unittest for state_dict
  * refine unittest for state_dict
  * fix bug of DataLoader in TestGradScalerStateDict
  * add flag FLAGS_cudnn_deterministic

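The motivation for a scaler state_dict/load_state_dict pair is that resumable training must restore the current loss scale and its counters, not just the model weights. A minimal hypothetical sketch (field names are illustrative, not Paddle's actual state keys):

```python
# Hypothetical sketch of checkpointing a loss scaler's state.
class SketchScaler:
    def __init__(self, scale=65536.0, incr_every_n=2000, good_steps=0):
        self.scale = scale              # current loss scale
        self.incr_every_n = incr_every_n
        self.good_steps = good_steps    # consecutive overflow-free steps

    def state_dict(self):
        return {"scale": self.scale,
                "incr_every_n": self.incr_every_n,
                "good_steps": self.good_steps}

    def load_state_dict(self, state):
        self.scale = state["scale"]
        self.incr_every_n = state["incr_every_n"]
        self.good_steps = state["good_steps"]

a = SketchScaler(scale=1024.0, good_steps=7)
b = SketchScaler()
b.load_state_dict(a.state_dict())   # resume with the same scale/counters
print(b.scale, b.good_steps)        # 1024.0 7
```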
- 05 August 2021 (1 commit)

  Committed by Aurelius84
  * Support Mixed Precision training in @to_static
  * fix block.vars logic
  * fix GPU training loss diff
  * remove unused code

- 15 July 2021 (1 commit)

  Committed by wanghuancoder
  * cache core.ops, test=develop
  * refine, test=develop

- 05 July 2021 (1 commit)

  Committed by jiangcheng
  * make reduce_sum op default to fp32 and add it to the amp black list
  * defaulting reduce_sum to fp32 avoids returning inf when the sum value is larger than 65504

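The 65504 figure in the commit message is the largest finite float16 value, which is why a fp16 reduce_sum overflows so easily: even modest activations overflow after a few dozen terms. A quick stdlib check:

```python
# The largest finite float16 value, per IEEE 754 binary16:
# (2 - 2**-10) * 2**15 = 65504.
FP16_MAX = (2 - 2 ** -10) * 2 ** 15
print(FP16_MAX)              # 65504.0

# A sum of just 66 values around 1000.0 already exceeds the fp16 range,
# so reduce_sum over fp16 inputs overflows to inf easily; computing the
# reduction in fp32 sidesteps this.
total = sum([1000.0] * 66)   # computed in fp64 here, for illustration
print(total > FP16_MAX)      # True
```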
- 01 July 2021 (1 commit)

  Committed by zhangbo9674
  * add get and set for GradScaler
  * refine some API names and comments
  * refine API names and comments
  * refine some comments

- 29 June 2021 (1 commit)

  Committed by taixiurong

- 21 June 2021 (1 commit)

  Committed by cc
  * Combine amp and qat
  * add unit test

- 18 November 2020 (1 commit)

  Committed by Leo Chen
  * add matmul_v2 to amp list
  * support dygraph

- 14 September 2020 (1 commit)

  Committed by Zhen Wang
  Update amp_check_finite_and_scale_op and add an update_loss_scaling op for static graph amp training. (#26240)
  * update amp_check_finite_and_scale_op for static_amp.
  * use amp_check_finite_and_scale in static graph amp.
  * set grads to zero when grads contain infinite values (for the amp_check_finite_and_scale op).
  * add update_loss_scaling op in cpp.
  * add update_loss_scaling_op unit test.
  * update the doc of the check_finite_and_unscale op.
  * update the process of skipping gradient updates when the gradients have infinite values.
  * update the way grads are zeroed.
  * update test_update_loss_scaling_op.py.
  * add log info when infinite grads are found.
  * add the unit test for the UpdateLossScaling layer.

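The updating-loss-scaling rule this commit adds typically works as follows: shrink the scale when infinite gradients are found, and grow it again after N consecutive clean steps. A hypothetical sketch of that rule (parameter names are illustrative, not the op's real attributes):

```python
# Hypothetical sketch of a dynamic loss-scaling update rule.
def update_loss_scaling(scale, good_steps, found_inf,
                        incr_every_n=1000, incr_ratio=2.0, decr_ratio=0.5):
    """Return the new (scale, good_steps) after one training step."""
    if found_inf:
        # Overflow: shrink the scale (floored at 1.0) and reset the streak.
        return max(scale * decr_ratio, 1.0), 0
    good_steps += 1
    if good_steps >= incr_every_n:
        # Long clean streak: grow the scale and restart counting.
        return scale * incr_ratio, 0
    return scale, good_steps

print(update_loss_scaling(1024.0, 0, True))      # (512.0, 0)
print(update_loss_scaling(1024.0, 5, False))     # (1024.0, 6)
print(update_loss_scaling(1024.0, 999, False))   # (2048.0, 0)
```

The asymmetry (halve immediately, double only after many clean steps) keeps the scale just below the overflow threshold most of the time.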
- 13 August 2020 (1 commit)

  Committed by Leo Chen
  * add auto_cast, test=develop
  * add loss scaler, test=develop
  * add comments, test=develop
  * refine code, test=develop
  * refine code, test=develop
  * do not set flags automatically, test=develop
  * fix custom op bug, test=develop
  * add more tests, test=develop
  * refine enable logic, test=develop
  * enable amp test with GPU, test=develop
  * add unittest
  * add test for found_inf
  * follow comments
  * follow comments
  * remove global variable, use singleton
  * add some notes
  * update comments
  * update comments
  * update comments
  * add use_dynamic_loss_scaling argument
  * refine found_inf
  * refine found_inf
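The loss scaler introduced in this commit compensates for fp16's narrow range: tiny gradients underflow to zero unless the loss, and hence every gradient, is first multiplied by a scale factor. A small stdlib illustration (the scale value is illustrative):

```python
# Why loss scaling is needed: the smallest positive float16 is 2**-24,
# and gradients below it flush to zero when stored in fp16.
FP16_MIN_SUBNORMAL = 2.0 ** -24

grad = 2.0 ** -30                 # would underflow to 0.0 in fp16
scaled = grad * 2.0 ** 15         # loss scaling shifts it into range

print(grad < FP16_MIN_SUBNORMAL)      # True: unrepresentable in fp16
print(scaled >= FP16_MIN_SUBNORMAL)   # True: survives after scaling
```

After the backward pass the gradients are divided by the same factor (the unscale step), so the optimizer sees their true magnitudes.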