- 23 Sep 2020 (1 commit)
  Committed by Zhang Ting:
  * add fused_bn_add_relu op

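For context, the fused_bn_add_relu op collapses the batch_norm + add + relu pattern (common on ResNet-style shortcut branches) into a single kernel. Below is a minimal unfused reference of that pattern, written with the Paddle 2.x imperative API purely for illustration; shapes and variable names are made up, and this is not the fused kernel itself.

```python
# Unfused reference for the pattern that fused_bn_add_relu collapses into one
# kernel: out = relu(batch_norm(x) + z). Illustrative sketch only; the fused op
# performs these steps in a single pass instead of three separate ops.
import paddle
import paddle.nn.functional as F

x = paddle.randn([8, 32, 16, 16])   # e.g. a conv branch output
z = paddle.randn([8, 32, 16, 16])   # shortcut branch to be added
bn = paddle.nn.BatchNorm2D(32)

out_unfused = F.relu(bn(x) + z)      # three ops, three passes over memory
```
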
- 14 Sep 2020 (1 commit)
  Committed by Zhen Wang:
  Update amp_check_finite_and_scale_op and add an update_loss_scaling op for static graph amp training. (#26240)
  * update amp_check_finite_and_scale_op for static_amp.
  * use amp_check_finite_and_scale in static graph amp.
  * update grads to zero when grads contain infinite values (as in the amp_check_finite_and_scale op).
  * add update_loss_scaling op in cpp.
  * add update_loss_scaling_op unit test.
  * update the doc of the check_finite_and_unscale op.
  * update the gradient-updating process to skip the update if the gradients have infinite values.
  * update the way to zero grads.
  * update test_update_loss_scaling_op.py.
  * add log info when finding infinite grads.
  * add the unit test for the UpdateLossScaling layer.

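The update_loss_scaling behaviour this commit describes follows the usual dynamic loss-scaling rule: zero the gradients and shrink the scale when an inf/nan is found, and grow the scale again after a long run of finite steps. A plain-Python sketch of that rule follows; the function name, thresholds, and return convention are illustrative, not the Paddle op's interface.

```python
import numpy as np

def update_loss_scaling(grads, scale, good_steps,
                        incr_every_n=1000, incr_ratio=2.0, decr_ratio=0.5):
    """Plain-Python sketch of dynamic loss scaling (not the Paddle op itself)."""
    found_inf = any(not np.all(np.isfinite(g)) for g in grads)
    if found_inf:
        # Skip the parameter update, zero the bad gradients, shrink the scale.
        for g in grads:
            g[...] = 0.0
        return scale * decr_ratio, 0, True
    good_steps += 1
    if good_steps >= incr_every_n:
        # A long run of finite gradients: it is safe to try a larger scale.
        return scale * incr_ratio, 0, False
    return scale, good_steps, False

# Example: an infinite gradient halves the scale and flags the step as skipped.
scale, steps, skipped = update_loss_scaling(
    [np.array([1.0, np.inf])], scale=2.0**15, good_steps=10)
print(scale, skipped)   # 16384.0 True
```
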
- 03 Sep 2020 (1 commit)
  Committed by Zhen Wang

- 15 Apr 2020 (1 commit)
  Committed by mapingshuo:
  * allow amp and recompute to work together

- 26 Nov 2019 (1 commit)
  Committed by Zhen Wang:
  * fix some typos in AMP. test=develop
  * delete useless code. test=develop

- 30 Oct 2019 (1 commit)
  Committed by gongweibao:
  * add custom black varname test=develop
  * fix dtype test=develop
  * fix num test=develop
  * fix ut test=develop
  * fix coverage test=develop
  * fix blackvar names test=develop

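A per-variable black list, as added here, forces any op that touches a listed variable name to stay in FP32 when AMP rewrites the graph. A small sketch of that check follows; the helper name and arguments are hypothetical and do not reflect the actual pass.

```python
def keep_op_in_fp32(op_inputs, op_outputs, black_varnames):
    """Sketch of a per-variable black-list check used when deciding op
    precision under AMP: if any input or output variable name is
    black-listed, the op is kept in FP32 instead of being cast to FP16."""
    names = set(op_inputs) | set(op_outputs)
    return bool(names & set(black_varnames))

# Hypothetical usage: force every op touching 'loss' to run in FP32.
print(keep_op_in_fp32(["cls_logits", "label"], ["loss"], {"loss"}))  # True
```
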
- 19 Sep 2019 (1 commit)
  Committed by Jie Fang:
  Optimize amp for multi-GPU to enable FP16 gradient transfer across GPUs.

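The point of transferring FP16 gradients between GPUs is to halve the communication volume of the allreduce; accumulation and the weight update can still happen in FP32 afterwards. A rough numpy sketch of the idea, with the collective stubbed out (fake_allreduce is a placeholder, not a real API):

```python
import numpy as np

def fake_allreduce(buf):
    return buf  # stand-in for the real cross-GPU allreduce

grad_fp32 = np.random.randn(1 << 20).astype(np.float32)
send_buf = grad_fp32.astype(np.float16)       # 2 bytes per element instead of 4
recv_buf = fake_allreduce(send_buf)           # half the bytes cross the wire
grad_fp32 = recv_buf.astype(np.float32)       # accumulate and apply in FP32
print(send_buf.nbytes, "bytes sent vs", grad_fp32.nbytes, "in FP32")
```
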
- 06 Sep 2019 (1 commit)
  Committed by Jie Fang:
  Init new amp, optimize inserting cast ops for batchnorm.

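Under AMP, batch norm is usually kept in FP32 for stable mean/variance statistics, so the pass inserts cast ops around it. The following is a sketch of the resulting cast pattern in eager-style Paddle code, illustrative only and not the graph pass itself.

```python
import paddle

# Cast pattern AMP inserts around batch_norm: the surrounding network runs in
# FP16 while batch_norm itself computes in FP32. Shapes are arbitrary.
bn = paddle.nn.BatchNorm2D(16)

x_fp16 = paddle.randn([4, 16, 8, 8]).astype("float16")
x_fp32 = paddle.cast(x_fp16, "float32")    # cast in
y_fp32 = bn(x_fp32)                        # FP32 batch norm
y_fp16 = paddle.cast(y_fp32, "float16")    # cast back out for the FP16 graph
```
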
- 03 Sep 2019 (1 commit)
  Committed by gongweibao:
  Change backward_guard to optimize_guard to maximize the allreduce overlap.

- 28 Jun 2019 (1 commit)
  Committed by Jie Fang:
  test=develop

- 25 Jun 2019 (1 commit)
  Committed by Jie Fang:
  test=develop

- 16 May 2019 (1 commit)
  Committed by Jie Fang:
  * init auto loss scaling test=develop
  * change API.spec
  * change if/else to switch and use reduce_sum to optimize checking isfinite test=develop
  * remove redundant code test=develop

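The reduce_sum trick mentioned above checks gradient finiteness with one reduction per tensor: any inf or nan propagates into the sum, so a single scalar test replaces an element-wise branch. A numpy sketch of the check (the function name is illustrative):

```python
import numpy as np

def grads_are_finite(grads):
    """Sketch of the reduce_sum trick: summing each gradient propagates any
    inf/nan into a single scalar, so one cheap check per tensor suffices."""
    total = sum(np.sum(g, dtype=np.float32) for g in grads)
    return bool(np.isfinite(total))

print(grads_are_finite([np.ones(4), np.array([1.0, np.inf])]))  # False
```
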
- 25 Apr 2019 (1 commit)
  Committed by Yibing Liu:
  * Init mixed precision training interface
  * Add fp16 test script test=develop
  * All initializers support float16 test=develop
  * Code cleanup & add more code annotations test=develop
  * Update API spec test=develop
  * Add usage example in doc test=develop

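At its core, a mixed precision training interface keeps FP32 master weights, runs forward/backward in FP16 on a scaled loss, and unscales the gradients before the FP32 update. Below is a minimal numpy sketch of that loop under a toy quadratic loss; fp16_backward and all constants are hypothetical, and this is not Paddle's actual decorate interface.

```python
import numpy as np

# Minimal sketch of the FP16 training pattern behind a mixed precision
# interface: FP32 master weights, FP16 backward on a scaled loss, FP32 update.
master_w = np.ones(4, dtype=np.float32)     # FP32 master copy of the weights
loss_scale, lr = 1024.0, 0.1

def fp16_backward(w_fp16, scale):
    # Gradient of the scaled toy loss scale * sum(w**2), computed in FP16.
    return (scale * 2.0 * w_fp16).astype(np.float16)

for _ in range(10):
    w_fp16 = master_w.astype(np.float16)                      # cast weights down
    scaled_grad = fp16_backward(w_fp16, loss_scale)           # FP16 grads of scaled loss
    grad_fp32 = scaled_grad.astype(np.float32) / loss_scale   # unscale in FP32
    master_w -= lr * grad_fp32                                # FP32 weight update
```
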
