1. 29 Oct 2020 · 2 commits
  2. 28 Oct 2020 · 1 commit
  3. 27 Oct 2020 · 2 commits
    • add Fuse bn add act pass (#28196) · fdc06f21
      Committed by Zhang Ting
      * add fuse_bn_add_act pass
      fdc06f21
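      A minimal sketch of how a fusion pass like this is typically switched on for static-graph training. The flag name fuse_bn_add_act_ops is an assumption inferred from the pass name, and the CompiledProgram wiring is only indicated in comments:
      ```python
      import paddle
      import paddle.static as static

      paddle.enable_static()

      # Assumed flag: fuse batch_norm + elementwise_add + activation into one op.
      build_strategy = static.BuildStrategy()
      build_strategy.fuse_bn_add_act_ops = True

      # The strategy would then be handed to the compiled program used for training:
      # compiled_prog = static.CompiledProgram(
      #     static.default_main_program()).with_data_parallel(
      #         loss_name=loss.name, build_strategy=build_strategy)
      ```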
    • Enrich the python error types of paddle & polish format (#28124) · 813b2ade
      Committed by Chen Weihang
      * add multiple exception types
      
      * define all exception & polish compile pystack
      
      * mapping paddle error to python exception
      
      * polish static mode error format
      
      * fix failed unittests
      
      * fix dytostatic test_error
      
      * fix check_nan_inf failed
      
      * add unittest for coverage
      
      * revert some code try to solve compile error
      
      * refactor enforce & error change
      
      * polish code & add unittest
      813b2ade
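      A brief sketch of what the enriched error mapping means for user code: a Paddle enforce failure now surfaces as a standard Python exception type (for example ValueError for invalid-argument errors) instead of a single generic error, so it can be caught idiomatically. The op and the exception type below are illustrative assumptions:
      ```python
      import paddle

      paddle.disable_static()

      try:
          # Illustrative misuse: reshaping 6 elements to shape [7] should fail and,
          # after this change, raise a built-in Python exception (assumed ValueError here).
          x = paddle.ones([2, 3])
          y = paddle.reshape(x, [7])
      except ValueError as e:
          print("caught ValueError from Paddle:", e)
      ```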
  4. 26 Oct 2020 · 1 commit
  5. 21 Oct 2020 · 1 commit
  6. 16 Oct 2020 · 1 commit
  7. 15 Oct 2020 · 2 commits
  8. 14 Oct 2020 · 3 commits
    • Implement the function of OutScaleForTraining/OutScaleForInference in dygraph (#26601) · 6bbb6e7f
      Committed by guofei
      * Implement the function of OutScaleForTraining/OutScaleForInference in dygraph
      
      test=develop
      6bbb6e7f
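      Conceptually, the out-scale pass records a moving-average abs-max of each quantized layer's output during training so that inference can reuse the recorded scales. A standalone illustration of that idea (not the actual dygraph quantization API):
      ```python
      import paddle

      class OutScaleRecorder(paddle.nn.Layer):
          """Illustrative helper: track a moving-average abs-max of a layer's output."""
          def __init__(self, layer, moving_rate=0.9):
              super().__init__()
              self.layer = layer
              self.moving_rate = moving_rate
              self.out_scale = None

          def forward(self, x):
              out = self.layer(x)
              cur = paddle.max(paddle.abs(out)).numpy().item()
              if self.out_scale is None:
                  self.out_scale = cur
              else:
                  self.out_scale = (self.moving_rate * self.out_scale
                                    + (1.0 - self.moving_rate) * cur)
              return out

      recorder = OutScaleRecorder(paddle.nn.ReLU())
      _ = recorder(paddle.rand([4, 16]))
      print(recorder.out_scale)
      ```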
    • Remove and reorganize the alias of APIs (#27717) · d05058d2
      Committed by chentianyu03
      * modify cond while_loop to paddle.static.nn.cond
      
      * modify crop_tensor to paddle.crop
      
      * modify Variable to paddle.static.Variable
      
      * remove nn.beam_search, nn.beam_search_decode, nn.gather_tree
      
      * remove bpr_loss, center_loss, rank_loss, smooth_l1, teacher_student_sigmoid_loss, edit_distance, sampled_softmax_with_cross_entropy in nn.functional
      
      * remove apis in nn.functional.learn_rate.py
      
      * remove pool2d, pool3d, adaptive_pool2d, adaptive_pool3d in nn.functional
      
      * remove apis in nn.functional.vision
      
      * remove erf, soft_relu in nn.functional.activation
      
      * remove apis in nn.functional.extension
      
      * remove nn.functional.rnn
      
      * remove hash from nn.functional.lod
      
      * remove row_conv from nn.functional.extension
      
      * remove one_hot, pad2d, pad_constant_like from nn.functional.common
      
      * remove nn.gather_tree, nn.BilinearTensorProduct, nn.Pool2D, nn.Pad2D
      
      * remove apis from optimizer.__init__
      
      * remove tensor.creation.fill_constant
      
      * remove elementwise_mul in nn.functional.common and modify to paddle.multiply
      
      * remove tensor.stat.reduce_mean
      
      * remove reduce_all, reduce_any in tensor.logic
      
      * remove apis in tensor.math
      
      * remove apis in tensor.__init__
      
      * remove has_inf, has_nan in tensor.search
      
      * remove apis in framework.__init__
      
      * remove apis in paddle.__init__
      
      * remove apis in nn.functional.__init__
      
      * modify removed alias apis to raw api in doc and unittests
      
      * fix remove grid_sample bug
      
      * modify removed alias apis to raw api in doc and unittests
      
      * modify removed alias apis to raw api in doc and unittests
      
      * modify removed alias apis to raw api in doc and unittests
      
      * modify removed alias apis to raw api in doc and unittests
      
      * modify removed alias apis to raw api in doc and unittests
      
      * modify removed alias apis to raw api in doc and unittests
      
      * delete alias api relations in doc
      
      * reserve paddle.compat, paddle.sysconfig
      
      * remove unittest for paddle.reduce_all, paddle.reduce_any
      
      * modify removed alias apis to raw api in doc and unittests
      
      * recover paddle.save and paddle.load
      
      * resolve conflicts
      
      * fix sample code missing paddle.enable_static() bug
      
      * fix sample code missing paddle.enable_static() bug
      
      * fix to_string sample code error
      d05058d2
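      A short before/after sketch of what the alias cleanup means in practice: former root-level or fluid-style aliases give way to the canonical namespaces named in the commits above, e.g. paddle.multiply for element-wise multiplication and paddle.static.nn.cond for static-graph control flow. The snippet is illustrative and not part of the PR:
      ```python
      import paddle

      # elementwise_mul alias removed from nn.functional; use paddle.multiply instead.
      x = paddle.to_tensor([1.0, 2.0, 3.0])
      y = paddle.to_tensor([4.0, 5.0, 6.0])
      z = paddle.multiply(x, y)  # [4., 10., 18.]

      # Static-graph control flow now lives under paddle.static.nn, e.g.:
      # paddle.enable_static()
      # out = paddle.static.nn.cond(pred, true_fn, false_fn)
      ```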
    • Support setting xpu place in dygraph mode (#27909) · 9a2a4b5f
      Committed by Leo Chen
      * support setting xpu place
      
      * add ut, test=kunlun
      9a2a4b5f
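      A minimal sketch of selecting an XPU place for dygraph execution. It assumes a PaddlePaddle build with XPU (Kunlun) support and that paddle.set_device accepts an 'xpu:N' device string; the device index is illustrative:
      ```python
      import paddle

      paddle.set_device('xpu:0')   # assumed device string; paddle.XPUPlace(0) is the place object
      x = paddle.ones([2, 3])      # created on the XPU place in dygraph mode
      print(x.place)
      ```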
  9. 13 Oct 2020 · 2 commits
  10. 12 Oct 2020 · 3 commits
  11. 10 Oct 2020 · 1 commit
  12. 30 Sep 2020 · 1 commit
  13. 29 Sep 2020 · 3 commits
  14. 28 Sep 2020 · 5 commits
  15. 27 Sep 2020 · 1 commit
    • add support to float64 input of warpctc op. (#27399) · 1501a80f
      Committed by Li Fuchen
      * add float64 input to ctc_loss
      
      * modified error message of warpctc
      
      * update repo and tag of warpctc
      
      * add test for warpctc with float64 input
      
      * modified warpctc.cmake to make sure it always builds
      
      * resolved sample code bug of warpctc
      
      * add core.ops in warpctc dygraph
      
      * fix a bug of test
      1501a80f
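      A hedged sketch of exercising the new float64 path through the high-level ctc_loss API that wraps warpctc; shapes, lengths, and label values are illustrative:
      ```python
      import paddle
      import paddle.nn.functional as F

      # max_logit_length=5, batch=2, num_classes+1=4 (index 0 is the blank label)
      logits = paddle.rand([5, 2, 4], dtype='float64')           # float64 input now supported
      labels = paddle.to_tensor([[1, 2], [2, 3]], dtype='int32')
      input_lengths = paddle.to_tensor([5, 5], dtype='int64')
      label_lengths = paddle.to_tensor([2, 2], dtype='int64')

      loss = F.ctc_loss(logits, labels, input_lengths, label_lengths,
                        blank=0, reduction='mean')
      print(loss)
      ```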
  16. 26 Sep 2020 · 1 commit
  17. 23 Sep 2020 · 1 commit
  18. 21 Sep 2020 · 2 commits
    • [Feature] Enhance inplace addto strategy for gradient accumulation in static graph (#27112) · aba759ba
      Committed by Leo Chen
      * support using add instead of sum to do gradient accumulation
      
      * add inplace addto pass
      
      * add grad_add op and inplace addto pass
      
      * remove debug code
      
      * code refine
      
      * fix bug when several sum ops are inserted at the same op_idx
      
      * fix Flags type
      
      * add addto attribute for conv3d
      
      * fix ut
      
      * code clean
      
      * fix type
      aba759ba
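      A minimal sketch of turning the inplace addto accumulation on for static-graph training, assuming it is exposed through BuildStrategy.enable_addto (the flag name is an assumption based on the pass described above):
      ```python
      import paddle
      import paddle.static as static

      paddle.enable_static()

      build_strategy = static.BuildStrategy()
      # Assumed flag: accumulate gradients with in-place add (grad_add) instead of a sum op.
      build_strategy.enable_addto = True

      # compiled_prog = static.CompiledProgram(
      #     static.default_main_program()).with_data_parallel(
      #         loss_name=loss.name, build_strategy=build_strategy)
      ```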
    • Quant op dev (#25932) · 02606d45
      Committed by huangxu96
      * Finished ChannelWiseQuantDequantAbsMaxOp and passed unittests.
      
      * Finished channel-wise quantize strategy in imperative quantization.
      
      * Added CUDA code of ChannelWiseQuantDequantMaxAbsOp
      
      * Add quant_axis for channel_wise quant.
      
      * fixed a bug in unittests, which would not trigger the axis = 1 case and could not meet the coverage rate requirement.
      
      * Added some assert information and fixed some coding style mistakes.
      02606d45
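      Channel-wise abs-max quantization computes one scale per slice along quant_axis instead of a single tensor-wide scale. A small standalone sketch of the quant-dequant idea (not the op's actual kernel):
      ```python
      import numpy as np

      def channel_wise_quant_dequant_abs_max(x, quant_axis=0, bits=8):
          """Fake-quantize x per channel along quant_axis with abs-max scaling."""
          bnt = (1 << (bits - 1)) - 1                       # e.g. 127 for 8 bits
          reduce_axes = tuple(i for i in range(x.ndim) if i != quant_axis)
          scales = np.abs(x).max(axis=reduce_axes, keepdims=True)
          scales = np.maximum(scales, 1e-8)                 # guard against all-zero channels
          q = np.round(x / scales * bnt)                    # quantize to integers
          return q * scales / bnt                           # dequantize back to float

      w = np.random.randn(4, 3, 3, 3).astype('float32')     # conv weight, quant_axis=0
      w_qdq = channel_wise_quant_dequant_abs_max(w, quant_axis=0)
      ```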
  19. 15 Sep 2020 · 1 commit
  20. 14 Sep 2020 · 2 commits
    • Add bfloat16 passes (#26999) · 1483ea23
      Committed by joanna.wozna.intel
      1483ea23
    • Update amp_check_finite_and_scale_op and add an updating_loss_scaling op for... · d708b210
      Committed by Zhen Wang
      Update amp_check_finite_and_scale_op and add an updating_loss_scaling op for static graph amp training. (#26240)
      
      * update amp_check_finite_and_scale_op for static_amp.
      
      * use amp_check_finite_and_scale in static graph amp.
      
      * update grads to zero when grads contain infinite values (as for the amp_check_finite_and_scale op).
      
      * add update_loss_scaling op in cpp.
      
      * add update_loss_scaling_op unit test.
      
      * update the doc of the check_finite_and_unscale op
      
      * Update the process of skipping the gradient update if the gradients have infinite values.
      
      * update the way to zero grads.
      
      * update test_update_loss_scaling_op.py
      
      * add log info when finding infinite grads.
      
      * add the unit test for UpdateLossScaling Layer.
      d708b210
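      A standalone sketch of the dynamic loss-scaling update rule these ops implement for static-graph AMP training: when any gradient is inf/nan the parameter update is skipped and the scale shrinks; after enough consecutive good steps the scale grows. Parameter names mirror common AMP conventions and are not necessarily the op's exact attribute names:
      ```python
      def update_loss_scaling(found_inf, loss_scaling, good_steps, bad_steps,
                              incr_every_n_steps=1000, decr_every_n_nan_or_inf=2,
                              incr_ratio=2.0, decr_ratio=0.5):
          """Return (loss_scaling, good_steps, bad_steps, skip_update)."""
          if found_inf:
              good_steps, bad_steps = 0, bad_steps + 1
              if bad_steps >= decr_every_n_nan_or_inf:
                  loss_scaling, bad_steps = max(loss_scaling * decr_ratio, 1.0), 0
              return loss_scaling, good_steps, bad_steps, True   # skip this update
          good_steps, bad_steps = good_steps + 1, 0
          if good_steps >= incr_every_n_steps:
              loss_scaling, good_steps = loss_scaling * incr_ratio, 0
          return loss_scaling, good_steps, bad_steps, False
      ```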
  21. 08 Sep 2020 · 1 commit
  22. 07 Sep 2020 · 1 commit
  23. 04 Sep 2020 · 1 commit
  24. 03 Sep 2020 · 1 commit