1. 09 Apr 2020 (1 commit)
  2. 08 Apr 2020 (3 commits)
  3. 05 Apr 2020 (1 commit)
  4. 04 Apr 2020 (2 commits)
  5. 03 Apr 2020 (1 commit)
    • update linspace, equal operators to API 2.0 (#23274) · a2e10930
      Authored by channings
      * update linspace, equal operators to API 2.0, test=develop
      
      * equal now supports a higher-performance CUDA kernel, test=develop
      
      * update comment of equal&linspace operator, test=develop
      
      * update comment of equal&linspace operator, test=develop
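
      A minimal usage sketch of the 2.0-style calls this PR targets; the paddle.linspace and paddle.equal names and behaviour below are assumed from the released Paddle 2.x packaging rather than taken from this PR's diff:

          import paddle

          # assumed Paddle 2.x API: evenly spaced values and element-wise equality
          x = paddle.linspace(0.0, 1.0, 5)      # [0.0, 0.25, 0.5, 0.75, 1.0]
          y = paddle.linspace(0.0, 1.0, 5)
          same = paddle.equal(x, y)             # element-wise bool tensor, all True
          print(same.numpy())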
  6. 01 Apr 2020 (1 commit)
  7. 09 Mar 2020 (1 commit)
  8. 12 Feb 2020 (1 commit)
    • Add support for dynamic_decode(while) training. (#22231) · 31b54646
      Authored by Guo Sheng
      * Add support for dynamic_decode(while) training. test=develop
      
      * Fix assign_op and tensor_array_read_write_op after solving conflict. test=develop
      
      * Fix test_rnn_decode_api.py. test=develop
      
      * Refine docs for apis in rnn.py. test=develop
      
      * Adjust outputs of dynamic_decode. test=develop
      
      * Remove the force_cpu update in assign_op. test=develop
      
      * Remove the force_cpu update in assign_op. test=develop
      
      * Make RNNCell.get_initial_states support batch_dim_idx argument. test=develop
      
      * Rename _create_array_outof_while as _create_array_out_of_while in rnn.py.
      test=develop
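
      A hedged sketch of the batch_dim_idx argument mentioned above; the GRUCell and get_initial_states names follow the Fluid 1.7-era fluid.layers.rnn module and are assumptions, not code from this PR:

          import paddle.fluid as fluid
          import paddle.fluid.layers as layers

          # inputs laid out as [max_time, batch, feature]; batch_dim_idx=1 tells
          # get_initial_states which axis of the reference tensor is the batch axis
          src = fluid.data(name='src', shape=[None, None, 32], dtype='float32')
          cell = layers.GRUCell(hidden_size=128)
          init_states = cell.get_initial_states(batch_ref=src, batch_dim_idx=1)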
  9. 16 Jan 2020 (1 commit)
  10. 06 Jan 2020 (1 commit)
  11. 24 Dec 2019 (1 commit)
  12. 19 Dec 2019 (1 commit)
  13. 17 Dec 2019 (1 commit)
  14. 06 Dec 2019 (2 commits)
    • Polish op registry codes (#21561) · 0f888836
      Authored by Zeng Jinle
      * polish infer shape registry, test=develop
      
      * modify some operators registry, test=develop
    • Add Much Complex Test and Fix Bugs for Control Flow cond API (#21532) · 1dcf6a72
      Authored by Huihuang Zheng
      Add tests that use dy/dx to make sure the gradient values calculated by the control flow backward pass are correct. Also fix bugs detected by those tests.
      
      Fix bugs:
      
      1. Unlike sum_op, optimizer ops don't allow uninitialized input tensors. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which causes the optimizer op to error. To fix it, we could either let optimizer ops support uninitialized input the way sum_op does, or assign 0 to the uninitialized gradient when conditional_block_grad_op doesn't run. There are roughly 10+ optimizer ops, so **to keep things simple, this PR just assigns the output gradient of conditional_block_grad_op to 0** (see the sketch after this entry). Whether optimizer ops can be made to accept uninitialized input tensors like sum_op is worth exploring later, since that would avoid the extra assignment in conditional_block_grad_op and could be faster.
      
      2. Infer parameter shapes during append_backward. All our parameters live in the global block, so when op_desc infers shapes in a sub-block it may not know the shapes of gradients of parameters whose shape information sits in the global block. Fixed by inferring the shapes of those gradients from the corresponding forward variables.
      
      This PR also does some code cleanup:
      1. Print the var name when sgd_op catches a shape error, so that it is easier to debug
      2. Fix a typo: dicta -> dict
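
      A hedged illustration of the behaviour described in bug 1: when a branch of cond never runs, the gradients of that branch's parameters are filled with 0 instead of being left uninitialized, so the optimizer op no longer errors. The Fluid 1.6-era static-graph names below are assumptions, not code from this PR:

          import numpy as np
          import paddle.fluid as fluid

          x = fluid.data(name='x', shape=[None, 1], dtype='float32')
          a = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.1)
          b = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.2)
          pred = fluid.layers.less_than(a, b)   # always True, so false_fn never runs

          def true_fn():
              return fluid.layers.fc(x, size=1)

          def false_fn():
              return fluid.layers.fc(x, size=1)   # these parameters receive zero grads

          loss = fluid.layers.mean(fluid.layers.cond(pred, true_fn, false_fn))
          fluid.optimizer.SGD(learning_rate=0.1).minimize(loss)

          exe = fluid.Executor(fluid.CPUPlace())
          exe.run(fluid.default_startup_program())
          exe.run(feed={'x': np.ones([1, 1], dtype='float32')}, fetch_list=[loss])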
  15. 29 Nov 2019 (2 commits)
  16. 31 Oct 2019 (1 commit)
    • GradMaker for dygraph (#19706) · 8c4573a3
      Authored by hong
      * refactor dygraph,test=develop
      
      * fix failed unittest,test=develop
      
      * polish code,test=develop
      
      * check windows ci error,test=develop
      try to fix windows ci error by np.allclose,test=develop
      
      * polish vlog and profiler, test=develop
      
      * try to fix preceding ops order,test=develop
      
      * test transformer in windows ci, test=develop
      
      * use python c-api to speed up tracer.trace,test=develop
      
      * test=develop, fix docker with paddle nccl problem
      
      * test=develop, add ut for debug string and gradient_accumulator
      
      * test=develop, add tests for layer/gradient_accumulator/prepared_op
      
      * test=develop, fix compile error for test_prepared_op
      
      * test=develop, add more ut for dygraph
      
      * test=develop, create API.spec for dygraph api change
      
      * optimize grad maker; test=develop
      
      * optimize grad maker
      
      * test
      
      * grad make optim; test=develop
      
      * fix unittest bugs; test=develop
      
      * add dygraph grad op maker and split_op
      
      * grad op maker refactor; test=develop
      
      * add dygraph grad maker; test=develop
      
      * fix deformable_conv_v1_op bug; test=develop
      
      * fix deformable_conv prroi pool bugs;
      
      * fix new op grad op maker bug; test=develop
      
      * fix split by ref bug; test=develop
      
      * fix dygraph auto prune bug; test=develop
      
      * fix test_trace bug; test=develop
      
      * fix fused emb seq pool bug; test=develop
      
      * remove useless code in op_desc file; test=develop
      
      * remove useless code, StrVarBaseNode; test=develop
      
      * fix review issues; test=develop
      
      * fix rank_loss grad maker; test=develop
      
      * remove flag in VarBase; test=develop
      
      * fix distributed_notify_op compile bug ; test=develop
      
      * fix reshape op double grad; test=develop
      
      * fix expand as op; test=develop
      
      * add imperative type_defs.h for demo_train; test=develop
      
      * fix inference lib cmake; test=develop
      
      * fix inference lib; test=develop
      
      * fix inference_lib; test=develop
      
      * fix inference cmake; test=develop
      
      * fix inference lib; test=develop
      
      * fix inference lib; test=develop
      
      * remove condition dygraph grad maker, modify local name; test=develop
      
      * fix split grad maker bug; test=develop
      
      * fix pyramid_op bug; test=develop
      
      * change travis time out limit; test=develop
      
      * restore travis; test=develop
      
      * change timeout limit; test=develop
  17. 29 Oct 2019 (1 commit)
    • Check and correct the output's lod_level in DynamicRNN related operators (#19144) · 6fcfd32e
      Authored by Yiqun Liu
      * Refine the InferShape of the ReadFrom and WriteTo ops, and add a comment to explain why ShareLoD is not called at runtime.
      test=develop
      
      * Add comment for ReorderLoDTensorByRank op.
      
      * Add a comment for the lod_tensor_to_tensor_array op to explain why DecreaseLoDLevel is only called at compile time.
      test=develop
      
      * ShrinkRNNMemory op should call ShareLoD at compile time.
      test=develop
      
      * Add the implementation of IncreaseLoDLevel and add the compile-time check of lod_level in InferShape of sequence_pool.
      test=develop
      
      * Refine the unittest of DynamicRNN.
      test=develop
      
      * Change PADDLE_ENFORCE to PADDLE_ENFORCE_NE.
      test=develop
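
      A hedged sketch of the pattern these checks guard: a DynamicRNN whose step outputs keep LoD information that sequence_pool later consumes, so a wrong lod_level is now caught at compile time. The Fluid 1.x names below are assumptions, not code from this PR:

          import paddle.fluid as fluid

          words = fluid.data(name='words', shape=[None, 32], dtype='float32', lod_level=1)

          drnn = fluid.layers.DynamicRNN()
          with drnn.block():
              step = drnn.step_input(words)
              prev = drnn.memory(shape=[64], value=0.0)
              hidden = fluid.layers.fc(input=[step, prev], size=64, act='tanh')
              drnn.update_memory(prev, hidden)
              drnn.output(hidden)

          rnn_out = drnn()                       # should carry lod_level == 1
          pooled = fluid.layers.sequence_pool(rnn_out, pool_type='last')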
  18. 14 Oct 2019 (1 commit)
  19. 11 Oct 2019 (1 commit)
    • modify english api (#20159) · 2893cd1a
      Authored by Wilber
      * modify english api test=develop test=document_fix
      
      - leaky_relu
      - less_than
      - log
      - logical_and
      - logical_or
      - logical_xor
      - logical_not
  20. 18 Sep 2019 (1 commit)
  21. 05 Sep 2019 (1 commit)
  22. 30 Aug 2019 (1 commit)
    • [MKL-DNN] Fix to face model on AVX512 platforms (#19282) · ecd9f330
      Authored by Jacek Czaja
      - Refactor step 1
      
      - Compilation fix
      
      - Yet another compilation fix
      
      - Even more compilation fix
      
      - Lint fixes
      
      test=develop
      
      - Removed deprecated PADDLE_ENFORCE occurrence
      
      test=develop
      
      - Candidate fix to BN forward
      
      - Lint fixes
      
      test=develop
      
      - Refactoring in data_layout_transform
      
      - compilation fix
      
      - Another compilation fix
      
      - Step further into darkness
      
      - Yet another compilation fix
      
      - Yet another compilation fix
      
      - missing header
      
      - compilation fix
      
      - Added MKLDNN -> Paddle conversion in fetch op
      
      test=develop
      
      - Compilation fix
      
      test=develop
      
      - Lint
      
      test=develop
      
      - Mul fix
      
      - Fix to MKLDNN MUL op and Elementwise MUL UT
      
      test=develop
      
      - Workaround for different weights-with-groups representation in Paddle vs MKL-DNN.
      
      test=develop
      
      - Candidate fix for 5D convolution with groups
      
      - Refactor of fix for conv3d and conv2d in fetch op
      
      test=develop
      
      - Compilation fix
      
      - Still same compilation fix
      
      - Compilation fix
      
      - Compilation fix
      
      - Reverted refactoring of fixes
      
      - Adapted test_conv2d_int8_mkldnn so it expects data in NCHW format, not NHWC
      
      test=develop
      
      - minor fix in UT
      
      test=develop
      
      - Lint fixes
      
      test=develop
  23. 29 Aug 2019 (1 commit)
  24. 02 Aug 2019 (1 commit)
    • Open gc by default (#18836) · 7ac748ad
      Authored by Zeng Jinle
      * open gc by default, test=develop
      
      * fix test_train_recognize_digits and disable gc when ngraph is enabled, test=develop
      
      * fix conditional_block op eager deletion bug, test=develop
      
      * add some comments to reviewers, test=develop
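
      As a hedged note on what "open gc by default" means in practice: eager tensor deletion in Fluid is controlled by an environment flag (the FLAGS_eager_delete_tensor_gb name is assumed here, not quoted from this PR), and after this change it no longer needs to be set manually:

          import os

          # assumed flag: 0.0 frees tensors as soon as they are no longer needed;
          # a negative value disables eager deletion (the old default)
          os.environ['FLAGS_eager_delete_tensor_gb'] = '0.0'
          import paddle.fluid as fluid   # the flag must be set before the framework loads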
  25. 19 Jul 2019 (1 commit)
    • Support memory eager deletion on recurrent OP (#17710) · 89bc3fd8
      Authored by Huihuang Zheng
      Test PaddingRNN on V100 GPU device.
      
      Test configuration: large model, padding mode (the mode that uses RecurrentOp), one GPU.

      GPU memory (MiB): 6414 (this PR) vs 6837 (without this PR)
      Speed (steps/s): 10.28 (this PR) vs 9.89 (without this PR)
  26. 08 Jul 2019 (1 commit)
  27. 10 May 2019 (1 commit)
    • Double backward of conv2d. (#17211) · e32c9888
      Authored by qingqing01
      * Add conv2d_grad_grad_op
      * Extract the cuDNN conv algorithm-searching code into conv_cudnn_helper.h.
          - Now used in conv2d_grad_grad.
          - Will simplify the searching code in conv2d and conv2d_grad in the next PR.
      * Enhance and fix bugs in the unit tests of gradient_checker.
      * Support fetching empty variables, returning None in Python.
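
      A hedged sketch of what conv2d_grad_grad enables: taking the gradient of conv2d's gradient via two calls to fluid.backward.gradients in a static-graph program. The Fluid 1.x names below are assumptions, not code from this PR:

          import paddle.fluid as fluid

          x = fluid.data(name='x', shape=[None, 3, 8, 8], dtype='float32')
          x.stop_gradient = False
          y = fluid.layers.conv2d(x, num_filters=4, filter_size=3)
          dx = fluid.backward.gradients(y, x)    # first-order grad, uses conv2d_grad
          ddx = fluid.backward.gradients(dx, x)  # second-order grad, uses conv2d_grad_grad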
  28. 17 Apr 2019 (1 commit)
  29. 16 Apr 2019 (3 commits)
  30. 13 Apr 2019 (1 commit)
  31. 04 Apr 2019 (1 commit)
  32. 28 Mar 2019 (1 commit)
  33. 27 Mar 2019 (1 commit)