1. 11 Nov 2021, 1 commit
  2. 10 Nov 2021, 5 commits
  3. 09 Nov 2021, 5 commits
  4. 08 Nov 2021, 7 commits
    • W
      Use cuda virtual memory management and merge blocks (#36189) · a1ec1d5a
      Committed by wanghuancoder
      * Use cuda virtual memory management and merge blocks, test=develop
      
      * refine, test=develop (×7)
      
      * window dll, test=develop
      
      * fix cuda error of CUDA_ERROR_NOT_INITIALIZED, test=develop
      
      * use autogrowthv2 for system allocator, test=develop
      
      * remove ~CUDAVirtualMemAllocator(), test=develop
      
      * refine, test=develop
      
      * fix cuda error of CUDA_ERROR_NOT_INITIALIZED, test=develop (×2)
      
      * fix bug, test=develop
      
      * revert system allocator, test=develop
      
      * revert multiprocessing, test=develop
      
      * fix AutoGrowthBestFitAllocatorV2 mutex, test=develop
      
      * catch cudaErrorInitializationError when create allocator, test=develop
      
      * fix cuMemSetAccess use, test=develop
      
      * refine cuda api use, test=develop
      
      * refine, test=develop
      
      * for test, test=develop (×2)
      
      * switch to v2, test=develop
      
      * refine virtual allocator, test=develop
      
      * Record cuMemCreate and cuMemRelease, test=develop
      
      * refine, test=develop
      
      * avoid out of bounds, test=develop
      
      * rename allocator, test=develop
      
      * refine, test=develop
      
      * use PADDLE_ENFORCE_CUDA_SUCCESS, test=develop
      
      * for test, test=develop
      
      * refine, test=develop (×6)
      a1ec1d5a
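Beyond the CUDA virtual-memory driver calls (cuMemCreate, cuMemSetAccess, cuMemRelease), the other half of this commit's title is "merge blocks": in a best-fit auto-growth allocator, adjacent free blocks are coalesced so large requests can be served from one contiguous range. The sketch below illustrates only that coalescing idea in Python; class and method names are hypothetical and are not Paddle's actual allocator code.

```python
# Minimal sketch of best-fit allocation with merging of adjacent free blocks.
# Offsets stand in for (virtual) device addresses; names are hypothetical.

class BestFitAllocator:
    def __init__(self, capacity):
        # Free list of (offset, size) pairs, kept sorted by offset.
        self.free = [(0, capacity)]

    def alloc(self, size):
        # Best fit: pick the smallest free block that is large enough.
        candidates = [(s, off) for off, s in self.free if s >= size]
        if not candidates:
            raise MemoryError("out of memory")
        s, off = min(candidates)
        self.free.remove((off, s))
        if s > size:  # return the unused tail to the free list
            self.free.append((off + size, s - size))
            self.free.sort()
        return off

    def free_block(self, off, size):
        self.free.append((off, size))
        self.free.sort()
        # Merge blocks: coalesce neighbors that are contiguous in address space.
        merged = [self.free[0]]
        for o, s in self.free[1:]:
            last_off, last_size = merged[-1]
            if last_off + last_size == o:
                merged[-1] = (last_off, last_size + s)
            else:
                merged.append((o, s))
        self.free = merged
```

After freeing two adjacent blocks, the free list collapses back to a single block, which is what lets the allocator satisfy a later large request without fragmentation.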
    • L
      [fix-bug] Support attn_mask=None input cases for fused_attention_op. (#36951) · 472dcca4
      Committed by Li Min
      The current fused_attention_op does not support attn_mask=None inputs; this PR adds support for that case and supplements the corresponding unit-test logic.
      472dcca4
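Conceptually, supporting attn_mask=None means the op skips the additive mask step when no mask is given. A minimal NumPy sketch of that behavior (illustration only, not Paddle's fused kernel):

```python
import numpy as np

def attention_scores(q, k, attn_mask=None):
    # Scaled dot-product attention probabilities. With attn_mask=None the
    # additive-mask step is simply skipped -- the case this PR enables.
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    if attn_mask is not None:
        scores = scores + attn_mask  # mask uses large negatives to block positions
    # Numerically stable softmax over the last axis.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Passing a zero mask and passing None must produce identical outputs, which is a natural unit-test check for the new code path.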
    • W
      add pass and mkldnn base ut. (#36967) · b7e88308
      Committed by Wilber
      b7e88308
    • K
      avoid setting logging.basicConfig (#37031) · 1305b4f5
      Committed by kuizhiqing
      1305b4f5
    • 0
      94bcc2ab
    • Z
      aef8bf2a
    • X
      Add Support for OperatorBase in new executor (#36945) · 251f68e7
      Committed by xiongkun
      * add scope as membership
      
      * functions complete
      
      * fix bugs: garbage collector
      
      * deal with unknown variable holder
      
      * add
      
      * add unittest for operator_base
      
      * code format
      251f68e7
  5. 06 Nov 2021, 1 commit
  6. 05 Nov 2021, 4 commits
  7. 04 Nov 2021, 4 commits
  8. 03 Nov 2021, 4 commits
  9. 02 Nov 2021, 7 commits
    • N
      add shufflenetv2 (#36067) · 5c4c55f9
      Committed by Nyakku Shigure
      add shufflenetv2
      Co-authored-by: Ainavo <ainavo@163.com>
      Co-authored-by: pithygit <pyg20200403@163.com>
      5c4c55f9
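The defining operation of ShuffleNetV2 is the channel shuffle, which interleaves channels across groups after a grouped convolution. A NumPy sketch of that operation (illustration only, not the code added by this PR):

```python
import numpy as np

def channel_shuffle(x, groups):
    # ShuffleNetV2 channel shuffle: split C channels into `groups` groups,
    # transpose the group and per-group axes, and flatten back, so channels
    # are interleaved across groups.
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)
```

For example, with 4 channels and 2 groups, channel order [0, 1, 2, 3] becomes [0, 2, 1, 3], letting information flow between the groups of the next grouped convolution.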
    • J
      [Need review] Added conv + hard_sigmoid oneDNN fuse pass (#36869) · 53690719
      Committed by jakpiase
      * added conv + hard_sigmoid fuse pass
      
      * Removed IsOptional() statements
      
      * Reverted removing optional
      53690719
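The activation fused into the convolution here, hard_sigmoid, is a piecewise-linear approximation of the sigmoid: clip(slope * x + offset, 0, 1). The defaults below (slope ≈ 1/6, offset = 0.5) follow common framework defaults and are an assumption, not taken from this PR:

```python
def hard_sigmoid(x, slope=0.1666667, offset=0.5):
    # Piecewise-linear sigmoid approximation: linear in the middle,
    # saturating at 0 and 1. Cheap enough to fold into a conv epilogue,
    # which is what the oneDNN fuse pass exploits.
    return min(1.0, max(0.0, slope * x + offset))
```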
    • Y
      [PaddlePaddle hackathon] Add randint_like (#36169) · 41a09113
      Committed by yujun
      * add randint like
      
      * rm .cc .cu
      
      * Update unity_build_rule.cmake
      
      * try to make test pass
      
      * use python
      
      * update
      
      * update randint_like
      
      * rename test_randint_like_op -> test_randint_like
      
      * update
      
      * update randint like docs
      
      * update randint like
      
      * update
      
      * update
      
      * add bool
      
      * update randint like test
      
      * update
      
      * update
      41a09113
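The usual randint_like semantics: draw uniform random integers in [low, high) with the same shape (and, by default, dtype) as the input tensor; with a single bound the range is [0, low). A NumPy sketch under that assumption (not the operator's actual implementation):

```python
import numpy as np

def randint_like(x, low, high=None, dtype=None):
    # Same shape as x; values drawn uniformly from [low, high).
    # A single argument is treated as the exclusive upper bound: [0, low).
    if high is None:
        low, high = 0, low
    return np.random.randint(low, high, size=x.shape).astype(dtype or x.dtype)
```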
    • L
      dc08c187
    • J
      Correct conv2d int8 mkldnn UT (#36711) · a4c3e038
      Committed by joanna.wozna.intel
      * Refactor conv2d int8 unit test
      
      * Correct according to review and add int8 check
      a4c3e038
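The "int8 check" in a conv2d int8 unit test boils down to verifying a symmetric quantize/dequantize round trip: values are scaled, rounded into the int8 range, and the dequantized result must match the original within one quantization step. A NumPy sketch of that check (illustration only, not the UT's code):

```python
import numpy as np

def quantize_int8(x, scale):
    # Symmetric per-tensor int8 quantization: round to the nearest step
    # and clip into the representable int8 range.
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale
```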
    • Z
      [AutoParallel] Save&Load Module (#36558) · b9defb4f
      Committed by zhaoyingli
      * AutoParallel Save&Load
      
      * tiny modification
      
      * update func name
      
      * tiny fix
      
      * add NotImplementedError
      
      * fix doc
      
      * update func name
      
      * update func param
      
      * update interface
      
      * add unit test & modify make_data_unshard
      
      * update unittest
      
      * update unittest
      
      * fix unittest
      
      * fix cmakelist
      
      * update unittest
      b9defb4f
    • W
      Add dygraph triple grad test (#36814) · 8fb6e77b
      Committed by Weilong Wu
      * native commit for triple grad of sigmoid
      
      * Updated unittests files
      
      * init functional jacobian api
      
      * Updated triple_test func
      
      * Updated gradient_checker & test_script
      
      * finish test with dtype float32
      
      * add float64 test case
      
      * polish code
      
      * use atol=1e-5 with dtype float64
      
      * fix for ci
      
      * set timeout for test_jacobian
      
      * fix dygraph grad to support high differential
      
      * polish API docstring
      
      * Updated gradient checker and some related files
      
      * fix double grad strip error for high differential (×2)
      
      * Add Sigmoid triple grad tests
      
      * fix dygraph double grad dtype error when calling for high differential scenario
      
      * Updated triple grad tests func
      
      * Use np.random to initialize ddx
      
      * Updated triple_grad_check func
      
      * add todo for gradient checker and refine some comments
      
      * remove additional code
      
      * add test for warning in backward.py
      
      * format python code
      
      * support multi input in triple gradient checker
      
      * Add matmul triple grad kernel
      
      * Updated comments of TODO
      
      * Supported some special tests
      
      * Change code-format to follow CI std
      
      * Updated gradient_checker.py
      
      * Fix conflicts
      
      * Removed unnecessary printing log
      
      * Change code style to follow CI std
      
      * Add Dygraph Triple Grad test
      Co-authored-by: levi131 <limaolin01@baidu.com>
      Co-authored-by: Jiabin Yang <360788950@qq.com>
      8fb6e77b
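The sigmoid triple grad exercised by these tests has a closed form: with s = sigmoid(x), the first three derivatives are s(1-s), s(1-s)(1-2s), and s(1-s)(1 - 6s + 6s²). A gradient-checker-style sketch that validates the analytic third derivative against a finite-difference estimate (illustration only, not the PR's checker code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_d3(x):
    # Closed-form third derivative of sigmoid:
    # s(1-s)(1 - 6s + 6s^2), with s = sigmoid(x).
    s = sigmoid(x)
    return s * (1 - s) * (1 - 6 * s + 6 * s * s)

def d3_numeric(f, x, h=1e-2):
    # Central finite-difference stencil for the third derivative:
    # f'''(x) ~= [f(x+2h) - 2f(x+h) + 2f(x-h) - f(x-2h)] / (2h^3)
    return (f(x + 2 * h) - 2 * f(x + h) + 2 * f(x - h) - f(x - 2 * h)) / (2 * h ** 3)
```

At x = 0, s = 1/2 and the formula gives 1/4 · (1 - 3 + 3/2) = -1/8, a handy spot check; a gradient checker compares the analytic and numeric values at many random points within a tolerance.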
  10. 01 Nov 2021, 2 commits
    • L
      [new-exec] refine vlog of interpretercore (#36865) · 4c93c4c3
      Committed by Leo Chen
      * refine vlog of interpretercore
      
      * fix ut
      4c93c4c3
    • W
      Support matmul_v2 triple grad Kernel (#36459) · 203a0e3e
      Committed by Weilong Wu
      * native commit for triple grad of sigmoid
      
      * Updated unittests files
      
      * init functional jacobian api
      
      * Updated triple_test func
      
      * Updated gradient_checker & test_script
      
      * finish test with dtype float32
      
      * add float64 test case
      
      * polish code
      
      * use atol=1e-5 with dtype float64
      
      * fix for ci
      
      * set timeout for test_jacobian
      
      * fix dygraph grad to support high differential
      
      * polish API docstring
      
      * Updated gradient checker and some related files
      
      * fix double grad strip error for high differential (×2)
      
      * Add Sigmoid triple grad tests
      
      * fix dygraph double grad dtype error when calling for high differential scenario
      
      * Updated triple grad tests func
      
      * Use np.random to initialize ddx
      
      * Updated triple_grad_check func
      
      * add todo for gradient checker and refine some comments
      
      * remove additional code
      
      * add test for warning in backward.py
      
      * format python code
      
      * support multi input in triple gradient checker
      
      * Add matmul triple grad kernel
      
      * Updated comments of TODO
      
      * Supported some special tests
      
      * Change code-format to follow CI std
      
      * Updated gradient_checker.py
      
      * Fix conflicts
      
      * Removed unnecessary printing log
      
      * Change code style to follow CI std
      Co-authored-by: levi131 <limaolin01@baidu.com>
      Co-authored-by: Jiabin Yang <360788950@qq.com>
      203a0e3e