1. Feb 24, 2022 (9 commits)
    • Refactored GradNodeAccumulation data structure and behaviour (#39526) · 1abfc8dd
      Committed by Zhanlue Yang
      * Refactored GradNodeAccumulation data structure and behaviour
      
      * Fixed CI issues
      
      * Fix compilation issues
      
      * Fixed minor issues
      
      * Reverted changes for intermediate and OverwriteOutput
      
      * fixed minor issue
      
      * Fixed code format issues
      
      * Fixed CI-Coverage issue
      
      * Fixed CI issues
    • [Phi] Fix XPU OP segmentation Fault problem (#39827) · 7a7a7cad
      Committed by Aurelius84
      * [Phi] Fix XPU OP segmentation Fault problem
      
      * fix cast_op_xpu in kunlun1
      
      * fix cast_op_xpu in kunlun1
    • [pten] add optional type for infermeta (#39848) · 94b31f90
      Committed by chentianyu03
      * modify infershape by args_def
      
* add optional type for infermeta
      
      * add optional type for infermeta
      
      * add optional type for infermeta
      
      * support scalar type
      
      * change OptionalInputAt function to none template
      
      * support phi::DataType
    • dd2c997d
    • Fix for split op in BF16 inference (#39548) · 75f91ce4
      Committed by jakpiase
      * Fix for split bf16 inference
      
      * added test for pass
      
      * changes after review
    • Optimize where_op and abs_grad_op by the elementwise interface (#39609) · c9699556
      Committed by huangxu96
* Optimize the where_op by the elementwise_op function
      
      * Modified where_op & abs_grad_op by elementwise interface
    • [Eager] save load testcase (#39571) · 6b5749eb
      Committed by wanghuancoder
      * eager, test=develop
      
      * fix bug, test=develop
      
      * eager, test=develop
      
      * merge legacy to fluid
      
      * eager, test=develop
      
      * eager, test=develop
      
      * Refactor TensorAdd func by template and remove gradient_accumulation in eager
      
      * Remove needless target name
      
      * eager, test=develop
      
      * eager, test=develop
      
      * Use overload instead of template
      
      * Remove legacy code
      
      * Remove legacy code
      
      * selectedrows, test=develop
      
      * Remove DataType test
      
      * eager, test=develop
      
      * eager, test=develop
      
      * support gan, test=develop
      
      * Using Tensor directly instead of using EagerTensor
      
      * support gradient_accumulation
      
      * make test_imperative_lod_tensor_to_selected_rows longer
      
      * make test_imperative_lod_tensor_to_selected_rows longer
      
      * refine code
      
      * ptb, test=develop
      
      * Rename all EagerTensor to Tensor
      
      * Rename some EagerTensor to Tensor
      
      * rename EagerTensor to EagerVariable
      
      * eager, test=develop
      
      * eager, test=develop
      
      * eager, test=develop
      
      * eager, test=develop
      
      * add more test
      
      * eager, test=develop
      
      * Support copiable selected rows and merge develop
      
      * save load, eager, test=develop
      
      * save load, eager, test=develop
      
      * refine, test=develop
      
      * refine, test=develop
      
      * refine, test=develop
      
      * revert static_runner, test=develop
      
      * EagerTensor to Tensor, test=develop
      
      * refine, test=develop
      
      * refine, test=develop
      
      * clear grad, test=develop
      
      * merge, develop
      
      * merge, develop
      
      * merge, test=develop
      
      * merge, test=develop
Co-authored-by: JiabinYang <360788950@qq.com>
      Co-authored-by: Weilong Wu <veyron_wu@163.com>
    • Fix an out-of-memory bug in IndexKernel (#39867) · 2136bd42
      Committed by niuliling123
    • Optimize performance of lookup_table_v2_op (#39856) · d6038c22
      Committed by Li Min
      * Optimize block config and fp16 atomicAdd perf for lookup_table_v2_grad.
  2. Feb 23, 2022 (20 commits)
  3. Feb 22, 2022 (11 commits)