1. 15 Mar 2023, 2 commits
  2. 13 Mar 2023, 3 commits
  3. 10 Mar 2023, 1 commit
  4. 09 Mar 2023, 2 commits
  5. 08 Mar 2023, 1 commit
  6. 07 Mar 2023, 1 commit
  7. 03 Mar 2023, 1 commit
  8. 02 Mar 2023, 2 commits
    • Add prim test for elementwise ops (#50807) · b8713309
      Charles-hit committed
      * fix prim_op_test when the Python API outputs differ from the kernel signature
      
      * add elementwise op prim test
      
      * fix unit test
      
      * add bfloat16 for full in static prim api
      
      * empty-commit
      
      * close bf16 test
      
      * polish elementwise tests
      b8713309
    • Add concat grad cinn (#50972) · a4689c90
      wangzhen38 committed
      * [cinn] concat_grad
      
      * [cinn] concat_grad
      
      * [cinn] concat_grad build success
      
      * [Add PGLBOX] fix unittest
      
      * [Add PGLBOX] fix unittest
      
      * [Add PGLBOX] fix codestyle
      
      * [cinn] update by comments
      
      * [cinn] update by comment
      
      * [cinn] add axis check
      a4689c90
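      The concat backward added here amounts, at the tensor level, to splitting the upstream gradient back into per-input slices along the concat axis. A minimal Python sketch of that equivalence (illustration only; the actual change generates CINN IR, not this code):

      ```python
      import paddle

      def concat_grad_reference(out_grad, input_shapes, axis=0):
          # The gradient of concat w.r.t. each input is the corresponding slice
          # of the output gradient, i.e. a split along the concat axis.
          sizes = [shape[axis] for shape in input_shapes]
          return paddle.split(out_grad, num_or_sections=sizes, axis=axis)

      x_grad, y_grad = concat_grad_reference(
          paddle.ones([5, 3]), input_shapes=[[2, 3], [3, 3]], axis=0
      )
      print(x_grad.shape, y_grad.shape)  # [2, 3] [3, 3]
      ```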
  9. 01 Mar 2023, 3 commits
    • Integration flash attention (#49869) · 61611786
      Chitsing KUI committed
      * flash attn
      
      * seed
      
      * almost
      
      * softmax
      
      * fix workspace
      
      * add unittest; Linux only
      
      * fix setup
      
      * fix datatype include
      
      * fix setup typo
      
      * fix def scope
      
      * new error api
      
      * use paddle fork
      
      * fix attr bug; complete ut
      
      * update flash hash
      
      * fix rng reset
      
      * fix offset
      
      * fix comments
      61611786
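      For reference, the fused kernel integrated above implements scaled dot-product attention. Below is a plain, unfused sketch of the math it computes, written with ordinary Paddle ops for illustration; the flash-attention path itself is a fused CUDA kernel wired in by this PR:

      ```python
      import paddle
      import paddle.nn.functional as F

      def attention_reference(q, k, v):
          # q, k, v: [batch, num_heads, seq_len, head_dim]
          head_dim = q.shape[-1]
          scores = paddle.matmul(q, k, transpose_y=True) / (head_dim ** 0.5)
          probs = F.softmax(scores, axis=-1)        # attention weights
          return paddle.matmul(probs, v)            # weighted sum of values

      q = paddle.randn([2, 8, 128, 64])
      print(attention_reference(q, q, q).shape)     # [2, 8, 128, 64]
      ```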
    • add topk prim backward (#50679) · 296b3ff0
      zqw_1997 committed
      * tmp gather vjp
      
      * support gather
      
      * remove useless code
      
      * fix compiling error
      
      * fix ut
      
      * add eager test
      
      * add eager test
      
      * add seed
      
      * small change
      
      * fix cpu error
      
      * fix transpose op compat
      
      * remove tensor index case
      
      * fix prim_cinn
      
      * small commit
      
      * add cumsum prim backward
      
      * small commit
      
      * skip axis=None test case
      
      * fix op generate error
      
      * fix static test error
      
      * remove unused code
      
      * fix static test error
      
      * small commit
      
      * skip cpu float16 test case
      
      * skip eager cpu cumsum float16 test case
      
      * add eager and static UT
      
      * fix ut
      
      * add composite backward rule
      
      * fix error
      
      * fix type error and format error
      
      * add try cpu+float16 test
      
      * fix test bugs
      
      * remove test for cpu+float16 and make y[0] be the grad arg
      
      * add cinn test
      
      * fix UT
      
      * fix the wrong dim of v in test cases
      
      * change y[0] to y[1] for grad in UT
      
      * reshape flatten out
      
      * Disable cinn single test
      
      * use scatter_nd_add
      
      * modify the reshape part of topk_grad
      
      * delete useless build file
      
      * to make the syntax right
      
      * modify bug
      
      * try use of put_along_axis
      
      * remove cinn test
      
      * reformat todo
      
      * add silu composite rule
      
      * fix code style.
      
      * add cinn test
      
      * fix composite grad maker code gen
      
      * add prim in cumsum op test
      
      * remove old test
      
      * fix typo
      
      * pass the static test
      
      * fix typo
      
      * modify optest and delete old test files
      
      * remove normal test_top_k_op test
      
      * fix typo
      
      * pass axis=None test case
      
      * buffer comment
      
      * for debug
      
      * add silu fp16 unit test.
      
      * add static guard
      
      * remove forward prim test
      
      * remove same name axis
      
      * modify the test_top_v2_op.py to pass all local tests
      
      * delete the useless testcase
      
      * fix mistake
      
      * add more testcases to test dtype16 and dtype32
      
      ---------
      Co-authored-by: JiabinYang <360788950@qq.com>
      Co-authored-by: GGBond8488 <857631483@qq.com>
      Co-authored-by: zxcd <228587199@qq.com>
      Co-authored-by: Charles-hit <wanghao107@baidu.com>
      296b3ff0
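      The composite backward rule developed above (first via scatter_nd_add, finally via put_along_axis per the bullets) scatters the output gradient back to the positions that top-k selected and leaves zeros elsewhere. A minimal sketch of that rule, assuming the put_along_axis formulation:

      ```python
      import paddle

      def topk_grad_reference(out_grad, indices, x_shape, axis):
          # Place the upstream gradient at the indices chosen by top-k;
          # every other element of the input receives zero gradient.
          zeros = paddle.zeros(x_shape, dtype=out_grad.dtype)
          return paddle.put_along_axis(zeros, indices, out_grad, axis)

      x = paddle.to_tensor([[1.0, 5.0, 3.0], [4.0, 2.0, 6.0]])
      values, indices = paddle.topk(x, k=2, axis=1)
      x_grad = topk_grad_reference(paddle.ones_like(values), indices, x.shape, axis=1)
      print(x_grad)  # ones at each row's top-2 positions, zeros elsewhere
      ```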
    • Add full_like composite rule (#50794) · 7468bab4
      Yichen Zhang committed
      * implement composite full_like and simple unit test
      
      * implement op tests for composite full_like op
      
      * some modification as reviewers suggested
      add cinn op test to CMakeLists.txt
      fix code style
      
      * fix code style
      
      * modify input args of prim fill_any_like op
      
      * resolve conflicts
      
      * resolve conflicts
      
      * modify python api and unit tests as suggested
      
      * resolve conflicts
      
      * resolve conflicts
      
      * use framework.dtype to convert dtype in Op test
      7468bab4
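      As a rough illustration of what the full_like composite rule decomposes to, assuming it lowers to a plain full over the input's shape and dtype (the real rule lives in the generated composite-op code, not here):

      ```python
      import paddle

      def full_like_composite(x, fill_value, dtype=None):
          # full_like(x, v) expressed with the more primitive full():
          # reuse x's shape, and its dtype unless one is given explicitly.
          dtype = dtype or x.dtype
          return paddle.full(shape=x.shape, fill_value=fill_value, dtype=dtype)

      x = paddle.randn([2, 3], dtype="float32")
      assert paddle.allclose(full_like_composite(x, 0.5), paddle.full_like(x, 0.5))
      ```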
  10. 27 Feb 2023, 1 commit
  11. 24 Feb 2023, 1 commit
    • [prim] Slice grad (#50771) · f6dea800
      xiaoguoguo626807 committed
      * support prim test in OpTest
      
      * fix cmake
      
      * fix op test
      
      * fix test_input_spec
      
      * disable cinn in reduce_sum unit test
      
      * add bfloat16 dtype for sum
      
      * add approve rules
      
      * polish code
      
      * add clear jit program function
      
      * convert grad out from tensor to numpy
      
      * remove unnecessary code
      
      * add only_prim flag
      
      * fix flag
      
      * fix op test
      
      * add attr
      
      * fix optest comp inplace error
      
      * fix op test
      
      * fix op test with guard
      
      * add initialization of check_comp flag
      
      * fix comp inplace error in op test
      
      * rename check_comp with check_prim and add bfloat16 dtype convert
      
      * rename comp_op_type to prim_op_type
      
      * rename comp to prim
      
      * remove useless code
      
      * skip ci check for only prim
      
      * add no_grad_vars and grad_outputs in prim test
      
      * fix var_dict
      
      * fix op test for only_prim
      
      * fix dy2static bugs
      
      * polish some code
      
      * temp
      
      * modify op test
      
      * except cinn test
      
      * modify bfp16
      
      * modify pad grad
      
      * add pad_grad dtype
      
      * start cinn part
      
      ---------
      Co-authored-by: Charles-hit <wanghao107@baidu.com>
      f6dea800
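      Conceptually, the slice backward handled here places the output gradient back into a zero tensor of the input's shape, which is why the bullets touch pad_grad: for a contiguous slice it is equivalent to a constant pad of the upstream gradient. A hedged 2-D sketch of that equivalence (illustration only, not the generated composite rule):

      ```python
      import paddle
      import paddle.nn.functional as F

      def slice_grad_reference(out_grad, x_shape, start, end):
          # Gradient of x[:, start:end] w.r.t. x: the upstream gradient padded
          # with zeros so it lands back at the sliced positions along axis 1.
          pad = [0, 0, start, x_shape[1] - end]   # [dim0 before/after, dim1 before/after]
          return F.pad(out_grad, pad, mode="constant", value=0.0)

      x = paddle.arange(10, dtype="float32").reshape([2, 5])
      out_grad = paddle.ones([2, 2])              # upstream grad of x[:, 1:3]
      print(slice_grad_reference(out_grad, x.shape, 1, 3))
      # each row: [0., 1., 1., 0., 0.]
      ```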
  12. 21 Feb 2023, 2 commits
    • remove test_fetch_unmerged (#50619) · 9af23f1d
      kangguangli committed
      9af23f1d
    • [OpTest] support prim test in OpTest (#50509) · 457defe7
      Charles-hit committed
      * support prim test in OpTest
      
      * fix cmake
      
      * fix op test
      
      * fix test_input_spec
      
      * disable cinn in reduce_sum unit test
      
      * add bfloat16 dtype for sum
      
      * polish code
      
      * add clear jit program function
      
      * convert grad out from tensor to numpy
      
      * remove unnecessary code
      
      * add only_prim flag
      
      * fix flag
      
      * fix op test
      
      * fix optest comp inplace error
      
      * fix op test
      
      * fix op test with guard
      
      * add initialization of check_comp flag
      
      * fix comp inplace error in op test
      
      * rename check_comp with check_prim and add bfloat16 dtype convert
      
      * rename comp_op_type to prim_op_type
      
      * rename comp to prim
      
      * remove useless code
      
      * skip ci check for only prim
      
      * add no_grad_vars and grad_outputs in prim test
      
      * fix var_dict
      
      * fix op test for only_prim
      
      * fix dy2static bugs
      
      * polish some code
      457defe7
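      A rough sketch of how an operator test opts into the prim checking added here, based on the prim_op_type attribute and check_prim flag named in the bullets above (treat the exact attribute set as an assumption; the authoritative usage is the OpTest code itself):

      ```python
      import numpy as np
      import paddle
      from op_test import OpTest  # Paddle's in-tree operator-test base class


      class TestElementwiseAddPrim(OpTest):
          def setUp(self):
              self.op_type = "elementwise_add"
              self.python_api = paddle.add
              self.prim_op_type = "prim"          # attribute named in this PR (assumed usage)
              x = np.random.rand(3, 4).astype("float32")
              y = np.random.rand(3, 4).astype("float32")
              self.inputs = {"X": x, "Y": y}
              self.outputs = {"Out": x + y}

          def test_check_output(self):
              self.check_output(check_prim=True)  # also exercise the composite/prim path

          def test_check_grad(self):
              self.check_grad(["X", "Y"], "Out", check_prim=True)
      ```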
  13. 20 Feb 2023, 1 commit
  14. 14 Feb 2023, 1 commit
  15. 10 Feb 2023, 1 commit
  16. 07 Feb 2023, 1 commit
  17. 28 Jan 2023, 1 commit
  18. 17 Jan 2023, 1 commit
    • support CUDA Graph for new executor (#49708) · 8e5ed04d
      pangyoki committed
      * new exe supports CUDA Graph
      
      * fix
      
      * fix
      
      * fix
      
      * fix FLAGS_use_stream_safe_cuda_allocator in unittest
      
      * insert output of coalesce_tensor op to skip_gc_var
      
      * fix
      8e5ed04d
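      For background, a hedged sketch of manual CUDA Graph capture with Paddle's CUDAGraph helper (assuming the paddle.device.cuda.graphs.CUDAGraph API and a CUDA build; the PR itself wires graph capture into the new standalone executor rather than into user code like this):

      ```python
      import paddle
      from paddle.device.cuda.graphs import CUDAGraph  # assumed import path

      paddle.set_device("gpu")
      x = paddle.randn([4, 4])

      graph = CUDAGraph()
      graph.capture_begin()           # record subsequent kernel launches into a graph
      y = paddle.matmul(x, x) + 1.0
      graph.capture_end()

      graph.replay()                  # re-launch the captured kernels as one graph
      print(y.numpy())
      graph.reset()
      ```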
  19. 13 Jan 2023, 1 commit
  20. 10 Jan 2023, 1 commit
  21. 09 Jan 2023, 1 commit
    • Prim paddle Basic (#49272) · 2f601282
      Jiabin Yang committed
      * proto type of composite grad in paddle
      
      * proto type of composite grad in paddle
      
      * refactor composite api with phi
      
      * fix compile error
      
      * support static graph code-gen for squeeze op
      
      * generate static graph code of unsqueeze
      
      * refine op name
      
      * fix compile error
      
      * add extra output in op_compat
      
      * remove debug log
      
      * fix clang compile error
      
      * support prim switch flag
      
      * support prim switch flag
      
      * fix dygraph error
      
      * merge develop
      
      * add code_gen
      
      * add necessary files without codegen
      
      * fix code_gen bug
      
      * add deps
      
      * modify ignore
      
      * add ignore
      
      * delete std cout
      
      * add composite logic for backward.py
      
      * add tanh first order grad composite
      
      * support enable_prim flag for static graph
      
      * throw exception when neither GradOpMaker nor GradCompOpMaker has been registered
      
      * reorganize the directory of prim api tests
      
      * fix windows error
      
      * add eager_utils
      
      * add eager_utils
      
      * modify code gen
      
      * add composite parse
      
      * add unittest for get_grad_op_desc
      
      * code optimize
      
      * fix static test on windows
      
      * support generate static graph code for imag and real op
      
      * fix windows compile error in test_static_prim
      
      * merge develop
      
      * disable test eager in inference
      
      * prim code gen
      
      * disable eager compile in inference
      
      * rm other file
      
      * rm gitignore file
      
      * code_style
      
      * add eager test
      
      * code_style
      
      * merge develop
      
      * remove useless files
      
      * modify static test
      
      * support bool flag from singlton
      
      * merge develop
      
      * recover git ignore
      
      * fix conflict
      
      * recover git ignore for generated op
      
      * fix test compile error
      
      * remove some tests
      
      * add python test
      
      * fix some name issue
      
      * add composite code gen
      
      * modify backward yaml
      
      * fix static composite grad maker code gen
      
      * remove addtional files
      
      * add some static funcs unit test
      
      * fix some bugs
      
      * fix composite grad maker register code gen
      
      * optimize some functions
      Co-authored-by: zyfncg <zhangyunfei07@baidu.com>
      Co-authored-by: wangruting <wangruting@baidu.com>
      Co-authored-by: cxxly <chenxx_id@163.com>
      Co-authored-by: charles-hit <wanghao107@baidu.com>
      Co-authored-by: xiaoguoguo626807 <100397923+xiaoguoguo626807@users.noreply.github.com>
      2f601282
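      One bullet above adds a composite rule for tanh's first-order gradient. The math it encodes is the standard identity d tanh(x)/dx = 1 - tanh(x)^2, expressible with primitive ops only; a small illustrative check (not the generated C++ rule itself):

      ```python
      import paddle

      def tanh_composite_grad(out, out_grad):
          # x_grad = out_grad * (1 - tanh(x)^2), with tanh(x) reused as `out`.
          return out_grad * (paddle.full_like(out, 1.0) - out * out)

      x = paddle.to_tensor([0.5, -1.0], stop_gradient=False)
      y = paddle.tanh(x)
      y.backward(paddle.ones_like(y))
      print(x.grad)                                         # autograd gradient
      print(tanh_composite_grad(y, paddle.ones_like(y)))    # same values via the rule
      ```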
  22. 07 Jan 2023, 1 commit
    • Enable standalone executor for fleet training (#49293) · 67fc8e93
      Ruibiao Chen committed
      * Enable standalone executor for fleet training
      
      * Update code
      
      * Replace use_standalone_executor utils in auto parallel
      
      * Update code
      
      * Disable standalone executor for test_pass_sharding
      
      * Update code
      
      * Set sequential run for auto parallel
      
      * Fix dist_attr bug
      
      * Set sequential run for auto parallel
      67fc8e93
  23. 03 Jan 2023, 1 commit
    • Move out sequential and replace save_dygraph and load_dygraph (#48709) · 7ff66973
      GGBond8488 committed
      * remove fluid.save_dygraph and fluid.load_dygraph use paddle.save and paddle.load instead
      
      * move Sequential to paddle.nn
      
      * modify convert_call_func.py Sequential reference
      
      * remove related unitests
      
      * remove fluid.dygraph.Sequential
      
      * test removing convert_call_func
      
      * fix conflicts
      
      * fix typo
      
      * fix unittests
      
      * fix sample_code
      
      * fix unittest
      
      * fix __init__
      7ff66973
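      The migration above replaces the removed fluid entry points with the unified paddle.save / paddle.load APIs and moves Sequential to paddle.nn. A short sketch of the replacement calls:

      ```python
      import paddle

      # paddle.nn.Sequential replaces the old fluid dygraph Sequential container.
      model = paddle.nn.Sequential(
          paddle.nn.Linear(4, 8),
          paddle.nn.ReLU(),
          paddle.nn.Linear(8, 2),
      )

      # paddle.save / paddle.load replace fluid.save_dygraph / fluid.load_dygraph.
      paddle.save(model.state_dict(), "model.pdparams")
      state_dict = paddle.load("model.pdparams")
      model.set_state_dict(state_dict)
      ```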
  24. 23 Dec 2022, 1 commit
  25. 22 Dec 2022, 1 commit
  26. 21 Dec 2022, 2 commits
    • Merge gpugraph to develop (#48507) · 1acddc34
      lxsbupt committed
      * merge gpugraph to develop, fix code style
      
      * update for untrainable params for stage3. (#48577)
      
      * merge gpugraph to develop, trigger ci
      
      * [CodeStyle][isort][Dy2St] sort imports in test_error (#48746)
      
      * [CodeStyle][isort][Dy2St] sort imports in test_error
      
      * update lineno
      
      * Clear extra input (Bias, ResidualData) in OpMaker of conv2d (#47579)
      
      * delete Bias and ResidualData in OpMaker of conv2d
      
      * delete extra input of conv3d
      
      * refactor pass of conv_bias_fusion
      
      * fix mkldnn dependency
      
      * fix mkldnn compile
      
      * fix test_conv_bias_mkldnn_fuse_pass
      
      * polish some code
      
      * remove useless log
      
      * fix analyzer_vit_ocr_tester
      
      * fix conv_activation_mkldnn_fuse_pass
      
      * fix test_analyzer_ocr
      
      * add fused_conv_sig
      
      * fix performance regression
      
      * fix performance regression
      
      * make bilinear interpolate stable. (#48644)
      
      * make bilinear interpolate stable.
      
      * fix code
      
      * clear tmp var in ptq (#48660)
      
      * merge gpugraph to develop, fix py-api comment
      
      * merge gpugraph to develop, fix mac-python3
      
      * merge gpugraph to develop, fix mac-python3
      
      * [Dy2St] replace deprecated `load_module` with `exec_module` (#48679)
      
      * merge gpugraph to develop, fix mac-python3
      
      * modify d2d copy to xpu::copy in xpu kernel, test=kunlun (#48710)
      
      * rm _test_eager_guard (#48767)
      
      * delete sampling_id api (#48543)
      
      * [NPU] add FLAGS_npu_storage_format env to enable npu storage format, test=develop (#48774)
      
      * optimize nchw<->nhwc kernel in fp16 model (#48692)
      
      * fix: oss just support sm>=75 (#48731)
      
      * update kl1 op list and optimize matmul unittest for kunlun
      
      *test=kunlun
      
      * Fix accuracy fp16 kernel return fp32 tensor error (#48803)
      
      * [phi::DenseTensor] Replace Tensor with phi::DenseTensor (#48682)
      
      * [Zero-Dim] Support 0D for paddle.diagflat (#48735)
      
      * [Zero-Dim] Support 0D for paddle.diagflat
      
      * 【fluid api clear】Move batch norm1 (#47965)
      
      * modify slice infershape
      
      * code style
      
      * modify slice_unittest
      
      * temp fix
      
      * batch_norm api move
      
      * code_style
      
      * codestyle
      
      * ci_static
      
      * add __init__
      
      * reset other change
      
      * revert .cc
      
      * add import batchnorm
      
      * conflict and revert
      
      * fix bug
      
      * fix third conflict one day
      
      * fix conflict
      
      * fix conflict bug
      
      * fix conflict bug
      
      * modify api
      
      * code_style
      
      * modify doc
      
      * add lost doc stable
      
      * fix conflict bug
      
      * ci lack of gpu
      
      * [remove fluid] PRelu BilinearTensorProduct Conv2DTranspose SequenceConv RowConv (#48654)
      
      * [remove fluid] PRelu BilinearTensorProduct
      
      * [remove fluid] PRelu BilinearTensorProduct Conv2DTranspose SequenceConv RowConv
      
      * [remove fluid] PRelu BilinearTensorProduct Conv2DTranspose SequenceConv RowConv
      
      * [remove fluid] PRelu BilinearTensorProduct Conv2DTranspose SequenceConv RowConv
      
      * [remove fluid] PRelu BilinearTensorProduct Conv2DTranspose SequenceConv RowConv
      
      * [remove fluid] PRelu BilinearTensorProduct Conv2DTranspose SequenceConv RowConv
      
      * [remove fluid] PRelu BilinearTensorProduct Conv2DTranspose SequenceConv RowConv
      
      * [remove fluid] PRelu BilinearTensorProduct Conv2DTranspose SequenceConv RowConv
      
      * merge gpugraph to develop, rollback graph_send_recv
      
      * fix ci (#48730)
      
      * Remove reduntant numpy output in Example code (1/3), test=document_fix (#48678)
      
      * Revised English API documentation (#48219)
      
      * Revised the example code for paddle.nn.dynamic_decode and paddle.nn.functional.diag_embed
      
      * mma qk tensor_core (#48087)
      
      * use mma for QK dot computing in fused_multi_transformer.
      * Update fused_multi_transformer_op.cu.h
      
      * remove lrn which is not used in paddle 2.0 (#47945)
      
      * replace scatter_nd and scatter_nd_add with paddle.scatter_nd and (#47960)
      
      paddle.scatter_nd_add
      
      * [PHI] Migrate mul_grad kernel (#48061)
      
      * cleanup unused code
      
      * unify is_int8 is_bfloat16
      
      * Simplify matmul_v2 FWD kernel
      
      * remove RunKernel methods
      
      * remove import namespace
      
      * remove headers
      
      * clean fluid/phi cross imports
      
      * remove fluid axpy_handler
      
      * delete fluid methods
      
      * activations
      
      * OneDNNMemDesc
      
      * MKLDNNFormatForSize
      
      * MatchShapeToLayout
      
      * MKLDNNMemoryFormat
      
      * MKLDNNFormat
      
      * ReorderMKLDNNHandler
      
      * to_void_cast
      
      * review suggestions
      
      * interpolate
      
      * remove fluid depedency
      
      * init
      
      * ExecuteMatMulV2
      
      * rm fluid kernel
      
      * matmul_grad
      
      * remove mutable_data
      
      * mul_grad
      
      * delete unnecessary shape and slice op (#48112)
      
      * Revised the English documentation.
      
      * Revised the English documentation for the segment operators and others.
      
      * Reworked the English doc formatting for paddle.einsum, paddle.unique_consecutive,
      and paddle.disable_signal_handler.
      
      * Reworked the English doc formatting. ;test=docs_preview
      
      * Update extension.py
      
      * Reworked the English doc formatting. ;test=docs_preview
      
      * Reworked the English doc formatting.
      To be reviewed:
      - paddle.linalg.svd
      - paddle.nn.functional.diag_embed
      - paddle.set_grad_enabled
      - paddle.disable_signal_handler
      - paddle.cumprod
      - paddle.device.cuda.stream_guard
      
      To be revised:
      - paddle.nn.dynamic_decode
      - paddle.einsum
      - paddle.unique_consecutive
      - paddle.linalg.svd
      - paddle.incubate.segment_min
      - paddle.incubate.segment_max
      - paddle.incubate.segment_sum
      - paddle.incubate.segment_mean
      
      ;test=docs_preview
      
      * Reworked the English doc formatting.
      To be reviewed:
      - paddle.linalg.svd
      - paddle.nn.functional.diag_embed
      - paddle.set_grad_enabled
      - paddle.disable_signal_handler
      - paddle.cumprod
      - paddle.device.cuda.stream_guard
      - paddle.nn.dynamic_decode
      - paddle.unique_consecutive
      - paddle.linalg.svd
      
      To be revised:
      - paddle.einsum
      - paddle.incubate.segment_min
      - paddle.incubate.segment_max
      - paddle.incubate.segment_sum
      - paddle.incubate.segment_mean
      
      ;test=docs_preview
      
      * Reworked the English doc formatting.
      To be reviewed:
      - paddle.linalg.svd
      - paddle.nn.functional.diag_embed
      - paddle.set_grad_enabled
      - paddle.disable_signal_handler
      - paddle.cumprod
      - paddle.device.cuda.stream_guard
      - paddle.nn.dynamic_decode
      - paddle.unique_consecutive
      - paddle.linalg.svd
      
      To be revised:
      - paddle.einsum
      - paddle.incubate.segment_min
      - paddle.incubate.segment_max
      - paddle.incubate.segment_sum
      - paddle.incubate.segment_mean
      
      ;test=docs_preview
      
      * update
      
      * test=docs_preview
      
      * update formula; test=docs_preview
      
      * update formula; test=docs_preview
      
      * remove this operator; test=docs_preview
      
      * add hyper link; test=docs_preview
      
      * add default value; test=docs_preview
      
      * update format; test=docs_preview
      
      * empty commit; test=docs_preview
      
      * fix codestyle issues; test=docs_preview
      
      * empty commit; test=docs_preview
      Co-authored-by: lzy <569782149@qq.com>
      Co-authored-by: Vvsmile <450864116@qq.com>
      Co-authored-by: Sławomir Siwek <slawomir.siwek@intel.com>
      Co-authored-by: RichardWooSJTU <37864677+RichardWooSJTU@users.noreply.github.com>
      Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
      Co-authored-by: Nyakku Shigure <sigure.qaq@gmail.com>
      
      * [PHI] Migrate squeeze and squeeze_grad kernels (#48634)
      
      * squeeze kernel
      
      * squeze fwd
      
      * whitespace
      
      * Fixed the API docs under the paddle.nn.functional and paddle.nn packages (#48581)
      
      * assign cve number to pdsa, test=document_fix (#48846)
      
      * [fluid remove]: remove paddle.fluid.layers.yolo_box and paddle.fluid.layers.yolov3_loss (#48722)
      
      * remove paddle.fluid.layers.nn.temporal_shift
      
      * code check
      
      * rm unittest
      
      * remove fluid.yolo_box
      
      * remove fluid.yolov3_loss
      
      * change the comments of yolov3_loss to yolo_loss
      
      * merge gpugraph to develop, fix windows compile
      
      * merge gpugraph to develop, fix windows compile
      
      * merge gpugraph to develop, fix windows compile
      
      * Try add eval() to speedup the eigen performance. (#48855)
      
      * [Fluid Clean]move inplace_apis_indygraph_only from paddle.flud.dygraph.inplace_utils to paddle.utils (#48744)
      
      * move inplace_apis_indygraph_only from paddle.flud.dygraph.inplace_utils to paddle.utils
      
      * modify conflict
      
      * modify conflict
      
      * modify conflict
      
      * modify conflict
      
      * modify conflict
      
      * modify conflict
      
      * modify conflict
      
      * modify static-check ci error
      
      * fix conflict
      
      * modify failed tests
      
      * fix conflict
      
      * fix conflict
      
      * fix pool2d examples
      
      * modify conflict
      
      * fix failed tests
      
      * fix conflict
      
      * fix failed tests
      
      * modify problem of deleting pool2d
      
      * merge gpugraph to develop, fix windows compile
      
      * clean fluid task: transfer gaussian random api (#48529)
      
      * Delete duplicate quant nodes in QAT (#48751)
      
      * rm autograd func dynamic eager tests (#48788)
      
      * Setuptools optimization (#48770)
      
      * optimize setup.py
      
      * modify setup.py
      
      * modify setup.py
      
      * modify setup.py
      
      * modify setup.py after zhangbo reviewed
      
      * [CodeStyle][F811] fix some test cases shadowed by the same name (#48745)
      
      * [CodeStyle][F811] fix some unittests
      
      * fix setup.py
      
      * remove ignore from flake8 config
      
      * remove repeat TestAbsDoubleGradCheck
      
      * fix rrelu test
      
      * fix fft ut
      
      * add noqa in fluid.lstm ut
      
      * add rtol and atol in test_matmul_v2_op
      
      * update rtol
      
      * empty commit
      
      * empty commit
      
      * revert changes in matmul ut and add noqa
      
      * rename test case name
      
      * set free_when_no_cache_hit default value to true (#48815)
      
      * [Clean Fluid] Rm and mv some fluid dygraph apis (#48576)
      
      Remove fluid dygraph apis
      GroupNorm
      TreeConv
      Move fluid dygraph apis
      Flatten
      SpectralNorm
      
      * [Inference] inference add cinn interface (#48741)
      
      * Clean and migrate fluid APIs of paddle.fluid.layers.control_flow (#48233)
      
      * Merge branch 'reduce_sum' of https://github.com/GhostScreaming/Paddle into mine_fluid_clean_common.
      
      * Fix some bugs.
      
      * Clean APIs in python/paddle/fluid/layers/control_flow.py
      
      * Polish code style.
      
      * Change API.
      
      * Fix some bugs.
      
      * Fix some bugs.
      
      * remove gpu_info.h from phi dependencies (#48811)
      
      * [Paddle Inference] Add add onehot trt converter (#48655)
      
      * add onehot trt converter
      
      * add unittest
      
      * fix bug
      
      * opt code
      
      * fix bug
      
      * fix depth_tensor
      
      * fix unittest
      
      * fix bug
      
      * fix unittest
      
      * fix bug
      
      * fix bug
      
      * fix bug
      
      * fix bug
      
      * [PHI decoupling] remove  bbox_util.h from phi dependencies (#48761)
      
      * remove bbox_util.h from phi
      
      * add file bbox_util.h
      
      * reframe bbox_util.h
      
      * Optimize Paddle diagonal (#47904)
      
      * [API Clean] Clean __all__ to avoid exposing useless APIs (#48713)
      
      * [API Clean] Clean __all__ to avoid exposing useless APIs
      
      * fix import
      
      * fix typo
      
      * remove tracedLayer unittest
      
      * Clean fluid APIs in distributed and fleet files (#48851)
      
      * Fix bug of reduce_sum op. When input.numel() > INT32_MAX, its result
      is wrong.
      
      * Remove climits.
      
      * Clean fluid API in paddle/distributed and paddle/fleetx folders.
      Include following files:
      python/paddle/distributed/__init__.py
      python/paddle/distributed/collective.py
      python/paddle/distributed/fleet/utils/fs.py
      python/paddle/distributed/fleet/utils/hybrid_parallel_inference.py
      python/paddle/distributed/fleet/utils/hybrid_parallel_util.py
      python/paddle/distributed/fleet/utils/internal_storage.py
      python/paddle/distributed/launch/context/device.py
      python/paddle/distributed/parallel.py
      python/paddle/distributed/parallel_with_gloo.py
      python/paddle/distributed/spawn.py
      python/paddle/framework/__init__.py
      To be mentioned, 'paddle.fluid.dygraph.parallel.ParallelEnv'
       and 'fluid.framework.core' keeps unchanged in those files.
      ParallelEnv is used by paddle.fluid.dygraph.parallel.DataParallel.
      However, APIs in paddle.fluid.dygraph.parallel can't be
      migrated to paddle.distributed, as there exists cyclic import
      dependencies in modules like paddle.static, paddle.tensor. And
      'fluid.framework.core' will be changed to import framework.core
      after fluid.core is transmitted.
      
      * Change TODO authors.
      
      * rm kunlun xpu2_op_list (#48826)
      
      *test=kunlun
      
      * remove detection_output, iou_similarity and bipartite_match (#48773)
      
      * Set WaiterType of kGpuSync to kCPU (#48758)
      
      * [Migrate Fluid] Migrate Decoder, BeamSearchDecoder (#48754)
      
      * [Inference] Enable infer shape cache. (#48312)
      
      * [Fluid Clean] remove unfold, deformable_roi_pooling, shard_index, hard_swish, mish, uniform_random, unbind (#48451)
      
      * fix-gpups setup.py (#48888)
      
      * fix-gpups
      
      * test=document_fix
      
      * [PHI decoupling] move cuda_graph from fluid to phi (#48686)
      
      * move cuda_graph from fluid to phi
      
      * move device_memory_aligment from fluid to phi
      
      * Revert "move device_memory_aligment from fluid to phi"
      
      This reverts commit b92fcd39a0a50fdac13278f49be0237a85f3a13f.
      
      * update xpu cmake
      
      * fix english docs typo errors (#48599)
      
      * fix english docs typo errors
      
      the errors in docs as same as chinese pr 5468
      
      * update docs; test=docs_preview
      Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
      
      * [XPU] add load op into oplist. (#48860)
      
      * [XPU] add load op into oplist.
      
      * remove test_sampling_id_op_xpu.py
      
      * 【fluid clean】remove fluid.dygraph.rnn.lstmcell and fluid.dygraph.rnn.grucell (#48719)
      
      * refine bsd doc (#48882)
      
      * [Paddle Inference] General optimization for no_varlen embedding layernorm (#48580)
      
      * general optimization no_varlen embedding layernorm
      
      * fix tmp directories (#48863)
      
      * rm dygraph_to_static eager guard tests part2 minst2ptb_lm (#48793)
      
      * rm dygraph_to_static eager guard tests part2 minst2ptb_lm
      
      * merge gpugraph to develop, fix the_one_ps.py for gpups
      
      * [remove fluid] under unittests of linear api (#48564)
      
      * [remove fluid] under unittests of linear api
      
      * [remove fluid] under unittests of linear api
      
      * [remove fluid] under unittests of linear api
      
      * [remove fluid] under unittests of linear api
      
      * [remove fluid] under unittests of linear api
      
      * [remove fluid] under unittests of linear api
      
      * [remove fluid] fluid dygraph linear api
      
      * [remove fluid] fluid dygraph linear api
      
      * [remove fluid] fluid dygraph linear api
      
      * [remove fluid.layers.cross_entropy] remove unit tests (part 1) (#48726)
      
      * replace layers.cross_entropy with paddle.entropy
      
      * fix args
      
      * fix codestyle
      
      * proper fix (#48360)
      
      Reenabled ext_reorder recording for TransDataLayoutFromOneDNN
      
      * [remove fluid.layers.matmul] remove fluid.layers.matmul in example code (#48818)
      
      * replace fluid.layers.matmul in fluid/io.py
      
      * fix doc error in fluid.layers.nn.sampling_id
      
      * remove test_auto_search_dist_matmul_op.py (#48794)
      
      * delete mean api (#48764)
      
      * clean test_op_name_conflict (#48704)
      
      * opt kernel_selection error msg (#48864)
      
      * rewrite delete_weight_dequant_linear_op_encoder/decoder pass (#48650)
      
      * rewrite delete_weight_deqquant_linear_op_encoder/decoder pass
      
      * [XPU] add set_value and set_value_grad (#48845)
      
      * merge gpugraph to develop, fix gpups ut
      
      * Add QuantizedMatmul in QAT (#47997)
      
      * fix 'BlasAXPBY unimplemented' error with custom device (#48762)
      
      * fix 'BlasAXPBY unimplemented' error with custom device
      
      * fix utils CmakeLists bug
      
      * first commit (#38143)
      
      * [Auto Parallel] Add cluster partition and dm to pm (#48320)
      
      * add cluster_partition and device_meshes to process_meshes funcs
      
      * add unittest
      
      * fix paddle2cinn float16 type support bug (#48249)
      
      * remove pool2d from fluid (#48512)
      
      * remove pool2d
      
      * [fluid remove]: remove paddle.fluid.layers.detection_map, paddle.fluid.metrics.DetectionMAP and paddle.fluid.evaluator.DetectionMAP (#48674)
      
      * remove paddle.fluid.layers.nn.temporal_shift
      
      * code check
      
      * rm unittest
      
      * remove paddle.fluid.layers.detection_map and the class:DetectionMAP
      
      * [PHI decoupling] move "flags.h" from fluid to phi (#48696)
      
      * add set_lr & get_lr for stage2 optimizer. (#48857)
      
      * move share_buffer kernel to phi (#48858)
      
      * move share_buffer kernel to phi
      
      * fix ut
      
      * add source file
      
      * fix window links
      
      * [Kernel Selection] Simplify kernel selection process in phi, reduce search number to half (#47771)
      
      * simplify SelectKernelOrThrowError function in phi
      
      * opt kernel_selection process
      
      * polish code, fix backend error
      
      * Support static graph code-gen for scalar and int_array (#48792)
      
      * add suppport_tensor for code_gen to static graph
      
      * support code-gen for int_array
      
      * polish code
      
      * fix bug of data_type
      
      * clean unittest test_model_cast_to_bf16 (#48705)
      
      * rm dy2static eager tests part1 bert2loop (#48790)
      
      * rm dygraph_to_static eager guard tests part3 reinforce2yolo (#48795)
      
      * rm distribution uniform eager guard test (#48768)
      
      * rm distribution uniform eager guard test
      
      * review
      
      * replace cross_entropy in python/paddle/fluid/tests/unittests/test_[a-n]*.py except test_dist_transpiler.py (#48913)
      
      * replace cross_entropy except in python/paddle/fluid/tests/unittests/*.py && unittests/*/*.py (#48922)
      
      * [Paddle Inference]add cutlass act set in conv_elementwise_add_act_fuse_pass (#48838)
      
      * add cutlass act set in conv_elementwise_add_act_fuse_pass
      
      * move fluid.layers.create_global_var to static.create_global_var (#48777)
      
      * Modified the Kernel policy. When the compute is NHWC (#48563)
      
      * temporarily disable set_value (#48942)
      
      * xpu support inplace flatten (#48909)
      
      This is a PR to catch up with latest xpu white list strategy
      (https://github.com/PaddlePaddle/Paddle/pull/48606)
      , since the original list only includes 'fluid'-style names, but the new list
      must include 'phi'-style names as well.
      Refer to paddle/phi/core/kernel_factory.cc for more details.
      
      * fix:vit_attention ut (#48884)
      
      * mv fused_bias_dropout_residual_ln to fluid manual dir (#48824)
      
      * mv fused_bias_dropout_residual_ln to fluid manual dir
      
      * rm useless comments
      
      * bug fix (#48829)
      
      * move ops_extra_info_gen.py from phi to fluid (#48926)
      
      * fix scale type in alpha and beta (#48887)
      
      * [inference][trt] upgrade prelu op  (#48528)
      
      * add prelu
      
      * Revised several documents as requested, matching Chinese docs PR #5453 (#48886)
      
      * fix doc
      
      * test=document_fix
      Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
      
      * replace cross_entropy in python/paddle/fluid/tests/unittests/*.py except test*.py (#48919)
      
      * [remove fluid] Remove fluid APIs (#48641)
      
      * [CodeStyle] fix renamed files not being monitored by Codestyle Check (#48892)
      
      * [fluid remove]: remove paddle.fluid.layers.box_coder and paddle.fluid.layers.polygon_box_transform (#48896)
      
      * remove fluid_box_coder and polygon_box_transform
      
      * code check
      
      * [Custom XPU Support] Custom extension support xpu backend (#48733)
      
      * support custom_xpu
      
      * update cmake to test xpu
      
      * support custom_xpu, verify mechanism
      
      * fix test_custom_relu_op_xpu_setup.py, test=kunlun
      
      * fix FLAGS_init_allocated_mem
      
      * cancel TIMEOUT property
      
      * reset FLAGS_init_allocated_mem property
      
      * rm mlu ops eager guard tests (#48769)
      
      * rm npu instance_np op for eager guard tests (#48785)
      
      * remove xpu eager guard tests (#48786)
      
      * [remove fluid.layers.cross_entropy] remove unit tests (part 3)  (#48918)
      
      * replace cross_entropy in python/paddle/fluid/tests/unittests/test_[o-z]*.py plus test_dist_transpiler.py
      
      * fix test_prune
      
      * [Inference] optimize some code and fix some bug (#48780)
      
      * clean ir_pass_manager and fix map_depthwise_conv_to_conv_pass
      
      * fix unittest timeout
      
      * [PHI] Migrate reshape kernel (#48749)
      
      * reshape
      
      * typo
      
      * remove header
      
      * support py3 in setup.py (#48905)
      
      * support py3 in setup.py
      
      * support setup.py bdist_wheel in py3
      
      * support py3 in setup.py
      
      * modify run_setup
      
      * [Paddle-TRT] add cast between  int64 tensor  and Paddle-TRT (#45547)
      
      * Add cast between int64 tensor and Paddle-TRT
      * Add Unit testing.
      
      * fix sharding_stage1 amp O2 decorate bug (#48960)
      
      * [remove fluid] fluid dygraph Embedding (#48806)
      
      * [remove fluid] fluid dygraph Embedding
      
      * [remove fluid] fluid dygraph Embedding
      
      * [remove fluid] fluid dygraph Embedding
      
      * [remove fluid] fluid dygraph Embedding
      
      * [remove fluid] fluid dygraph Embedding
      
      * [remove fluid] fluid dygraph Embedding
      
      * fix for mkldnn (#48852)
      
      * H2D data transfer optimization with usage of structure type for stack kernel (#48899)
      
      * first commit.
      
      * refine performance with fast_divmod
      
      * refine performance with fast_divmod
      
      * rm accuracy and auc in extra __all__ (#48986)
      
      * Add dynamic checks for collective communication on NCCL  (#48915)
      
      * chore: unify `SingleTensor`
      
      * feat: dynamic check
      
      * support sharding in fp16 on xpu,  (#48897)
      
      * support sharding in fp16 on xpu, change reduce_max to reduce_sum for found nan or inf
      
      * update
      
      * Support cross-step stream synchronization for standalone executor (#48809)
      
      * Add UT
      
      * Support cross-step stream synchronization for standalone executor
      
      * Fix typos
      
      * Fix typos
      
      * Update UTs
      
      * Generate static graph code of some ops by yaml (#48771)
      
      * generate static graph code of some ops by yaml, test = develop
      
      * fix 'take_along_axis' yaml style
      
      * reset scatter/scatter_nd_add
      
      * delete the comments of put_along_axis
      
      * fix a bug in GetTrtWeight (#48993)
      
      * add static_ops.yaml for static op (#48991)
      
      * [PHI decoupling] move norm_utils.cu.h from fluid to phi and remove norm_utils.h in fluid (#48930)
      
      * move norm_utils.cu.h from fluid to phi
      
      * remove norm_utils.h in fluid
      
      * fix bugs and replace mutable_data with Alloc
      
      * replace mutable_data with Alloc
      
      * forbid conv op whose weight is not a persistable weight into Paddle-TRT (#48763)
      
      * fix: Move the pass location to the appropriate location (#48951)
      
      * Enhance check_nan_inf implementation for CPU. (#48591)
      
      * Enable to print device info.
      
      * Enhance the nan and inf checking for cpu.
      
      * Implement a common print function.
      
      * Unify the check of complex numbers.
      
      * Rewrite the omp method.
      
      * Count and print the number of nan and inf.
      
      * Change the print content.
      
      * Add unittest.
      
      * [PHI] OneDNN version of Copy (#48539)
      
      * OneDNN version of Copy, tranpose kernels adjusted
      
      * style fixes in tranpose_grad
      
      * redundant headers deleted
      
      * fix: there are some bugs with trt 8.0 (#48921)
      
      * fix: there are some bugs with trt 8.0
      
      * fix:windows CI trt is too old
      
      * Optimization of Eigh op with ssyevj_batched runtime api (#48560)
      
      * fix codestyle
      
      * add double complex<float> complex<double> dtype support for syevj_batched
      
      * fix use_syevj flag for precision loss when input dtype of syevj_batch is complex128 in some case
      
      * optimize eigh in different case
      
      * fix missing ; bug
      
      * fix use_syevj bug
      
      * fix use_cusolver_syevj_batched flag
      
      * replace cross_entropy in python/paddle/fluid/tests/unittests/*/*.py except unittests/*.py (#48920)
      
      * [PHI decoupling] replace dependency of inclusive_scan.h from phi (#48980)
      
      * replace dependency of inclusive_scan.h from phi
      
      * format code
      
      * fluid API magration : Assert, increment, cond (#48885)
      
      * [Clean fluid] Add inner function _elementwise_op_with_axis (#48748)
      
      * add inner function _elementwise_op_with_axis
      
      * fix transformer_model
      
      * polish API code
      
      * remove elementwise_div/mul api
      
      * delete API in __all__
      
      * delete elementwise_mul completely
      
      * polish elementwise_mul call
      
      * polish internal api
      
      * resolve conflict, fix rnn.py
      
      * use non-inplace call
      
      * delete elementwise_mul api test
      
      * delete elementwise_mul api test
      
      * clean elementwise_add/sub
      
      * restore _elementwise_op_in_dygraph in nn.py
      
      * test_convert_to_mixed_precision.py use tempfile for temporary models/params (#48819)
      
      * Tighten the Interception strategy (#48947)
      
      * test approve ,test=document_fix
      
      * test approve ,test=document_fix
      
      * test approve ,test=document_fix
      
      * [CodeStyle][isort][F401] fix some regression issues (#48936)
      
      * [CodeStyle][isort][F401] fix some regression issues
      
      * add import paddle to fix eval call
      
      * rm multinode eager guard tests (#48766)
      
      * rm multinode eager guard tests
      
      * remove unwanted tests
      
      * reset process_mpi test
      
      * rm unittests eager guard tests part5 dataloader2dygraph_mnist (#48816)
      
      * [PHI]Add new Tensor type and migrate save_combine kernel (#47856)
      
      * add new tensor
      
      * fix windows compile bugs
      
      * fix ci bugs
      
      * fix ci bugs
      
      * fix ci bugs
      
      * perfect according comment
      
      * fix ci compile bugs
      
      * add raw tensor
      
      * fix ci bugs
      
      * modify code by comment
      
      * delete String
      
      * [Fluid Clean]move BatchNorm from flud.dygraph.nn to paddle.nn.layer.norm (#48734)
      
      * move BatchNorm from flud.dygraph.nn to paddle.nn.layer.norm
      
      * modfiy conflict
      
      * modify pre-commit error
      
      * modify static-check ci error
      
      * fix failed tests
      
      * modify conflict
      
      * modify conflict
      
      * delete import modelu GRUUnit
      
      * fix falied test
      
      * fix failed testes
      
      * fix failed tests
      
      * fix failed tests
      
      * fix failed test
      
      * fix error in test_fused_resenet_basic_block_op_xpu.py
      
      * modify after xiaoguang reviewed
      
      * [Setup] Ignore @PADDLE_BINARY_DIR@ files (#49002)
      
      * [Setup] Ignore @PADDLE_BINARY_DIR@ files
      
      * test=document_fix
      
      * reshape onednn test reimplemented (#48850)
      
      * - UT reshape onednn
      
      - Fix
      
      test
      
      test2
      
      - test4
      
      - test5
      
      - test6
      
      test7
      
      - test8
      
      - Ut reinvented
      
      - cosmetic
      
      * - fix
      
      * - fix
      
      * - fix
      
      * - fix
      
      * - Fix
      
      * - fix
      
      * - fix
      
      * - fix
      
      * - Fix
      
      * lint
      
      * update fused_multi_transformer_encoder_pass support GPT new matmul API (#48953)
      
      * fit paddle.matmul in fleetx.gpt
      
      * Revert "set free_when_no_cache_hit default value to true (#48815)" (#48968)
      
      This reverts commit 592ed40b.
      
      * [Paddle Inference] fix some transformer unittests (#48929)
      
      * fix some transformer unittests
      
      * Enable Generic-Plugin support FP16 (#48807)
      
      * support conv1d quant & skip calibrate zero-size tensor (#48912)
      
      * enable custom device save model on device memory && fix conflict (#48221)
      
      * [api move] cvm (#48989)
      
      * [api move] cvm
      
      * [api move] cvm
      
      * [api move] cvm
      
      * [api move] cvm
      
      * [api move] cvm
      
      * [api move] cvm
      
      * [api move] cvm
      
      * [api move] ci test
      
      * [api move] ci test
      
      * [api move] ci test
      
      * Bugfix: xpu now only support single node multi-card, bkcl_comm_num should always set to 1 (#48961)
      
      * rm unittests eager guard tests part23 where2zeros (#48895)
      
      * rm unittests eager guard tests part17 number2pool1d (#48840)
      
      * [NPU] fix FLAGS_npu_storage_format flag in python, test=develop (#48976)
      
      * remove fleet eager guard tests (#48765)
      
      * rm unittests eager guard tests part6 eager_run2expand_v2 (#48817)
      
      * rm unittests eager guard tests part12 imperative_optimizer2resnet (#48833)
      
      * [fluid clean] remove 4 fluid.layers api and migrate 2 fluid.layer api (#48972)
      
      * fluid clean layer
      
      * docs
      
      * remove reset reference in unittest for `fluid.layers.cross_entropy` (#49012)
      
      * replace cross_entropy in test*.py except python/paddle/fluid/tests/unittests/*.py (#48978)
      
      * remove linear_chain_crf and crf_decoding from fluid (#48996)
      
      * remove linear_chain_crf and crf_decoding
      
      * Generate static graph code of some ops by yaml (#48977)
      
      * generate static graph code of some ops by yaml
      
      * fix the code-style of yaml
      
      * fix the framework_ci for triangular_solve
      
      * change the 'data_type' of scatter
      
      * add the 'out: Out' of scatter_nd_add
      
      * [tools] Update summary env (#48627)
      
      * [tools] remove deprecated api , fix macOS get version error
      
      * [tools] Rename the value that returns null
      
      * [tools] add gcc, clang, cmak, libc version
      
      * [tools] fix cudnn read error
      
      * [tools] add gpu devices list, drive based
      
      * [issue] update 3_build-installation-issue.yml
      
      * [tools] fix get gpu list AttributeError
      
      * [Dy2St] transforms.RandomVerticalFlip Support static mode (#49024)
      
      * add static RandomVerticalFlip
      
      * object => unittest.TestCase
      
      * Save fused_attention op memory when dropout_rate = 0.0 (#48902)
      
      * save fused_attention memory when dropout_rate = 0.0
      
      * add ut
      
      * fix ut bug
      
      * fix fused_layernorm_residual_dropout_bias_test.cu
      
      * Correct multiple inputs and outputs (#48872)
      
      * [CodeStyle][isort][Dy2St] sort imports for paddle.jit (#48637)
      
      * isort jit
      
      * refine comment
      
      * remove non-public apis from __all__ (#48952)
      
      * remove non-public apis from __all__
      
      * fix code style
      
      * fix rmsprop_ yaml bug (#49026)
      
      * fix rmsprop_ yaml bug
      
      * Fixed the dead link bug in the API documentation (#48969)
      
      * first pr
      
      * Revise nn.py
      
      * Revise nn.py 2.0
      
      * Revise rnn.py;test=document_fix
      
      * test=document_fix
      Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
      
      * Change mutable_data to ctx.Alloc. (#49001)
      
      * [inference][trt] add more unary op and square (#48534)
      
      * add more unary op and square
      
      * Support ninja (#48932)
      
      * move inplace_apis_indygraph_only from paddle.flud.dygraph.inplace_utils to paddle.utils
      
      * modify conflict
      
      * modify conflict
      
      * modify conflict
      
      * modify conflict
      
      * modify conflict
      
      * modify conflict
      
      * modify conflict
      
      * modify static-check ci error
      
      * fix conflict
      
      * modify failed tests
      
      * fix conflict
      
      * fix conflict
      
      * fix pool2d examples
      
      * modify conflict
      
      * fix failed tests
      
      * fix conflict
      
      * fix failed tests
      
      * modify problem of deleting pool2d
      
      * support Ninja in setup.py
      
      * support different cmake_generators
      
      * modify after reviewed
      
      * delete unused denotes
      
      * Deleted mkldnn_inplace_pass code (#47818)
      
      * Deleted mkldnn_inplace_pass code
      
      * Fixed error with cmake
      
      * Resolve conflicts
      
      * hide log (#49045)
      
      * test=doucment_fix
      
      * test=document_fix
      
      * [Sparse]Optimize performance of sparse conv on T4 (#49009)
      
      * modify cmake file for cuda11.8 compile (#49020)
      
      * modify cmake file for cuda11.8 compile
      
      * add op_library(fused_embedding_eltwise_layernorm_op DEPS bert_encoder_functor)
      
      * remove dropout from fluid (#48319)
      
      * remove dropout
      
      * nullptr bugfix for XPU pg mode (#49043)
      
      * nullptr bugfix for XPU pg mode
      
      Also a few kernels is added to xpu whitelist
      
      * increase error msg length
      
      * Divide elementwise case from BroadcastKernel and refine transpose autotune (#33051)
      
      * First Commit.
      
      * add some codes
      
      * add elementwise loader
      
      * fix code styles
      
      * merge with develop
      
      * add some changes both in elementwise and transpose
      
      * add init operation in broadcast kernel.
      
      * change codes according to pr suggestions about transpose file
      
      * fix error for op-benchmark ci
      
      * fix according to ci
      
      * add condition of skipif (#48791)
      
      * add condition of skipif
      
      * fix code format error
      
      * Update test_fused_gate_attention_op.py
      
      update
      
      * rm unittests eager guard tests part9 histogram2imperative_dataloader (#48825)
      
      * rm unittests eager guard tests part9 histogram2imperative_dataloader
      
      * rm basic
      
      * rm unittests eager guard test part14 initializer2layer_norm (#48835)
      
      * rm unittests eager guard test part14 initializer2layer_norm
      
      * minor change
      
      * [Bugfix] recompute dep filter param (#49010)
      
      * recompute dep filter param
      
      * recompute dep for reshard
      
      * [Paddle Inference] rewrite convert_to_mixed_precision (#48853)
      
      * [CodeStyle] fix c++17-extensions warning on macos (#49017)
      
      * fix c++17-extensions warning on macos
      
      * fix type
      
      fix c++17-extensions warning on macos
      
      fix c++17-extensions warning on macos
      
      * Add custom CUDNN finding paths for 64bit Windows (#49066)
      
      * remove prior_box (#49006)
      
      * remove prior_box
      
      * modify the sequence of paras of prior_box in multi_box_head api
      
      * InstanceNorm1D、InstanceNorm2D、InstanceNorm3D (#48940)
      
      * modified:   python/paddle/nn/layer/norm.py
      
      * modified:   python/paddle/nn/layer/norm.py
      
      * modified:   python/paddle/nn/layer/norm.py
      
      * modified:   python/paddle/nn/layer/norm.py
      
      * modified:   python/paddle/nn/layer/norm.py
      
      * modified:   python/paddle/nn/layer/norm.py
      
      * test=docs_preview
      
      * Fixed the doc formatting of InstanceNorm2D
      
      * test=docs_preview
      
      * modified:   python/paddle/nn/functional/loss.py
      	modified:   python/paddle/nn/functional/norm.py
      	modified:   python/paddle/nn/layer/loss.py
      	modified:   python/paddle/nn/layer/norm.py
      
      * test=docs_preview
      
      * test=docs_preview
      
      * [AutoParallel] recompute tuning (#48608)
      
      * [AutoParallel] recompute tuning
      
      * fix conflict
      
      * update comment
      
      * bug fix
      
      * update rc algo
      
      * tiny fix
      
      * fix clear process_group
      
      * remove comment
      
      * update segment print
      
      * fix import OpRole
      
      * adapt amp pass and grad_clip pass for opt_tuner
      
      * update tuning config
      
      * fix import
      
      * annotate recompute info on ops and upgrade recompute pass
      
      * add op_namescope for seed op
      
      * record reserved vars
      
      * fix recompute var's dist_attr
      
      * fix strategy unittest
      
      * adapt for fp16
      
      * update unittest
      
      * revert copy opt
      
      * update unittest
      
      * rename set_recompute_segments
      
      * fix unittest
      
      * fluid API magration : array_read, array_write (#49022)
      
      * del array_write & array_read
      
      * fix import err
      
      * fix import err
      
      * fix example codes
      
      * Keep double-buffer reader for static mode  (#49068)
      
      * Fix nullptr to TestFuseGemmEpilogueReluBWDFP* (#48997)
      
      * support fp16 index sample (#47897)
      
      * add index sample fp16 support
      
      * remove fluid APIs in distributed_strategy.py and role_maker.py
      
      * Revert "remove fluid APIs in distributed_strategy.py and role_maker.py"
      
      This reverts commit 223bbee990d3bf69e252fc3c0f19e3873550a264.
      
      * fix instantiated more than once
      
      * clean codes
      
      * rm unittest eager guard tests part20 sparse_mv2split (#48879)
      
      * rm unittests eager guard tests part11 imperative_layer2ocr (#48828)
      
      * rm unittests eager guard tests part11 imperative_layer2ocr
      
      * review
      
      * rm eager guard tests part3_1 (#49059)
      
      * fix: gloo compatible (#49084)
      
      * rm eager guard tests part3_3 (#49061)
      
      * fix bug (#49081)
      
      * [Inference] memory_optimize and mkdlnn  problem (#49054)
      
      * memory_optimize and mkdlnn problem
      
      * update
      
      * update
      
      * update
      
      * Remove/move 16 fluid APIs (#48377)
      
      * remove density_prior_box
      
      * remove anchor_generator
      
      * remove roi_perspective_transform
      
      * remove generate_proposal_labels
      
      * remove generate_mask_labels
      
      * remove generate_proposals
      
      * remove box_clip
      
      * remove retinanet_detection_output
      
      * remove multiclass_nms
      
      * remove locality_aware_nms
      
      * remove matrix_nms
      
      * remove distribute_fpn_proposals
      
      * remove box_decoder_and_assign
      
      * remove collect_fpn_proposals
      
      * remove 2 trt files
      
      * move prior_box to static/nn/common.py
      
      * move multi_box_head to static/nn/common.py
      
      * fix for CI/CE
      
      * remove retinanet_detection_output
      
      * restore compile_vs_runtime_white_list.py
      
      * restore test_retinanet_detection_output to white list
      
      * replace nn.flatten by paddle.flatten, and fix doc for retinanet_target_assign
      
      * add enable_static in demo and fix bug
      
      * remove roi_perspective_transform in test_layers
      
      * remove multi_box_head
      
      * change self.multiclass_nms to _legacy_C_ops.multiclass_nms
      
      * empty commit
      
      * empty commit
      
      * check code style
      
      * fix prior_box
      
      * fix CI
      
      * remove redundant prior_box in detection.py
      
      * fix docs
      
      * remove detection
      
      * fix prior_box en doc
      
      * delete prior_box in common
      
      * remote proir_box from __init__.py
      
      * fix embedding multihead (#49085)
      
      * SetDeviceId in StreamSafeCUDAAllocation (#49080)
      
      * [PHI decoupling] Remove fluid imports from MKLDNN code (#48981)
      
      * fix wrong handler name
      
      * mkldnn_engine -> onednn_engine
      
      * remove fluid/errors.h imports
      
      * remove fluid/enforce.h imports
      
      * remove note and unnecessary import
      
      * remove fluid/pretty_log.h imports
      
      * remove fluid/place.h imports
      
      * remove fluid/data_layout_transform.h imports
      
      * remove fluid/device_context.h imports
      
      * remove mkldnn_helper code
      
      * remove fluid/mkldnn_reuse.h imports
      
      * pretty_log import
      
      * replace cross_entropy in python/paddle/fluid/tests/unittests/*.py (#48975)
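
      A minimal sketch of the replacement call (editor's illustration, assuming the default mean reduction is acceptable in those tests):

          import paddle
          import paddle.nn.functional as F

          # Fluid-style cross entropy usages are assumed to map onto F.cross_entropy.
          logits = paddle.rand([4, 10])
          labels = paddle.randint(low=0, high=10, shape=[4])
          loss = F.cross_entropy(logits, labels)  # mean reduction by default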
      
      * Fix the documentation of paddle.amp.decorate and several other APIs (#48983)
      
      * The APIs involved are:
      paddle.amp.decorate
      paddle.static.npu_places
      paddle.signal.istft
      paddle.signal.stft
      paddle.linalg.eigvalsh
      paddle.randint_like
      
      * change signal.stft
      
      * mark the low argument of randint_like as optional
      
      * ; test=docs_preview
      
      * fix the annotation format; test=docs_preview
      
      * fix the formula format
      
      * update the models argument description of decorate, etc.
      
      * test=document_fix
      Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
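
      For context, a minimal usage sketch of paddle.amp.decorate, whose documentation is touched above (editor's illustration; the commit itself is documentation-only, and a GPU is assumed for O2):

          import paddle

          model = paddle.nn.Linear(4, 4)
          opt = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())
          # Decorate model and optimizer for float16 (O2) training.
          model, opt = paddle.amp.decorate(models=model, optimizers=opt, level="O2")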
      
      * Updated several documents per online documentation requirements 61~70 (#49014)
      
      * Update docstring:
      1. Remove "This OP" from the description of the cast function in python/paddle/tensor/manipulation.py;
      2. Reorder the parameter descriptions of the Print function in python/paddle/fluid/layers/control_flow.py and add "optional" notes;
      3. Add an "optional" note to the logical_and function in python/paddle/tensor/logic.py;
      4. Add "optional" notes to the from_generator and from_dataset methods of the DataLoader class in python/paddle/fluid/reader.py;
      5. Since param_attr of the crf_decoding function in python/paddle/fluid/layers/nn.py can in practice be treated as having a default value of None, add an "optional" note;
      6. Fix the broken TeX syntax in the description of the data_norm function in python/paddle/static/nn/common.py, and fix the same issue elsewhere in that file.
      
      * Revise part of the content according to review comments.
      
      * Remove the third-person singular form from predicate verbs.
      
      * Sync the changes to the Chinese documentation.
      
      * string-->str; test=document_fix
      Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
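
      A small sketch of two of the APIs whose docstrings are adjusted above (editor's illustration; behavior is unchanged by the commit):

          import paddle

          x = paddle.to_tensor([True, False, True])
          y = paddle.to_tensor([True, True, False])
          print(paddle.logical_and(x, y))       # [True, False, False]

          a = paddle.to_tensor([1.2, 3.7])
          print(paddle.cast(a, dtype="int32"))  # [1, 3] (truncates toward zero)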
      
      * merge gpugraph to develop, fix gloo wrapper
      
      * merge gpugraph to develop, fix ci
      
      * merge gpugraph to develop, fix gloo wrapper
      
      * merge gpugraph to develop, fix ci
      
      * merge gpugraph to develop, fix fleet.py
      
      * merge gpugraph to develop, fix merge error
      
      * merge gpugraph to develop, fix merge error
      
      * merge gpugraph to develop, add python ut
      
      * merge gpugraph to develop, fix code style
      
      * merge gpugraph to develop, add c++ ut
      
      * merge gpugraph to develop, fix code style
      
      * merge gpugraph to develop, fix data_feed.h
      
      * merge gpugraph to develop, fix code style
      
      * merge gpugraph to develop, fix code style
      
      * merge gpugraph to develop, fix code style
      
      * merge gpugraph to develop, fix code style
      Co-authored-by: wuhuachaocoding <77733235+wuhuachaocoding@users.noreply.github.com>
      Co-authored-by: Nyakku Shigure <sigure.qaq@gmail.com>
      Co-authored-by: zyfncg <zhangyunfei07@baidu.com>
      Co-authored-by: xiongkun <xiongkun03@baidu.com>
      Co-authored-by: ceci3 <ceci3@users.noreply.github.com>
      Co-authored-by: zhangyikun02 <48021248+zhangyk0314@users.noreply.github.com>
      Co-authored-by: Weilong Wu <veyron_wu@163.com>
      Co-authored-by: 201716010711 <87008376+201716010711@users.noreply.github.com>
      Co-authored-by: Qi Li <qili93@qq.com>
      Co-authored-by: zhoutianzi666 <39978853+zhoutianzi666@users.noreply.github.com>
      Co-authored-by: feng_shuai <fengshuai03@baidu.com>
      Co-authored-by: QingshuChen <chenqingshu@baidu.com>
      Co-authored-by: WangZhen <23097963+0x45f@users.noreply.github.com>
      Co-authored-by: 张春乔 <83450930+Liyulingyue@users.noreply.github.com>
      Co-authored-by: 傅剑寒 <Xs1580802568@gmail.com>
      Co-authored-by: xiaoguoguo626807 <100397923+xiaoguoguo626807@users.noreply.github.com>
      Co-authored-by: wangzhen38 <41941775+wangzhen38@users.noreply.github.com>
      Co-authored-by: Zhou Wei <1183042833@qq.com>
      Co-authored-by: Kevin吴嘉文 <417333277@qq.com>
      Co-authored-by: Zman <35071129+Atlantisming@users.noreply.github.com>
      Co-authored-by: lzy <569782149@qq.com>
      Co-authored-by: Vvsmile <450864116@qq.com>
      Co-authored-by: Sławomir Siwek <slawomir.siwek@intel.com>
      Co-authored-by: RichardWooSJTU <37864677+RichardWooSJTU@users.noreply.github.com>
      Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
      Co-authored-by: hjyp <53164956+Tomoko-hjf@users.noreply.github.com>
      Co-authored-by: Vigi Zhang <VigiZhang@users.noreply.github.com>
      Co-authored-by: zqw_1997 <118182234+zhengqiwen1997@users.noreply.github.com>
      Co-authored-by: Yiqun Liu <Xreki@users.noreply.github.com>
      Co-authored-by: risemeup1 <62429225+risemeup1@users.noreply.github.com>
      Co-authored-by: Guanghua Yu <742925032@qq.com>
      Co-authored-by: 姜永久 <34344716+yjjiang11@users.noreply.github.com>
      Co-authored-by: wanghuancoder <wanghuan29@baidu.com>
      Co-authored-by: Roc <30228238+sljlp@users.noreply.github.com>
      Co-authored-by: Wilber <jiweibo@baidu.com>
      Co-authored-by: Ghost Screaming <mofengshenjieII@163.com>
      Co-authored-by: Netpunk <69072522+Patrick-Star125@users.noreply.github.com>
      Co-authored-by: 六个骨头 <46243324+zrr1999@users.noreply.github.com>
      Co-authored-by: Aurelius84 <zhangliujie@baidu.com>
      Co-authored-by: Ruibiao Chen <chenruibiao@baidu.com>
      Co-authored-by: liu zhengxi <380185688@qq.com>
      Co-authored-by: heyanru <81976792+heyanru01@users.noreply.github.com>
      Co-authored-by: tianshuo78520a <707759223@qq.com>
      Co-authored-by: huangjiyi <43315610+huangjiyi@users.noreply.github.com>
      Co-authored-by: Infinity_lee <luhputu0815@gmail.com>
      Co-authored-by: houj04 <35131887+houj04@users.noreply.github.com>
      Co-authored-by: lugimzzz <63761690+lugimzzz@users.noreply.github.com>
      Co-authored-by: Wangzheee <634486483@qq.com>
      Co-authored-by: sneaxiy <32832641+sneaxiy@users.noreply.github.com>
      Co-authored-by: kangguangli <kangguangli@hotmail.com>
      Co-authored-by: jakpiase <jakpia21@gmail.com>
      Co-authored-by: HongyuJia <jiahongyu@baidu.com>
      Co-authored-by: haosicheng <47998305+HarperCy@users.noreply.github.com>
      Co-authored-by: Chang Xu <molixu7@gmail.com>
      Co-authored-by: Kai Song <50285351+USTCKAY@users.noreply.github.com>
      Co-authored-by: limingshu <61349199+JamesLim-sy@users.noreply.github.com>
      Co-authored-by: Jianghai <72591262+CjhHa1@users.noreply.github.com>
      Co-authored-by: jiangcheng <thisjiang@qq.com>
      Co-authored-by: ccrrong <101700995+ccrrong@users.noreply.github.com>
      Co-authored-by: PuQing <me@puqing.work>
      Co-authored-by: Leo Chen <chenqiuliang@baidu.com>
      Co-authored-by: cyber-pioneer <116002591+cyber-pioneer@users.noreply.github.com>
      Co-authored-by: niuliling123 <51102941+niuliling123@users.noreply.github.com>
      Co-authored-by: james <zhangxiaoci@baidu.com>
      Co-authored-by: wenbin <wang3323032@qq.com>
      Co-authored-by: ZZK <359521840@qq.com>
      Co-authored-by: Zhang Jun <ewalker@live.cn>
      Co-authored-by: yjphhw <43883055+yjphhw@users.noreply.github.com>
      Co-authored-by: Yuanle Liu <yuanlehome@163.com>
      Co-authored-by: Wen Sun <35923278+HermitSun@users.noreply.github.com>
      Co-authored-by: lzydev <1528794076@qq.com>
      Co-authored-by: Paulina Gacek <paulina.gacek@intel.com>
      Co-authored-by: feifei-111 <2364819892@qq.com>
      Co-authored-by: YuanRisheng <yuanrisheng@baidu.com>
      Co-authored-by: Jacek Czaja <jacek.czaja@intel.com>
      Co-authored-by: weishengying <63448337+weishengying@users.noreply.github.com>
      Co-authored-by: engineer1109 <jialiang.wang@xdxct.com>
      Co-authored-by: gouzil <66515297+gouzil@users.noreply.github.com>
      Co-authored-by: Ryan <44900829+DrRyanHuang@users.noreply.github.com>
      Co-authored-by: joanna.wozna.intel <joanna.wozna@intel.com>
      Co-authored-by: JYChen <zoooo0820@qq.com>
      Co-authored-by: jjyaoao <88936287+jjyaoao@users.noreply.github.com>
      Co-authored-by: Hulek <jakub.hulek@intel.com>
      Co-authored-by: zhangkaihuo <zhangkaihuo@baidu.com>
      Co-authored-by: YUNSHEN XIE <1084314248@qq.com>
      Co-authored-by: JZ-LIANG <jianzhongliang10@gmail.com>
      Co-authored-by: Tinson Lai <laitingsheng@hotmail.com>
      Co-authored-by: Ayuan <79981115+Ayuan2021@users.noreply.github.com>
      Co-authored-by: zhaoyingli <86812880+zhaoyinglia@users.noreply.github.com>
      Co-authored-by: Ming-Xu Huang <mingh@nvidia.com>
      Co-authored-by: wangxiaoning <71813629+wangxn12138@users.noreply.github.com>
      Co-authored-by: Haohongxiang <86215757+haohongxiang@users.noreply.github.com>
      Co-authored-by: HydrogenSulfate <490868991@qq.com>
      Co-authored-by: mjxs <52824616+kk-2000@users.noreply.github.com>
      Co-authored-by: 学渣戊 <x19403@163.com>
      1acddc34
    • W
      Fluid clean (#48841) · b8814777
      wangxiaoning committed
      * add index sample fp16 support
      
      * remove fluid APIs in distributed_strategy.py and role_maker.py
      
      * Revert "remove fluid APIs in distributed_strategy.py and role_maker.py"
      
      This reverts commit 223bbee990d3bf69e252fc3c0f19e3873550a264.
      
      * remove fluid APIs in distributed_strategy.py and role_maker.py
      
      * remove index sample op changes
      
      * remove fluid APIs under fleet.base
      
      * remove fluid APIs under fleet.layers.mpu
      
      * remove fluid APIs under fleet.meta_optimizers
      
      * fix fluid error
      
      * fix util_factory.py
      
      * reset fluid.io.load_inference_model API
      
      * remove dygraph.parallel.prepare_context
      
      * remove fluid.dygraph.StaticModelRunner API
      
      * remove split_lod_tensor merge_lod_tensor
      
      * remove unittests
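
      Of the removed helpers, dygraph.parallel.prepare_context has the most direct modern counterpart; a hedged sketch of the assumed replacement path (editor's illustration; run under a multi-process launcher such as paddle.distributed.launch):

          import paddle
          import paddle.distributed as dist

          # init_parallel_env is assumed to replace the removed prepare_context helper.
          dist.init_parallel_env()
          model = paddle.DataParallel(paddle.nn.Linear(4, 4))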
      b8814777
  27. 20 Dec, 2022 1 commit
  28. 19 Dec, 2022 1 commit
  29. 15 Dec, 2022 1 commit
    • [FluidAPI] remove fluid rnn apis (#49050) · 4672ea8e
      骑马小猫 committed
      * remove lstm api
      
      * remove gru_unit api
      
      * remove lstm in all
      
      * remove beam-search
      
      * remove beam_search slot
      
      * remove lstm test code
      
      * remove fluid.layers.nn api
      
      * update gru-unit
      
      * revert gru_unit white list
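
      After this removal, recurrent layers are expected to come from paddle.nn; a minimal sketch of the assumed migration target (editor's illustration, not part of the PR):

          import paddle

          # paddle.nn.LSTM is assumed to replace the removed fluid lstm/gru_unit helpers.
          lstm = paddle.nn.LSTM(input_size=16, hidden_size=32, num_layers=1)
          x = paddle.rand([8, 5, 16])  # [batch, seq_len, input_size]
          y, (h, c) = lstm(x)          # y: [8, 5, 32]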
      4672ea8e
  30. 13 Dec, 2022 1 commit
  31. 08 Dec, 2022 1 commit