Unverified · Commit eaed1681 authored by gouzil, committed by GitHub

[static op generation] coalesce_tensor (#53570)

* [phi][api] add autogen code coalesce_tensor

* [phi][api] fix args

* [phi][api] supplement attrs
Parent: 9244ceb6
This diff is collapsed.
......@@ -171,15 +171,6 @@
data_type : x
inplace : (x -> out), (input_found_infinite -> output_found_infinite)
- op : coalesce_tensor
args : (Tensor[] input, DataType dtype, bool copy_data = false, bool set_constant = false, bool persist_output = false, float constant = 0.0, bool use_align = true, int align_size = -1, int size_of_dtype = -1, int64_t[] concated_shapes = {}, int64_t[] concated_ranks = {})
output : Tensor[](output){input.size()}, Tensor(fused_output)
infer_meta :
func : CoalesceTensorInferMeta
kernel :
func : coalesce_tensor
data_type : dtype
- op : concat
args : (Tensor[] x, Scalar(int64_t) axis)
output : Tensor
......
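For context, `coalesce_tensor` fuses a list of tensors into one contiguous buffer, optionally padding each chunk to an alignment boundary (`use_align` / `align_size`) and optionally copying the original data (`copy_data`). A minimal standalone sketch of the offset arithmetic, not Paddle's implementation (all names here are illustrative, and it assumes `align_size` is a multiple of the element size, as the platform alignment normally is):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Round a byte count up to a multiple of align_size.
static std::size_t AlignUp(std::size_t bytes, std::size_t align_size) {
  return (bytes + align_size - 1) / align_size * align_size;
}

// Compute each chunk's element offset inside the fused buffer and return
// the fused buffer's total element count. `numels` holds each input
// tensor's element count; `size_of_dtype` is bytes per element.
static std::size_t FusedOffsets(const std::vector<std::size_t>& numels,
                                std::size_t size_of_dtype, bool use_align,
                                std::size_t align_size,
                                std::vector<std::size_t>* offsets) {
  std::size_t total = 0;
  for (std::size_t n : numels) {
    offsets->push_back(total);
    std::size_t bytes = n * size_of_dtype;
    if (use_align) bytes = AlignUp(bytes, align_size);
    total += bytes / size_of_dtype;  // back to an element count
  }
  return total;
}
```

For example, two float32 tensors of 3 and 5 elements aligned to 32 bytes occupy 8 + 8 = 16 fused elements; with `use_align` false they pack back to back into 8.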
......@@ -419,6 +419,14 @@
outputs :
out : Out
- op : coalesce_tensor
inputs :
{input : Input}
outputs :
{output : Output, fused_output : FusedOutput}
attrs :
{size_of_dtype : user_defined_size_of_dtype}
- op : complex
backward : complex_grad
inputs :
......
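The `inputs` / `outputs` / `attrs` blocks above declare how legacy fluid-era names correspond to the new phi names, so the autogenerated compat layer can translate between them. Conceptually the entry is just a rename table; a hypothetical sketch of applying it (the table data is transcribed from the entry above, but the lookup helper itself is illustrative, not Paddle code):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Legacy (fluid) name -> phi name, transcribed from the op_compat entry
// for coalesce_tensor above.
static const std::unordered_map<std::string, std::string> kCoalesceCompat = {
    {"Input", "input"},
    {"Output", "output"},
    {"FusedOutput", "fused_output"},
    {"user_defined_size_of_dtype", "size_of_dtype"},
};

// Return the phi-side name for a legacy name, falling back to the name
// unchanged when the entry declares no rename for it.
static std::string ToPhiName(const std::string& legacy) {
  auto it = kCoalesceCompat.find(legacy);
  return it == kCoalesceCompat.end() ? legacy : it->second;
}
```

Attributes that keep the same name on both sides (e.g. `use_align`) need no entry, which is why only `size_of_dtype` appears under `attrs`.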
......@@ -72,6 +72,24 @@
- add_input : Max
comment : Pass the max, min value as input, not attribute. Max is dispensable.
- op : coalesce_tensor
version :
- checkpoint : "Upgrade coalesce_tensor: add a new attribute [use_align]."
action :
- add_attr : use_align
comment : In order to optionally take memory alignment into account when
coalescing tensors. The default value is true to be compatible
with before.
default : "true"
- checkpoint : "Upgrade coalesce_tensor: add a new attribute [align_size]."
action :
- add_attr : align_size
comment : In order to optionally take memory alignment into account when
coalescing tensors. The default value is -1 and use the default
align_size
of each place to be compatible with before.
default : -1
- op : embedding
version :
- checkpoint : Upgrade flip, add new attr [axis] and delete attr [dims]
......
......@@ -416,6 +416,15 @@
func : clip_by_norm {dense -> dense}
clip_by_norm_sr {selected_rows -> selected_rows}
- op : coalesce_tensor
args : (Tensor[] input, DataType dtype, bool copy_data = false, bool set_constant = false, bool persist_output = false, float constant = 0.0, bool use_align = true, int align_size = -1, int size_of_dtype = -1, int64_t[] concated_shapes = {}, int64_t[] concated_ranks = {})
output : Tensor[](output){input.size()}, Tensor(fused_output)
infer_meta :
func : CoalesceTensorInferMeta
kernel :
func : coalesce_tensor
data_type : dtype
- op : complex
args : (Tensor real, Tensor imag)
output : Tensor
......
......@@ -273,6 +273,7 @@ PD_REGISTER_KERNEL(coalesce_tensor,
int,
float,
double) {
kernel->InputAt(0).SetBackend(phi::Backend::ALL_BACKEND);
kernel->OutputAt(1).SetDataType(phi::DataType::UNDEFINED);
}
......
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "paddle/phi/core/compat/op_utils.h"
namespace phi {
KernelSignature CoalesceTensorOpArgumentMapping(
const ArgumentMappingContext& ctx) {
return KernelSignature("coalesce_tensor",
{"Input"},
{"dtype",
"copy_data",
"set_constant",
"persist_output",
"constant",
"use_align",
"align_size",
"user_defined_size_of_dtype",
"concated_shapes",
"concated_ranks"},
{"Output", "FusedOutput"});
}
} // namespace phi
PD_REGISTER_ARG_MAPPING_FN(coalesce_tensor,
phi::CoalesceTensorOpArgumentMapping);