BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)
Commit ec749398 (unverified)
Authored Oct 17, 2022 by YuanRisheng; committed via GitHub on Oct 17, 2022
[PHI]Modify DataLayout's namespace from paddle::experimental to phi (#46869)
* namespace modify
* update by comment
Parent: d43c972c
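The change itself is a pure spelling migration: the DataLayout enum and its string helpers now live in the phi namespace rather than paddle::experimental (or the old paddle::framework re-export). A minimal before/after sketch, assuming the enum is defined in paddle/phi/common/layout.h as this diff suggests:

    // Before this commit:
    //   paddle::experimental::DataLayout layout =
    //       paddle::experimental::DataLayout::NCHW;
    //   std::string s = paddle::framework::DataLayoutToString(layout);

    // After this commit, the canonical spellings are:
    #include "paddle/phi/common/layout.h"
    phi::DataLayout layout = phi::DataLayout::NCHW;
    std::string s = phi::DataLayoutToString(layout);

Since the underlying enum type is unchanged, old and new spellings name the same values; only the qualification moves.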
Showing 191 changed files with 637 additions and 773 deletions (+637 -773)
paddle/fluid/eager/auto_code_generator/generator/codegen_utils.py  +1 -1
paddle/fluid/eager/eager_layout_auto_tune.h  +15 -18
paddle/fluid/eager/eager_layout_transformer.h  +18 -23
paddle/fluid/eager/tests/data_structure_tests/eager_tensor_test.cc  +7 -9
paddle/fluid/framework/convert_utils.h  +1 -1
paddle/fluid/framework/data_layout_transform.h  +1 -1
paddle/fluid/framework/data_layout_transform_test.cc  +5 -5
paddle/fluid/framework/data_type_transform_test.cc  +7 -7
paddle/fluid/framework/data_type_transform_test.cu  +6 -6
paddle/fluid/framework/details/fetch_async_op_handle.cc  +3 -3
paddle/fluid/framework/lod_tensor.cc  +3 -3
paddle/fluid/framework/new_executor/data_transfer.cc  +4 -4
paddle/fluid/framework/new_executor/new_executor_defs.cc  +1 -1
paddle/fluid/framework/op_kernel_type.h  +2 -0
paddle/fluid/framework/op_kernel_type_test.cc  +2 -2
paddle/fluid/framework/op_registry.h  +2 -2
paddle/fluid/framework/operator.cc  +1 -1
paddle/fluid/framework/operator_test.cc  +3 -4
paddle/fluid/framework/phi_utils_test.cc  +6 -6
paddle/fluid/framework/tensor_test.cc  +3 -3
paddle/fluid/framework/tensor_util.cc  +1 -2
paddle/fluid/imperative/infer_shape_context.h  +1 -1
paddle/fluid/imperative/layer.h  +2 -4
paddle/fluid/imperative/layout_autotune.cc  +1 -1
paddle/fluid/imperative/layout_autotune.h  +1 -1
paddle/fluid/imperative/layout_transformer.h  +8 -10
paddle/fluid/imperative/var_helper.cc  +10 -15
paddle/fluid/imperative/var_helper.h  +2 -3
paddle/fluid/imperative/variable_wrapper.h  +3 -6
paddle/fluid/inference/api/details/zero_copy_tensor.cc  +11 -12
paddle/fluid/inference/lite/tensor_utils.cc  +3 -3
paddle/fluid/inference/lite/test_tensor_utils.cc  +2 -3
paddle/fluid/inference/tensorrt/convert/bilinear_interp_v2_op.cc  +5 -6
paddle/fluid/inference/tensorrt/convert/nearest_interp_op.cc  +6 -8
paddle/fluid/inference/tensorrt/convert/nearest_interp_v2_op.cc  +5 -6
paddle/fluid/inference/tensorrt/op_teller.cc  +11 -11
paddle/fluid/inference/tensorrt/plugin/group_norm_op_plugin.cu  +1 -1
paddle/fluid/operators/activation_op.cc  +6 -6
paddle/fluid/operators/affine_channel_op.cc  +13 -15
paddle/fluid/operators/affine_channel_op.cu  +18 -20
paddle/fluid/operators/affine_channel_op_xpu.cc  +8 -10
paddle/fluid/operators/affine_grid_op.cc  +2 -2
paddle/fluid/operators/batch_norm_op.cc  +14 -14
paddle/fluid/operators/batch_norm_op.cu  +1 -1
paddle/fluid/operators/batch_norm_op.h  +1 -1
paddle/fluid/operators/batch_norm_op_mlu.cc  +2 -2
paddle/fluid/operators/batch_norm_op_npu.cc  +2 -2
paddle/fluid/operators/bilateral_slice_op.cc  +1 -1
paddle/fluid/operators/bilateral_slice_op.cu  +1 -1
paddle/fluid/operators/cast_op.cc  +1 -1
paddle/fluid/operators/controlflow/fetch_op.cc  +2 -2
paddle/fluid/operators/controlflow/fetch_v2_op.cc  +2 -2
paddle/fluid/operators/conv_op.cc  +13 -13
paddle/fluid/operators/conv_op_mlu.cc  +1 -1
paddle/fluid/operators/conv_transpose_op.cc  +8 -8
paddle/fluid/operators/conv_transpose_op_mlu.cc  +3 -4
paddle/fluid/operators/conv_transpose_op_npu.cc  +2 -3
paddle/fluid/operators/data_norm_op.cc  +7 -9
paddle/fluid/operators/data_norm_op.cu  +1 -1
paddle/fluid/operators/dequantize_op.cc  +1 -1
paddle/fluid/operators/detection/prior_box_op.cc  +1 -1
paddle/fluid/operators/elementwise/elementwise_op.h  +5 -7
paddle/fluid/operators/elementwise/mkldnn/elementwise_mkldnn_op.h  +1 -1
paddle/fluid/operators/fc_op.cc  +1 -1
paddle/fluid/operators/fsp_op.cc  +1 -1
paddle/fluid/operators/fused/fused_bn_activation_op.cc  +2 -2
paddle/fluid/operators/fused/fused_bn_add_activation_op.cc  +2 -2
paddle/fluid/operators/fused/fused_gemm_epilogue_op.cc  +2 -2
paddle/fluid/operators/fused/multi_gru_op.cc  +1 -1
paddle/fluid/operators/fused/resnet_basic_block_op.cc  +2 -2
paddle/fluid/operators/fused/resnet_unit_op.cc  +2 -2
paddle/fluid/operators/grid_sampler_op.cc  +2 -2
paddle/fluid/operators/grid_sampler_op_mlu.cc  +1 -2
paddle/fluid/operators/group_norm_op.cc  +1 -1
paddle/fluid/operators/group_norm_op.cu  +3 -5
paddle/fluid/operators/group_norm_op.h  +3 -5
paddle/fluid/operators/group_norm_op_npu.cc  +2 -4
paddle/fluid/operators/inplace_abn_op.cc  +4 -4
paddle/fluid/operators/instance_norm_op.h  +1 -1
paddle/fluid/operators/instance_norm_op_npu.cc  +2 -2
paddle/fluid/operators/interpolate_op.cc  +11 -11
paddle/fluid/operators/interpolate_op.cu  +7 -7
paddle/fluid/operators/interpolate_op.h  +7 -7
paddle/fluid/operators/interpolate_op_npu.cc  +3 -5
paddle/fluid/operators/interpolate_v2_op.cc  +11 -11
paddle/fluid/operators/interpolate_v2_op_mlu.cc  +3 -5
paddle/fluid/operators/interpolate_v2_op_npu.cc  +3 -5
paddle/fluid/operators/layer_norm_op.cc  +3 -3
paddle/fluid/operators/layer_norm_op_npu.cc  +1 -1
paddle/fluid/operators/lrn_op.cc  +9 -9
paddle/fluid/operators/lrn_op.cu  +1 -1
paddle/fluid/operators/lrn_op.h  +5 -5
paddle/fluid/operators/math/im2col.h  +1 -1
paddle/fluid/operators/math/vol2col.h  +1 -1
paddle/fluid/operators/matmul_op.cc  +5 -7
paddle/fluid/operators/matmul_v2_op.cc  +4 -6
paddle/fluid/operators/matrix_rank_op.cc  +1 -1
paddle/fluid/operators/mkldnn/activation_mkldnn_op.cc  +1 -1
paddle/fluid/operators/mkldnn/conv_mkldnn_op.cc  +12 -12
paddle/fluid/operators/mkldnn/conv_transpose_mkldnn_op.cc  +1 -1
paddle/fluid/operators/mkldnn/dequantize_mkldnn_op.cc  +1 -1
paddle/fluid/operators/mkldnn/fc_mkldnn_op.cc  +0 -1
paddle/fluid/operators/mkldnn/interpolate_mkldnn_op.cc  +1 -1
paddle/fluid/operators/mkldnn/lrn_mkldnn_op.cc  +1 -1
paddle/fluid/operators/mkldnn/matmul_v2_mkldnn_op.cc  +1 -1
paddle/fluid/operators/mkldnn/mul_mkldnn_op.cc  +0 -1
paddle/fluid/operators/mkldnn/pool_mkldnn_op.cc  +1 -1
paddle/fluid/operators/mkldnn/quantize_mkldnn_op.cc  +1 -1
paddle/fluid/operators/mkldnn/reshape_mkldnn_op.cc  +2 -2
paddle/fluid/operators/mkldnn/transpose_mkldnn_op.cc  +1 -1
paddle/fluid/operators/mlu/mlu_baseop.h  +1 -1
paddle/fluid/operators/mode_op.cc  +1 -1
paddle/fluid/operators/mul_op.cc  +2 -2
paddle/fluid/operators/norm_utils.cu.h  +21 -21
paddle/fluid/operators/norm_utils.h  +1 -1
paddle/fluid/operators/optimizers/sgd_op.cc  +1 -1
paddle/fluid/operators/pad2d_op.cc  +6 -7
paddle/fluid/operators/pad3d_op.cc  +6 -7
paddle/fluid/operators/pool_op.cc  +11 -11
paddle/fluid/operators/prelu_op.cc  +8 -8
paddle/fluid/operators/quantize_op.cc  +1 -1
paddle/fluid/operators/reduce_ops/reduce_op.h  +2 -2
paddle/fluid/operators/requantize_op.cc  +1 -1
paddle/fluid/operators/roi_align_op_mlu.cc  +1 -1
paddle/fluid/operators/sequence_ops/sequence_softmax_op.cc  +2 -2
paddle/fluid/operators/slice_op.cc  +2 -2
paddle/fluid/operators/softmax_op.cc  +2 -2
paddle/fluid/operators/split_op.cc  +1 -1
paddle/fluid/operators/squeeze_op.cc  +4 -4
paddle/fluid/operators/sum_op.cc  +1 -1
paddle/fluid/operators/sync_batch_norm_op_mlu.cc  +2 -2
paddle/fluid/operators/sync_batch_norm_op_npu.cc  +30 -30
paddle/fluid/operators/temporal_shift_op.cu  +2 -4
paddle/fluid/operators/temporal_shift_op.h  +2 -3
paddle/fluid/operators/tensor_formatter.cc  +1 -2
paddle/fluid/operators/top_k_op.cc  +1 -1
paddle/fluid/operators/top_k_v2_op.cc  +1 -1
paddle/fluid/operators/transfer_layout_op.h  +1 -1
paddle/fluid/operators/transpose_op.cc  +5 -5
paddle/fluid/operators/truncated_gaussian_random_op.cc  +1 -1
paddle/fluid/operators/warpctc_op.cc  +1 -1
paddle/fluid/platform/device/npu/npu_op_runner.h  +1 -1
paddle/fluid/platform/mkldnn_helper.h  +13 -14
paddle/fluid/platform/mkldnn_reuse.h  +0 -2
paddle/fluid/pybind/eager_properties.cc  +3 -6
paddle/fluid/pybind/imperative.cc  +5 -5
paddle/fluid/pybind/tensor.cc  +2 -2
paddle/infrt/host_context/value.h  +1 -1
paddle/phi/api/include/tensor.h  +1 -1
paddle/phi/capi/include/type_utils.h  +6 -7
paddle/phi/common/layout.h  +9 -18
paddle/phi/core/dense_tensor.inl  +1 -1
paddle/phi/core/dense_tensor_impl.cc  +1 -3
paddle/phi/core/kernel_factory.h  +0 -1
paddle/phi/core/tensor_meta.h  +0 -3
paddle/phi/infermeta/binary.cc  +3 -4
paddle/phi/infermeta/multiary.cc  +4 -8
paddle/phi/infermeta/ternary.cc  +1 -2
paddle/phi/kernels/cpu/batch_norm_grad_kernel.cc  +2 -4
paddle/phi/kernels/cpu/batch_norm_kernel.cc  +1 -1
paddle/phi/kernels/cpu/group_norm_grad_kernel.cc  +1 -2
paddle/phi/kernels/cpu/group_norm_kernel.cc  +1 -2
paddle/phi/kernels/cpu/interpolate_grad_kernel.cc  +3 -6
paddle/phi/kernels/cpu/interpolate_kernel.cc  +3 -6
paddle/phi/kernels/cpu/temporal_shift_grad_kernel.cc  +1 -2
paddle/phi/kernels/cpu/temporal_shift_kernel.cc  +1 -2
paddle/phi/kernels/funcs/data_layout_transform.h  +3 -3
paddle/phi/kernels/gpu/batch_norm_grad_kernel.cu  +2 -4
paddle/phi/kernels/gpu/batch_norm_kernel.cu  +1 -2
paddle/phi/kernels/gpu/conv_transpose_grad_kernel.cu  +1 -2
paddle/phi/kernels/gpu/conv_transpose_kernel.cu  +1 -2
paddle/phi/kernels/gpu/depthwise_conv.h  +0 -2
paddle/phi/kernels/gpu/depthwise_conv_grad_kernel.cu  +2 -3
paddle/phi/kernels/gpu/depthwise_conv_kernel.cu  +2 -3
paddle/phi/kernels/gpu/group_norm_grad_kernel.cu  +1 -2
paddle/phi/kernels/gpu/group_norm_kernel.cu  +1 -2
paddle/phi/kernels/gpu/interpolate_grad_kernel.cu  +3 -6
paddle/phi/kernels/gpu/interpolate_kernel.cu  +3 -6
paddle/phi/kernels/gpu/sync_batch_norm_kernel.cu  +7 -8
paddle/phi/kernels/gpu/sync_batch_norm_utils.h  +1 -2
paddle/phi/kernels/gpu/temporal_shift_grad_kernel.cu  +1 -2
paddle/phi/kernels/gpu/temporal_shift_kernel.cu  +1 -2
paddle/phi/kernels/impl/conv_transpose_grad_kernel_impl.h  +1 -2
paddle/phi/kernels/impl/conv_transpose_kernel_impl.h  +1 -2
paddle/phi/kernels/xpu/batch_norm_grad_kernel.cc  +1 -2
paddle/phi/kernels/xpu/batch_norm_kernel.cc  +1 -2
paddle/phi/kernels/xpu/grid_sample_kernel.cc  +1 -2
paddle/phi/kernels/xpu/interpolate_grad_kernel.cc  +1 -2
paddle/phi/kernels/xpu/interpolate_kernel.cc  +1 -2
paddle/phi/kernels/xpu/temporal_shift_grad_kernel.cc  +1 -2
paddle/phi/kernels/xpu/temporal_shift_kernel.cc  +1 -2
paddle/fluid/eager/auto_code_generator/generator/codegen_utils.py
@@ -44,7 +44,7 @@ yaml_types_mapping = {
     'float': 'float', 'double': 'double', 'bool': 'bool', \
     'str': 'std::string', \
     'str[]': 'std::vector<std::string>', 'float[]': 'std::vector<float>', \
-    'Place': 'paddle::Place', 'DataLayout': 'paddle::experimental::DataLayout', 'DataType': 'paddle::experimental::DataType', \
+    'Place': 'paddle::Place', 'DataLayout': 'phi::DataLayout', 'DataType': 'paddle::experimental::DataType', \
     'int64_t[]': 'std::vector<int64_t>', 'int[]': 'std::vector<int>',
     'Tensor': 'Tensor',
     'Tensor[]': 'std::vector<Tensor>',
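This mapping drives the eager auto code generator: an attribute declared as DataLayout in an op's YAML spec is now emitted as phi::DataLayout in the generated C++ API. A hypothetical generated declaration, for illustration only (the op and parameter names here are made up, not taken from this diff):

    // Sketch of a generated eager API declaration after this change.
    paddle::experimental::Tensor my_op_ad_func(
        const paddle::experimental::Tensor& x,
        phi::DataLayout data_layout);  // previously paddle::experimental::DataLayout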
paddle/fluid/eager/eager_layout_auto_tune.h
@@ -22,7 +22,7 @@ namespace egr {
 inline bool NeedTransLayout(
     const paddle::small_vector<std::vector<paddle::experimental::Tensor>,
                                kSlotSmallVectorSize>& tensors_vector,
-    const paddle::experimental::DataLayout& layout) {
+    const phi::DataLayout& layout) {
   for (size_t i = 0; i < tensors_vector.size(); i++) {
     for (size_t idx = 0; idx < tensors_vector[0].size(); idx++) {
       if (layout != tensors_vector[i][idx].layout()) {
@@ -40,8 +40,7 @@ inline std::shared_ptr<EagerLayoutTransformer> EagerLayoutAutotune(
   // For agnostic op like add, relu, exp
   auto first_layout = tensors_vector[0][0].layout();
   auto desired_layout = DesiredLayout();
-  bool is_started =
-      !(desired_layout == paddle::experimental::DataLayout::UNDEFINED);
+  bool is_started = !(desired_layout == phi::DataLayout::UNDEFINED);
   if (is_started && NeedTransLayout(tensors_vector, first_layout)) {
     bool need_trans_back = false;
     for (size_t i = 0; i < tensors_vector.size(); i++) {
@@ -68,7 +67,7 @@ inline std::shared_ptr<EagerLayoutTransformer> EagerLayoutAutotune(
                                kSlotSmallVectorSize>& tensors_vector,
     T* attr) {
   // For lightly op like reduce
-  if (!(DesiredLayout() == paddle::experimental::DataLayout::UNDEFINED)) {
+  if (!(DesiredLayout() == phi::DataLayout::UNDEFINED)) {
     VLOG(4) << "LayoutAutotune was unstarted. Current op :" << op_name;
     return std::make_shared<EagerLayoutTransformer>(
         op_name, tensors_vector, tensors_vector[0][0].layout());
@@ -96,7 +95,7 @@ inline std::shared_ptr<EagerLayoutTransformer> EagerLayoutAutotune(
   // Heavily op with (string) data_format, data_layout
   auto transposer = std::make_shared<EagerLayoutTransformer>(
       op_name, tensors_vector, tensors_vector[0][0].layout());
-  if (DesiredLayout() == paddle::experimental::DataLayout::UNDEFINED) {
+  if (DesiredLayout() == phi::DataLayout::UNDEFINED) {
     // Layout autotune only supports model with convolutional layers
     if (op_name != "conv2d") {
       VLOG(4) << "LayoutAutotune was unstarted. Current op :" << op_name;
@@ -113,15 +112,15 @@ inline std::shared_ptr<EagerLayoutTransformer> EagerLayoutAutotune(
             << op_name;
     if (is_tune_fp32) {
       paddle::imperative::LayoutAutoTune::Instance().SetDesiredLayout(
-          paddle::experimental::DataLayout::NCHW);
+          phi::DataLayout::NCHW);
       paddle::imperative::LayoutAutoTune::Instance().SetDefaultLayout(
-          paddle::experimental::DataLayout::NHWC);
+          phi::DataLayout::NHWC);
     } else if (is_tune_fp16) {
       paddle::imperative::LayoutAutoTune::Instance().SetDesiredLayout(
-          paddle::experimental::DataLayout::NHWC);
+          phi::DataLayout::NHWC);
       paddle::imperative::LayoutAutoTune::Instance().SetDefaultLayout(
-          paddle::experimental::DataLayout::NCHW);
+          phi::DataLayout::NCHW);
     } else {
       VLOG(4) << "DisableLayoutAutoTune accoding to Conv op"
               << " dtype : " << data_type << " format : " << (*attr);
@@ -147,7 +146,7 @@ inline std::shared_ptr<EagerLayoutTransformer> EagerLayoutAutotune(
                                kSlotSmallVectorSize>& tensors_vector,
     std::vector<int>* attr) {
   // lightly transpose
-  if (DesiredLayout() == paddle::experimental::DataLayout::UNDEFINED) {
+  if (DesiredLayout() == phi::DataLayout::UNDEFINED) {
     VLOG(4) << "LayoutAutotune was unstarted. Current op :" << op_name;
     return std::make_shared<EagerLayoutTransformer>(
         op_name, tensors_vector, tensors_vector[0][0].layout());
@@ -157,8 +156,7 @@ inline std::shared_ptr<EagerLayoutTransformer> EagerLayoutAutotune(
       (tensors_vector[0][0].layout() == DesiredLayout())) {
     auto trans = std::make_shared<EagerTransposeOpTransformer>(op_name);
     trans->SetAttr(attr,
-                   tensors_vector[0][0].layout() ==
-                       paddle::experimental::DataLayout::NHWC);
+                   tensors_vector[0][0].layout() == phi::DataLayout::NHWC);
     return trans;
   }
   return std::make_shared<EagerLightlyLayoutSensitiveOpTransformer>(op_name);
@@ -173,7 +171,7 @@ EagerLayoutAutotune<paddle::experimental::Scalar, bool>(
                                kSlotSmallVectorSize>& tensors_vector,
     paddle::experimental::Scalar* axis,
     bool* keep_dim) {
-  if (DesiredLayout() == paddle::experimental::DataLayout::UNDEFINED) {
+  if (DesiredLayout() == phi::DataLayout::UNDEFINED) {
     VLOG(4) << "LayoutAutotune was unstarted. Current op :" << op_name;
     return std::make_shared<EagerLayoutTransformer>(
         op_name, tensors_vector, tensors_vector[0][0].layout());
@@ -183,9 +181,8 @@ EagerLayoutAutotune<paddle::experimental::Scalar, bool>(
       (tensors_vector[0][0].layout() == DesiredLayout()) && (*keep_dim)) {
     std::shared_ptr<EagerArgmaxOpTransformer> argmax_transform = nullptr;
     argmax_transform = std::make_shared<EagerArgmaxOpTransformer>(op_name);
-    argmax_transform->SetAttr(axis,
-                              tensors_vector[0][0].layout() ==
-                                  paddle::experimental::DataLayout::NHWC);
+    argmax_transform->SetAttr(
+        axis, tensors_vector[0][0].layout() == phi::DataLayout::NHWC);
     return argmax_transform;
   }
   return std::make_shared<EagerLightlyLayoutSensitiveOpTransformer>(op_name);
@@ -198,7 +195,7 @@ inline std::shared_ptr<EagerLayoutTransformer> EagerLayoutAutotune<int, int>(
                                kSlotSmallVectorSize>& tensors_vector,
     int* start_axis,
     int* stop_axis) {
-  if (DesiredLayout() == paddle::experimental::DataLayout::UNDEFINED) {
+  if (DesiredLayout() == phi::DataLayout::UNDEFINED) {
     VLOG(4) << "Optimze Layout was not started" << op_name;
     return std::make_shared<EagerLayoutTransformer>(
         op_name, tensors_vector, tensors_vector[0][0].layout());
@@ -221,7 +218,7 @@ EagerLayoutAutotune<paddle::experimental::Scalar>(
     const paddle::small_vector<std::vector<paddle::experimental::Tensor>,
                                kSlotSmallVectorSize>& tensors_vector,
     paddle::experimental::Scalar* axis) {
-  if (DesiredLayout() == paddle::experimental::DataLayout::UNDEFINED) {
+  if (DesiredLayout() == phi::DataLayout::UNDEFINED) {
     VLOG(4) << "Optimze Layout was not started" << op_name;
     return std::make_shared<EagerLayoutTransformer>(
         op_name, tensors_vector, tensors_vector[0][0].layout());
paddle/fluid/eager/eager_layout_transformer.h
@@ -20,17 +20,16 @@
 #include "paddle/phi/core/tensor_utils.h"
 namespace egr {
 inline paddle::experimental::Tensor EagerTraceTransposeOp(
-    const paddle::experimental::DataLayout layout,
-    const paddle::experimental::Tensor& in) {
+    const phi::DataLayout layout, const paddle::experimental::Tensor& in) {
   VLOG(4) << "AutoTune Transpose from " << in.layout() << " to " << layout
           << ", tensor's dim size is " << in.shape().size();
   if (in.shape().size() != 4) {
     return in;
   }
   std::vector<int> axis;
-  if (layout == paddle::experimental::DataLayout::NHWC) {
+  if (layout == phi::DataLayout::NHWC) {
     axis = {0, 2, 3, 1};
-  } else if (layout == paddle::experimental::DataLayout::NCHW) {
+  } else if (layout == phi::DataLayout::NCHW) {
     axis = {0, 3, 1, 2};
   } else {
     axis = {0, 1, 2, 3};
@@ -40,16 +39,16 @@ inline paddle::experimental::Tensor EagerTraceTransposeOp(
   return out_tensor;
 }

-inline paddle::experimental::DataLayout DesiredLayout() {
+inline phi::DataLayout DesiredLayout() {
   return paddle::imperative::LayoutAutoTune::Instance().GetDesiredLayout();
 }

-inline paddle::experimental::DataLayout DefaultLayout() {
+inline phi::DataLayout DefaultLayout() {
   return paddle::imperative::LayoutAutoTune::Instance().GetDefaultLayout();
 }

 inline void UpdateLayout(paddle::experimental::Tensor* out_tensor,
-                         const paddle::experimental::DataLayout layout) {
+                         const phi::DataLayout layout) {
   if (out_tensor->layout() != layout) {
     VLOG(4) << "Update out_tensor's layout from " << out_tensor->layout()
             << " to " << layout;
@@ -60,7 +59,7 @@ inline void UpdateLayout(paddle::experimental::Tensor* out_tensor,
 }

 inline void DealWithShapeOp(paddle::experimental::Tensor* out_tensor,
-                            const paddle::experimental::DataLayout layout,
+                            const phi::DataLayout layout,
                             int dim_size) {
   auto des_layout = DesiredLayout();
   auto def_layout = DefaultLayout();
@@ -80,7 +79,7 @@ inline void DealWithShapeOp(paddle::experimental::Tensor* out_tensor,
   for (int i = 0; i < dim_size; i++) {
     dims[i] = value[i];
   }
-  auto des_str = paddle::framework::DataLayoutToString(des_layout);
+  auto des_str = phi::DataLayoutToString(des_layout);
   if (change_dim && des_str == "NCHW") {
     // NCHW -> NHWC
     VLOG(6) << "layout autotune get Shape from NCHW -> NHWC " << value[0] << " "
@@ -104,7 +103,7 @@ inline void DealWithShapeOp(paddle::experimental::Tensor* out_tensor,
 // agnostic op
 class EagerLayoutTransformer {
-  using Layout = paddle::experimental::DataLayout;
+  using Layout = phi::DataLayout;

  public:
   EagerLayoutTransformer() : op_name_(""), final_layout_(Layout::UNDEFINED) {}
@@ -204,7 +203,7 @@ class EagerHeavilyLayoutSensitiveOpTransformer : public EagerLayoutTransformer {
                                           std::string* layout)
       : op_name_(op_name), desired_layout_(DesiredLayout()) {
     VLOG(4) << "Heavily op: " << op_name;
-    *layout = paddle::framework::DataLayoutToString(DesiredLayout());
+    *layout = phi::DataLayoutToString(DesiredLayout());
   }

   paddle::experimental::Tensor TransInTensor(
@@ -242,7 +241,7 @@ class EagerHeavilyLayoutSensitiveOpTransformer : public EagerLayoutTransformer {
  protected:
   std::string op_name_;
-  const paddle::experimental::DataLayout desired_layout_;
+  const phi::DataLayout desired_layout_;
   std::unordered_set<std::string> heavily_input_{"x", "y", "input"};
 };
@@ -253,18 +252,16 @@ class EagerLightlyLayoutSensitiveOpTransformer : public EagerLayoutTransformer {
       const std::string& op_name) {
     VLOG(4) << "Lightly op : " << op_name;
     auto desired_layout = DesiredLayout();
-    final_layout_ = paddle::framework::DataLayoutToString(desired_layout);
+    final_layout_ = phi::DataLayoutToString(desired_layout);
   }

   // transpose from desired to default
   paddle::experimental::Tensor TransInTensor(
       const std::string& in_name, const paddle::experimental::Tensor& in) {
-    std::string input_layout =
-        paddle::framework::DataLayoutToString(in.layout());
+    std::string input_layout = phi::DataLayoutToString(in.layout());
     auto default_layout = DefaultLayout();
     if (final_layout_ == input_layout && in.shape().size() == 4) {
-      auto out_tensor = EagerTraceTransposeOp(
-          paddle::experimental::DataLayout::UNDEFINED, in);
+      auto out_tensor = EagerTraceTransposeOp(phi::DataLayout::UNDEFINED, in);
       phi::DenseTensorUtils::GetMutableMeta(
           static_cast<phi::DenseTensor*>(out_tensor.impl().get()))
           ->layout = default_layout;
@@ -282,8 +279,8 @@ class EagerLightlyLayoutSensitiveOpTransformer : public EagerLayoutTransformer {
     for (size_t i = 0; i < in.size(); i++) {
       auto in_tensor = in[i];
       if (in_tensor.layout() == desired_layout) {
-        auto out_tensor = EagerTraceTransposeOp(
-            paddle::experimental::DataLayout::UNDEFINED, in_tensor);
+        auto out_tensor =
+            EagerTraceTransposeOp(phi::DataLayout::UNDEFINED, in_tensor);
         phi::DenseTensorUtils::GetMutableMeta(
             static_cast<phi::DenseTensor*>(out_tensor.impl().get()))
             ->layout = default_layout;
@@ -397,14 +394,12 @@ class EagerConcatOpTransformer
     VLOG(4) << "AutoTuneTransformer op : " << op_name;
   }

   void SetAttr(paddle::experimental::Scalar* axis,
-               paddle::framework::DataLayout layout) {
+               phi::DataLayout layout) {
     std::vector<int> perm_nhwc = {0, 3, 1, 2};
     std::vector<int> perm_nchw = {0, 2, 3, 1};
     int axes = axis->to<int>();
     axes = axes < 0 ? axes + 4 : axes;
     auto perm =
-        (paddle::framework::DataLayout::NHWC == layout) ? perm_nhwc : perm_nchw;
+        (phi::DataLayout::NHWC == layout) ? perm_nhwc : perm_nchw;
     (*axis) = static_cast<paddle::experimental::Scalar>(perm[axes]);
   }
paddle/fluid/eager/tests/data_structure_tests/eager_tensor_test.cc
@@ -90,7 +90,7 @@ TEST(Tensor, MemberFunction) {
   auto expected_dim = phi::make_ddim({1, 2});
   CHECK_EQ(et3.dims(), expected_dim);
   CHECK_EQ(et3.type(), paddle::experimental::DataType::FLOAT32);
-  CHECK_EQ(et3.layout(), paddle::experimental::DataLayout::NCHW);
+  CHECK_EQ(et3.layout(), phi::DataLayout::NCHW);
   CHECK(paddle::platform::is_cpu_place(et3.place()));
   VLOG(6) << "Get impl";
   auto* dt3_ptr =
@@ -202,10 +202,9 @@ TEST(EagerVariable, Constructor) {
 TEST(EagerVariable, DataLayout) {
   paddle::experimental::Tensor tensor;
   phi::DenseTensorMeta meta =
-      phi::DenseTensorMeta(phi::DataType::FLOAT32,
-                           phi::make_ddim({1, 1, 1, 1}),
-                           paddle::experimental::DataLayout::UNDEFINED);
+      phi::DenseTensorMeta(phi::DataType::FLOAT32,
+                           phi::make_ddim({1, 1, 1, 1}),
+                           phi::DataLayout::UNDEFINED);
   std::shared_ptr<phi::DenseTensor> dt = std::make_shared<phi::DenseTensor>(
       std::make_unique<paddle::experimental::DefaultAllocator>(
           paddle::platform::CPUPlace())
@@ -219,11 +218,10 @@ TEST(EagerVariable, DataLayout) {
   tensor.set_impl(dt);
   auto eager_var = std::make_shared<egr::EagerVariable>(tensor);
   auto layout = paddle::imperative::GetDataLayout(eager_var);
-  CHECK_EQ(layout, paddle::experimental::DataLayout::UNDEFINED);
-  paddle::imperative::SetDataLayout(eager_var,
-                                    paddle::experimental::DataLayout::NCHW);
+  CHECK_EQ(layout, phi::DataLayout::UNDEFINED);
+  paddle::imperative::SetDataLayout(eager_var, phi::DataLayout::NCHW);
   layout = paddle::imperative::GetDataLayout(eager_var);
-  CHECK_EQ(layout, paddle::experimental::DataLayout::NCHW);
+  CHECK_EQ(layout, phi::DataLayout::NCHW);
 }

 TEST(VariableCompatTensor, MemberFunction) {
paddle/fluid/framework/convert_utils.h
@@ -27,7 +27,7 @@ namespace paddle {
 namespace framework {

 using DataType = paddle::experimental::DataType;
-using DataLayout = paddle::experimental::DataLayout;
+using DataLayout = phi::DataLayout;

 DataType TransToPhiDataType(
     const paddle::framework::proto::VarType::Type& dtype);
paddle/fluid/framework/data_layout_transform.h
@@ -67,7 +67,7 @@ inline MKLDNNMemoryFormat ToMKLDNNFormat(const DataLayout& layout) {
     default:
       PADDLE_THROW(platform::errors::InvalidArgument(
           "Fail to convert layout %s to MKLDNN format.",
-          DataLayoutToString(layout)));
+          phi::DataLayoutToString(layout)));
   }
 }
paddle/fluid/framework/data_layout_transform_test.cc
@@ -21,27 +21,27 @@ TEST(DataTransform, DataLayoutFunction) {
   phi::DenseTensor in = phi::DenseTensor();
   phi::DenseTensor out = phi::DenseTensor();
   in.mutable_data<double>(phi::make_ddim({2, 3, 1, 2}), place);
-  in.set_layout(paddle::framework::DataLayout::kNHWC);
+  in.set_layout(phi::DataLayout::kNHWC);

   auto kernel_nhwc =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::FP32,
                                       place,
-                                      paddle::framework::DataLayout::kNHWC,
+                                      phi::DataLayout::kNHWC,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_ncwh =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::FP32,
                                       place,
-                                      paddle::framework::DataLayout::kNCHW,
+                                      phi::DataLayout::kNCHW,
                                       paddle::framework::LibraryType::kPlain);

   paddle::framework::TransDataLayout(kernel_nhwc, kernel_ncwh, in, &out);
-  EXPECT_TRUE(out.layout() == paddle::framework::DataLayout::kNCHW);
+  EXPECT_TRUE(out.layout() == phi::DataLayout::kNCHW);
   EXPECT_TRUE(out.dims() == phi::make_ddim({2, 2, 3, 1}));

   TransDataLayout(kernel_ncwh, kernel_nhwc, in, &out);
-  EXPECT_TRUE(in.layout() == paddle::framework::DataLayout::kNHWC);
+  EXPECT_TRUE(in.layout() == phi::DataLayout::kNHWC);
   EXPECT_TRUE(in.dims() == phi::make_ddim({2, 3, 1, 2}));
 }
paddle/fluid/framework/data_type_transform_test.cc
@@ -22,43 +22,43 @@ TEST(DataTypeTransform, CPUTransform) {
   auto kernel_fp16 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::FP16,
                                       place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_bf16 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::BF16,
                                       place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_fp32 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::FP32,
                                       place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_fp64 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::FP64,
                                       place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_int32 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::INT32,
                                       place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_int64 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::INT64,
                                       place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_bool =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::BOOL,
                                       place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);

   // data type transform from float32
paddle/fluid/framework/data_type_transform_test.cu
@@ -27,37 +27,37 @@ TEST(DataTypeTransform, GPUTransform) {
   auto kernel_fp16 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::FP16,
                                       gpu_place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_fp32 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::FP32,
                                       gpu_place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_fp64 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::FP64,
                                       gpu_place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_int32 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::INT32,
                                       gpu_place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_int64 =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::INT64,
                                       gpu_place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);
   auto kernel_bool =
       paddle::framework::OpKernelType(paddle::framework::proto::VarType::BOOL,
                                       gpu_place,
-                                      paddle::framework::DataLayout::kAnyLayout,
+                                      phi::DataLayout::kAnyLayout,
                                       paddle::framework::LibraryType::kPlain);

   // data type transform from float32
paddle/fluid/framework/details/fetch_async_op_handle.cc
@@ -78,8 +78,8 @@ static void CheckTensorAttrs(const LoDTensor *tensor,
           "(th) fetched variable. Please set the "
           "parameter `return_merged = False` when you "
           "call the `Executor.run()` method.",
-          DataLayoutToString(layout),
-          DataLayoutToString(tensor->layout()),
+          phi::DataLayoutToString(layout),
+          phi::DataLayoutToString(tensor->layout()),
           offset));
 }
@@ -149,7 +149,7 @@ void FetchAsyncOpHandle::FetchMergedLodTensor(
     LoDTensor *dst_lodtensor) {
   // calc dst type,layout,dim,lod and calc check dim
   proto::VarType::Type new_type = proto::VarType::FP32;
-  framework::DataLayout new_layout;
+  phi::DataLayout new_layout;
   framework::DDim new_dim;
   LoD new_lod = src_lodtensors[0]->lod();
paddle/fluid/framework/lod_tensor.cc
@@ -348,7 +348,7 @@ void MergeLoDTensor(LoDTensor *target,
   framework::DDim new_dim = lod_tensors[0]->dims();
   proto::VarType::Type new_type = proto::VarType::FP32;
-  framework::DataLayout new_layout = lod_tensors[0]->layout();
+  phi::DataLayout new_layout = lod_tensors[0]->layout();
   for (auto *t : lod_tensors) {
     if (t->numel() && t->IsInitialized()) {
       new_dim = t->dims();
@@ -378,8 +378,8 @@ void MergeLoDTensor(LoDTensor *target,
         platform::errors::InvalidArgument(
             "LoDTensor layout does not match, expected layout is %s, "
             "actual layout is %s.",
-            DataLayoutToString(new_layout),
-            DataLayoutToString(t->layout())));
+            phi::DataLayoutToString(new_layout),
+            phi::DataLayoutToString(t->layout())));
     auto tensor_dims = t->dims();
     PADDLE_ENFORCE_EQ(tensor_dims.size(),
                       new_dim.size(),
paddle/fluid/framework/new_executor/data_transfer.cc
@@ -223,14 +223,14 @@ std::shared_ptr<OperatorBase> TransferLayout(const std::string& var_name,
 #ifdef PADDLE_WITH_MKLDNN
   // NOTE(zhiqiu): hot fix, follow the same logic in DataCopy() in fetch_op.cc
-  if (in_layout == framework::DataLayout::kMKLDNN &&
+  if (in_layout == phi::DataLayout::kMKLDNN &&
       var_name == framework::GradVarName("Filter") && is_fetch_v2) {
     VLOG(4) << "Match special case(Filter && fetch_v2) " << var_name;
-    out_layout = framework::DataLayout::kNCHW;
+    out_layout = phi::DataLayout::kNCHW;
   }

-  if (in_layout == framework::DataLayout::ONEDNN &&
-      out_layout != framework::DataLayout::ONEDNN) {
+  if (in_layout == phi::DataLayout::ONEDNN &&
+      out_layout != phi::DataLayout::ONEDNN) {
     auto target_layout = phi::OneDNNContext::tls().get_cur_paddle_data_layout();
     VLOG(4) << "TransDataLayoutFromOneDNN: " << in_layout << "->"
             << target_layout;
paddle/fluid/framework/new_executor/new_executor_defs.cc
...
@@ -337,7 +337,7 @@ bool InterpretercoreInferShapeContext::IsRunMKLDNNKernel() const {
     auto& op_with_kernel = dynamic_cast<const OperatorWithKernel&>(op_);
     return ((op_with_kernel.kernel_type()) &&
             (op_with_kernel.kernel_type()->data_layout_ ==
-             framework::DataLayout::kMKLDNN));
+             phi::DataLayout::kMKLDNN));
   } catch (std::bad_cast& exp) {
     return false;
   }
...
paddle/fluid/framework/op_kernel_type.h
...
@@ -25,6 +25,8 @@ limitations under the License. */
 namespace paddle {
 namespace framework {
 
+using DataLayout = phi::DataLayout;
+
 class OpKernelType {
  public:
   constexpr static int kDefaultCustomizedTypeValue = 0;
...
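A note on the hunk above: the new `using DataLayout = phi::DataLayout;` alias is what keeps the rest of this change mechanical, since code that still spells `framework::DataLayout` now resolves to the very same enum as the `phi::` spelling. A minimal standalone sketch of that effect (the mock enum below is illustrative, not the real phi header):

    #include <iostream>

    // Hypothetical mock of the moved enum; the real one lives in phi.
    namespace phi {
    enum class DataLayout { UNDEFINED, kNCHW, kNHWC, kAnyLayout };
    }  // namespace phi

    namespace paddle {
    namespace framework {
    using DataLayout = phi::DataLayout;  // the alias added in op_kernel_type.h
    }  // namespace framework
    }  // namespace paddle

    int main() {
      // The old spelling keeps compiling and names the same type,
      paddle::framework::DataLayout a = paddle::framework::DataLayout::kNCHW;
      // so it compares equal to the new phi spelling.
      std::cout << (a == phi::DataLayout::kNCHW) << "\n";  // prints 1
      return 0;
    }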
paddle/fluid/framework/op_kernel_type_test.cc
...
@@ -20,7 +20,7 @@ TEST(OpKernelType, ToString) {
   using OpKernelType = paddle::framework::OpKernelType;
   using DataType = paddle::framework::proto::VarType;
   using CPUPlace = paddle::platform::CPUPlace;
-  using DataLayout = paddle::framework::DataLayout;
+  using DataLayout = phi::DataLayout;
   using LibraryType = paddle::framework::LibraryType;
 
   OpKernelType op_kernel_type(
...
@@ -43,7 +43,7 @@ TEST(OpKernelType, Hash) {
   using DataType = paddle::framework::proto::VarType;
   using CPUPlace = paddle::platform::CPUPlace;
   using CUDAPlace = paddle::platform::CUDAPlace;
-  using DataLayout = paddle::framework::DataLayout;
+  using DataLayout = phi::DataLayout;
   using LibraryType = paddle::framework::LibraryType;
 
   OpKernelType op_kernel_type_1(
...
paddle/fluid/framework/op_registry.h
...
@@ -175,7 +175,7 @@ inline void RegisterKernelClass(const char* op_type,
   if (std::is_same<PlaceType, platform::CustomPlace>::value) {
     OpKernelType key(ToDataType(std::type_index(typeid(T))),
                     platform::CustomPlace(library_type),
-                     StringToDataLayout(data_layout),
+                     phi::StringToDataLayout(data_layout),
                     LibraryType::kPlain,
                     customized_type_value);
     OperatorWithKernel::AllOpKernels()[op_type][key] = func;
...
@@ -184,7 +184,7 @@ inline void RegisterKernelClass(const char* op_type,
 #endif
   OpKernelType key(ToDataType(std::type_index(typeid(T))),
                   PlaceType(),
-                   StringToDataLayout(data_layout),
+                   phi::StringToDataLayout(data_layout),
                   StringToLibraryType(library_type),
                   customized_type_value);
   OperatorWithKernel::AllOpKernels()[op_type][key] = func;
...
paddle/fluid/framework/operator.cc
...
@@ -997,7 +997,7 @@ class RuntimeInferShapeContext : public InferShapeContext {
     auto& op_with_kernel = dynamic_cast<const OperatorWithKernel&>(op_);
     return ((op_with_kernel.kernel_type()) &&
             (op_with_kernel.kernel_type()->data_layout_ ==
-             framework::DataLayout::kMKLDNN));
+             phi::DataLayout::kMKLDNN));
   } catch (const std::bad_cast& exp) {
     return false;
   }
...
paddle/fluid/framework/operator_test.cc
...
@@ -132,7 +132,7 @@ class OpWithKernelTest : public OperatorWithKernel {
     int sub_type = ctx.Attr<int>("kernel_sub_type");
     return OpKernelType(proto::VarType::FP32,
                         ctx.GetPlace(),
-                        framework::DataLayout::kAnyLayout,
+                        phi::DataLayout::kAnyLayout,
                         framework::LibraryType::kPlain,
                         sub_type);
   }
...
@@ -655,9 +655,8 @@ class OpUnusedVarTest : public OperatorWithKernel {
   void InferShape(framework::InferShapeContext* ctx) const override {}
   OpKernelType GetExpectedKernelType(
       const ExecutionContext& ctx) const override {
-    return OpKernelType(proto::VarType::FP32,
-                        ctx.GetPlace(),
-                        framework::DataLayout::kAnyLayout);
+    return OpKernelType(
+        proto::VarType::FP32, ctx.GetPlace(), phi::DataLayout::kAnyLayout);
   }
 };
...
paddle/fluid/framework/phi_utils_test.cc
...
@@ -25,7 +25,7 @@ TEST(PhiUtils, TransPhiKernelKeyToOpKernelType) {
   auto op_kernel_type =
       paddle::framework::TransPhiKernelKeyToOpKernelType(kernel_key);
   ASSERT_EQ(op_kernel_type.data_type_, paddle::framework::proto::VarType::FP32);
-  ASSERT_EQ(op_kernel_type.data_layout_, paddle::framework::DataLayout::kNCHW);
+  ASSERT_EQ(op_kernel_type.data_layout_, phi::DataLayout::kNCHW);
   ASSERT_TRUE(paddle::platform::is_cpu_place(op_kernel_type.place_));
   ASSERT_EQ(op_kernel_type.library_type_,
             paddle::framework::LibraryType::kPlain);
...
@@ -36,7 +36,7 @@ TEST(PhiUtils, TransPhiKernelKeyToOpKernelType) {
   op_kernel_type =
       paddle::framework::TransPhiKernelKeyToOpKernelType(kernel_key_mkldnn);
   ASSERT_EQ(op_kernel_type.data_type_, paddle::framework::proto::VarType::FP32);
-  ASSERT_EQ(op_kernel_type.data_layout_, paddle::framework::DataLayout::kNCHW);
+  ASSERT_EQ(op_kernel_type.data_layout_, phi::DataLayout::kNCHW);
   ASSERT_TRUE(paddle::platform::is_cpu_place(op_kernel_type.place_));
   ASSERT_EQ(op_kernel_type.library_type_,
             paddle::framework::LibraryType::kMKLDNN);
...
@@ -48,7 +48,7 @@ TEST(PhiUtils, TransPhiKernelKeyToOpKernelType) {
   op_kernel_type =
       paddle::framework::TransPhiKernelKeyToOpKernelType(kernel_key_cudnn);
   ASSERT_EQ(op_kernel_type.data_type_, paddle::framework::proto::VarType::FP32);
-  ASSERT_EQ(op_kernel_type.data_layout_, paddle::framework::DataLayout::kNCHW);
+  ASSERT_EQ(op_kernel_type.data_layout_, phi::DataLayout::kNCHW);
   ASSERT_TRUE(paddle::platform::is_gpu_place(op_kernel_type.place_));
   ASSERT_EQ(op_kernel_type.library_type_,
             paddle::framework::LibraryType::kCUDNN);
...
@@ -59,7 +59,7 @@ TEST(PhiUtils, TransOpKernelTypeToPhiKernelKey) {
   paddle::framework::OpKernelType op_kernel_type(
       paddle::framework::proto::VarType::FP32,
       paddle::platform::CPUPlace(),
-      paddle::framework::DataLayout::kNCHW);
+      phi::DataLayout::kNCHW);
   auto kernel_key =
       paddle::framework::TransOpKernelTypeToPhiKernelKey(op_kernel_type);
   ASSERT_EQ(kernel_key.dtype(), phi::DataType::FLOAT32);
...
@@ -70,7 +70,7 @@ TEST(PhiUtils, TransOpKernelTypeToPhiKernelKey) {
   paddle::framework::OpKernelType op_kernel_type_mkldnn(
       paddle::framework::proto::VarType::FP32,
       paddle::platform::CPUPlace(),
-      paddle::framework::DataLayout::kMKLDNN,
+      phi::DataLayout::kMKLDNN,
       paddle::framework::LibraryType::kMKLDNN);
   auto kernel_key_mkldnn =
       paddle::framework::TransOpKernelTypeToPhiKernelKey(op_kernel_type_mkldnn);
...
@@ -83,7 +83,7 @@ TEST(PhiUtils, TransOpKernelTypeToPhiKernelKey) {
   paddle::framework::OpKernelType op_kernel_type_cudnn(
       paddle::framework::proto::VarType::FP32,
       paddle::platform::CPUPlace(),
-      paddle::framework::DataLayout::kNCHW,
+      phi::DataLayout::kNCHW,
       paddle::framework::LibraryType::kCUDNN);
   auto kernel_key_cudnn =
       paddle::framework::TransOpKernelTypeToPhiKernelKey(op_kernel_type_cudnn);
...
paddle/fluid/framework/tensor_test.cc
...
@@ -313,9 +313,9 @@ TEST(DenseTensor, ReshapeToMatrix) {
 TEST(DenseTensor, Layout) {
   phi::DenseTensor src;
-  ASSERT_EQ(src.layout(), framework::DataLayout::kNCHW);
-  src.set_layout(framework::DataLayout::kAnyLayout);
-  ASSERT_EQ(src.layout(), framework::DataLayout::kAnyLayout);
+  ASSERT_EQ(src.layout(), phi::DataLayout::kNCHW);
+  src.set_layout(phi::DataLayout::kAnyLayout);
+  ASSERT_EQ(src.layout(), phi::DataLayout::kAnyLayout);
 }
 
 TEST(DenseTensor, FP16) {
...
paddle/fluid/framework/tensor_util.cc
...
@@ -1201,8 +1201,7 @@ std::ostream& operator<<(std::ostream& os, const phi::DenseTensor& t) {
   os << " - place: " << t.place() << "\n";
   os << " - shape: [" << t.dims() << "]\n";
-  os << " - layout: " << paddle::framework::DataLayoutToString(t.layout())
-     << "\n";
+  os << " - layout: " << phi::DataLayoutToString(t.layout()) << "\n";
 #ifdef PADDLE_WITH_MKLDNN
   os << " - format: "
...
paddle/fluid/imperative/infer_shape_context.h
...
@@ -251,7 +251,7 @@ class DygraphInferShapeContext : public framework::InferShapeContext {
   bool IsRunMKLDNNKernel() const override {
     return (op_kernel_type_ &&
-            (op_kernel_type_->data_layout_ == framework::DataLayout::kMKLDNN));
+            (op_kernel_type_->data_layout_ == phi::DataLayout::kMKLDNN));
   }
 
   paddle::small_vector<framework::InferShapeVarPtr, phi::kInputSmallVectorSize>
...
paddle/fluid/imperative/layer.h
...
@@ -211,13 +211,11 @@ class VarBase {
   framework::proto::VarType::Type DataType() const { return var_->DataType(); }
 
-  void SetDataLayout(paddle::experimental::DataLayout data_layout) {
+  void SetDataLayout(phi::DataLayout data_layout) {
     var_->SetDataLayout(data_layout);
   }
 
-  paddle::experimental::DataLayout DataLayout() const {
-    return var_->DataLayout();
-  }
+  phi::DataLayout DataLayout() const { return var_->DataLayout(); }
 
   size_t ElementSize() const { return framework::SizeOfType(var_->DataType()); }
...
paddle/fluid/imperative/layout_autotune.cc
...
@@ -205,7 +205,7 @@ paddle::imperative::NameVarMap<VarType> AutoTuneLayout(
       VLOG(3) << "Tune the layout from "
               << PADDLE_GET_CONST(std::string, (*attrs)["data_format"])
               << " to "
-              << paddle::framework::DataLayoutToString(
+              << phi::DataLayoutToString(
                      LayoutAutoTune::Instance().GetDesiredLayout());
     }
   }
...
paddle/fluid/imperative/layout_autotune.h
...
@@ -26,7 +26,7 @@ namespace imperative {
 class Tracer;
 
-using DataLayout = paddle::experimental::DataLayout;
+using DataLayout = phi::DataLayout;
 
 class LayoutAutoTune {
  public:
...
paddle/fluid/imperative/layout_transformer.h
...
@@ -24,7 +24,7 @@ namespace paddle {
 namespace imperative {
 template <typename VarType>
 void SetOutDataLayout(std::shared_ptr<VarType> var,
-                      const paddle::experimental::DataLayout layout) {
+                      const phi::DataLayout layout) {
   if (var != nullptr && var->Var().IsInitialized()) {
     paddle::imperative::SetDataLayout(var, layout);
     // set out_tensor's layout
...
@@ -61,12 +61,10 @@ std::shared_ptr<VarType> TraceTransposeOp(
   tracer->TraceOp("transpose2", ins, outs, std::move(attrs));
   paddle::imperative::SetDataLayout(out, layout);
   VLOG(4) << "Transpose " << paddle::imperative::GetNameFromVar(var) << "["
-          << paddle::framework::DataLayoutToString(
-                 paddle::imperative::GetDataLayout(var))
+          << phi::DataLayoutToString(paddle::imperative::GetDataLayout(var))
           << "]"
          << " to " << paddle::imperative::GetNameFromVar(out) << "["
-          << paddle::framework::DataLayoutToString(
-                 paddle::imperative::GetDataLayout(out))
+          << phi::DataLayoutToString(paddle::imperative::GetDataLayout(out))
           << "]";
   return out;
 }
...
@@ -103,7 +101,7 @@ class LayoutTransformer {
       }
     }
     VLOG(3) << "Optimze Layout agnostic op: " << type_ << " "
-            << paddle::framework::DataLayoutToString(in_layout);
+            << phi::DataLayoutToString(in_layout);
     if (in_layout != DataLayout::UNDEFINED) {
       SetVarsLayout(outs, in_layout);
     }
...
@@ -185,8 +183,8 @@ class HeavilyLayoutSensitiveOpTransformer : public LayoutTransformer<VarType> {
     // Step 1: Adjust the data_layout attr to the desired layout
     auto desired_layout = LayoutAutoTune::Instance().GetDesiredLayout();
-    std::string desired_layout_str = paddle::framework::DataLayoutToString(
-        LayoutAutoTune::Instance().GetDesiredLayout());
+    std::string desired_layout_str =
+        phi::DataLayoutToString(LayoutAutoTune::Instance().GetDesiredLayout());
     if (attrs->find("data_format") != attrs->end() &&
         PADDLE_GET_CONST(std::string, (*attrs)["data_format"]) !=
             desired_layout_str) {
...
@@ -252,10 +250,10 @@ class LightlyLayoutSensitiveOpTransformer : public LayoutTransformer<VarType> {
     for (auto& var : pair.second) {
       if (var != nullptr) {
         VLOG(3) << "Tune the layout from "
-                << paddle::framework::DataLayoutToString(
+                << phi::DataLayoutToString(
                        paddle::imperative::GetDataLayout(var))
                 << " to "
-                << paddle::framework::DataLayoutToString(
+                << phi::DataLayoutToString(
                        LayoutAutoTune::Instance().GetDesiredLayout());
       }
       if (var != nullptr &&
...
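The transpose traced above is how a layout change is realized on actual data: a `transpose2` op permutes the axes so that NCHW dims come out in NHWC order. A standalone sketch of that permutation applied to a dims vector (the axis values {0, 2, 3, 1} are the usual 4-D choice, stated here as an assumption rather than read from this diff):

    #include <iostream>
    #include <vector>

    // Apply an axis permutation to a dims vector, the way a transpose2 op
    // with axis = {0, 2, 3, 1} realizes an NCHW -> NHWC layout change.
    std::vector<int> Permute(const std::vector<int>& dims,
                             const std::vector<int>& axis) {
      std::vector<int> out(dims.size());
      for (size_t i = 0; i < axis.size(); ++i) out[i] = dims[axis[i]];
      return out;
    }

    int main() {
      std::vector<int> nchw = {2, 3, 4, 5};                 // N, C, H, W
      std::vector<int> nhwc = Permute(nchw, {0, 2, 3, 1});
      for (int d : nhwc) std::cout << d << " ";             // prints "2 4 5 3"
      std::cout << "\n";
      return 0;
    }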
paddle/fluid/imperative/var_helper.cc
...
@@ -192,11 +192,11 @@ template framework::proto::VarType::Type GetDataType<VariableWrapper>(
 /* GetDataLayout */
 template <typename VarType>
-paddle::experimental::DataLayout GetDataLayout(std::shared_ptr<VarType> var) {
+phi::DataLayout GetDataLayout(std::shared_ptr<VarType> var) {
   return var->DataLayout();
 }
 template <>
-paddle::experimental::DataLayout GetDataLayout<egr::EagerVariable>(
+phi::DataLayout GetDataLayout<egr::EagerVariable>(
     std::shared_ptr<egr::EagerVariable> var) {
   if (var->Var().IsType<phi::DenseTensor>()) {
     return var->Var().Get<phi::DenseTensor>().layout();
...
@@ -209,21 +209,18 @@ paddle::experimental::DataLayout GetDataLayout<egr::EagerVariable>(
         var->name()));
   }
 }
-template paddle::experimental::DataLayout GetDataLayout<VarBase>(
+template phi::DataLayout GetDataLayout<VarBase>(
     std::shared_ptr<VarBase> var);
-template paddle::experimental::DataLayout GetDataLayout<VariableWrapper>(
+template phi::DataLayout GetDataLayout<VariableWrapper>(
     std::shared_ptr<VariableWrapper> var);
 
 /* SetDataLayout */
 template <typename VarType>
 void SetDataLayout(std::shared_ptr<VarType> var,
-                   const paddle::experimental::DataLayout layout) {
+                   const phi::DataLayout layout) {
   var->SetDataLayout(layout);
 }
 template <>
 void SetDataLayout<egr::EagerVariable>(
     std::shared_ptr<egr::EagerVariable> var,
-    const paddle::experimental::DataLayout layout) {
+    const phi::DataLayout layout) {
   if (var->Var().IsType<phi::DenseTensor>()) {
     var->MutableVar()->GetMutable<phi::DenseTensor>()->set_layout(layout);
   } else {
...
@@ -235,12 +232,10 @@ void SetDataLayout<egr::EagerVariable>(
         var->name()));
   }
 }
 template void SetDataLayout<VarBase>(
     std::shared_ptr<VarBase> var,
-    const paddle::experimental::DataLayout layout);
+    const phi::DataLayout layout);
 template void SetDataLayout<VariableWrapper>(
     std::shared_ptr<VariableWrapper> var,
-    const paddle::experimental::DataLayout layout);
+    const phi::DataLayout layout);
 
 /* CheckCachedKey */
 template <typename VarType>
...
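The var_helper.cc hunks above follow a common C++ pattern: a primary template forwards `GetDataLayout`/`SetDataLayout` to the variable's own accessor, while an explicit specialization for `egr::EagerVariable` reads the layout off the wrapped `phi::DenseTensor` instead. A compilable sketch of the pattern with toy types (the struct names below are placeholders, not Paddle classes):

    #include <iostream>
    #include <memory>

    enum class DataLayout { UNDEFINED, kNCHW, kNHWC };

    // Toy variable that stores the layout directly.
    struct VarBaseLike {
      DataLayout layout{DataLayout::UNDEFINED};
      DataLayout GetLayout() const { return layout; }
    };

    // Toy variable that keeps the layout inside a wrapped tensor.
    struct EagerLike {
      struct Tensor { DataLayout layout{DataLayout::kNCHW}; } tensor;
    };

    // Primary template: forward to the member accessor.
    template <typename VarType>
    DataLayout GetDataLayout(std::shared_ptr<VarType> var) {
      return var->GetLayout();
    }

    // Explicit specialization: read the layout off the inner tensor.
    template <>
    DataLayout GetDataLayout<EagerLike>(std::shared_ptr<EagerLike> var) {
      return var->tensor.layout;
    }

    int main() {
      auto a = std::make_shared<VarBaseLike>();
      auto b = std::make_shared<EagerLike>();
      std::cout << (GetDataLayout(a) == DataLayout::UNDEFINED)       // 1
                << (GetDataLayout(b) == DataLayout::kNCHW) << "\n";  // 1
      return 0;
    }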
paddle/fluid/imperative/var_helper.h
...
@@ -65,11 +65,10 @@ template <typename VarType>
 framework::proto::VarType::Type GetDataType(std::shared_ptr<VarType> var);
 
 template <typename VarType>
-paddle::experimental::DataLayout GetDataLayout(std::shared_ptr<VarType> var);
+phi::DataLayout GetDataLayout(std::shared_ptr<VarType> var);
 
 template <typename VarType>
 void SetDataLayout(std::shared_ptr<VarType> var,
-                   const paddle::experimental::DataLayout layout);
+                   const phi::DataLayout layout);
 
 template <typename VarType>
 const std::shared_ptr<VariableWrapper>& GetVariableWrapper(
...
paddle/fluid/imperative/variable_wrapper.h
...
@@ -187,11 +187,9 @@ class VariableWrapper {
     return fwd_data_type_;
   }
 
-  paddle::experimental::DataLayout DataLayout() { return layout_; }
-  void SetDataLayout(const paddle::experimental::DataLayout layout) {
-    layout_ = layout;
-  }
+  phi::DataLayout DataLayout() { return layout_; }
+  void SetDataLayout(const phi::DataLayout layout) { layout_ = layout; }
 
   const platform::Place Place() const {
     const phi::DenseTensor* tensor = nullptr;
...
@@ -368,8 +366,7 @@ class VariableWrapper {
   std::vector<std::shared_ptr<std::function<void()>>> void_hooks_;
   // DataLayout for layoutAutotune
-  paddle::experimental::DataLayout layout_{
-      paddle::experimental::DataLayout::UNDEFINED};
+  phi::DataLayout layout_{phi::DataLayout::UNDEFINED};
 };
 }  // namespace imperative
...
paddle/fluid/inference/api/details/zero_copy_tensor.cc
...
@@ -309,12 +309,12 @@ struct DataTypeInfo<int32_t> {
   paddle::experimental::DataType TYPE = paddle::experimental::DataType::INT32;
 };
 
-paddle::experimental::DataLayout LayoutConvert(DataLayout layout) {
+phi::DataLayout LayoutConvert(DataLayout layout) {
   PADDLE_ENFORCE_EQ(
       layout,
       DataLayout::kNCHW,
      paddle::platform::errors::InvalidArgument("Only NCHW is supported now."));
-  return paddle::experimental::DataLayout::NCHW;
+  return phi::DataLayout::NCHW;
 }
 
 template <typename T>
...
@@ -377,7 +377,7 @@ void Tensor::CopyToCpuImpl(T *data,
   if (paddle::platform::is_cpu_place(t_place)) {
 #ifdef PADDLE_WITH_MKLDNN
-    if (tensor->layout() == paddle::framework::DataLayout::kMKLDNN)
+    if (tensor->layout() == phi::DataLayout::kMKLDNN)
       paddle::framework::innerTransDataLayoutFromMKLDNN(
           tensor->layout(),
          paddle::platform::MKLDNNDeviceContext::tls()
...
@@ -664,13 +664,12 @@ std::vector<int> Tensor::shape() const {
   // mkldnn may does layout transform internally, so need to reorder before
   // return
 #ifdef PADDLE_WITH_MKLDNN
-  if (tensor->layout() == paddle::framework::DataLayout::kMKLDNN) {
-    paddle::framework::DataLayout out_layout =
+  if (tensor->layout() == phi::DataLayout::kMKLDNN) {
+    phi::DataLayout out_layout =
         paddle::platform::MKLDNNDeviceContext::tls()
             .get_cur_paddle_data_layout();
     // Set default as NCHW in case not specified
-    out_layout = out_layout == paddle::framework::DataLayout::kAnyLayout
-                     ? paddle::framework::DataLayout::kNCHW
+    out_layout = out_layout == phi::DataLayout::kAnyLayout
+                     ? phi::DataLayout::kNCHW
                      : out_layout;
     // In these data layouts, channel dimension is either on 2nd position: nChw
     // or
...
@@ -678,8 +677,8 @@ std::vector<int> Tensor::shape() const {
     // be done. Similarly for dim==1 when you have just one possible
     // combination.
     if (tensor->dims().size() < 3) return phi::vectorize<int>(tensor->dims());
-    if (out_layout == paddle::framework::DataLayout::kNHWC ||
-        out_layout == paddle::framework::DataLayout::kNDHWC) {
+    if (out_layout == phi::DataLayout::kNHWC ||
+        out_layout == phi::DataLayout::kNDHWC) {
      auto dims = phi::vectorize<int>(tensor->dims());
      std::rotate(dims.begin() + 1, dims.begin() + 2, dims.end());
      return dims;
...
@@ -853,7 +852,7 @@ void InternalUtils::CopyToCpuWithIoStream(paddle_infer::Tensor *t,
   if (paddle::platform::is_cpu_place(t_place)) {
 #ifdef PADDLE_WITH_MKLDNN
-    if (tensor->layout() == paddle::framework::DataLayout::kMKLDNN)
+    if (tensor->layout() == phi::DataLayout::kMKLDNN)
       paddle::framework::innerTransDataLayoutFromMKLDNN(
          tensor->layout(),
          paddle::platform::MKLDNNDeviceContext::tls()
...
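A note on `Tensor::shape()` above: the `std::rotate(dims.begin() + 1, dims.begin() + 2, dims.end())` call moves the channel entry from position 1 to the back, turning an NCHW dims vector into NHWC order while leaving the batch dimension in place. A small self-contained check:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
      std::vector<int> dims = {2, 3, 4, 5};  // NCHW: N=2, C=3, H=4, W=5
      // Same call as in Tensor::shape(): rotate C past H and W.
      std::rotate(dims.begin() + 1, dims.begin() + 2, dims.end());
      for (int d : dims) std::cout << d << " ";  // prints "2 4 5 3" (NHWC)
      std::cout << "\n";
      return 0;
    }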
paddle/fluid/inference/lite/tensor_utils.cc
...
@@ -106,14 +106,14 @@ framework::proto::VarType::Type GetNativePrecisionType(
   }
 }
 
-framework::DataLayout GetNativeLayoutType(const DataLayoutType& type) {
+phi::DataLayout GetNativeLayoutType(const DataLayoutType& type) {
   switch (type) {
     case DataLayoutType::kNCHW:
-      return framework::DataLayout::kNCHW;
+      return phi::DataLayout::kNCHW;
     default:
       PADDLE_THROW(platform::errors::Unimplemented(
           "Unsupported layout type. Now only supports NCHW."));
-      return static_cast<framework::DataLayout>(-1);
+      return static_cast<phi::DataLayout>(-1);
   }
 }
...
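`GetNativeLayoutType` above is a one-case mapping between Lite's `DataLayoutType` vocabulary and `phi::DataLayout`, with everything except NCHW rejected. A rough standalone sketch of that gate (toy enums; a plain exception stands in for PADDLE_THROW):

    #include <iostream>
    #include <stdexcept>

    enum class DataLayoutType { kNCHW, kNHWC };         // stand-in for Lite's enum
    enum class DataLayout { UNDEFINED, kNCHW, kNHWC };  // stand-in for phi's enum

    DataLayout GetNativeLayoutType(const DataLayoutType& type) {
      switch (type) {
        case DataLayoutType::kNCHW:
          return DataLayout::kNCHW;
        default:
          throw std::runtime_error(
              "Unsupported layout type. Now only supports NCHW.");
      }
    }

    int main() {
      std::cout << (GetNativeLayoutType(DataLayoutType::kNCHW) ==
                    DataLayout::kNCHW)
                << "\n";  // prints 1
      try {
        GetNativeLayoutType(DataLayoutType::kNHWC);  // throws, as the test below expects
      } catch (const std::exception& e) {
        std::cout << e.what() << "\n";
      }
      return 0;
    }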
paddle/fluid/inference/lite/test_tensor_utils.cc
...
@@ -68,9 +68,8 @@ TEST(LiteEngineOp, GetNativePrecisionType) {
 TEST(LiteEngineOp, GetNativeLayoutType) {
   ::testing::FLAGS_gtest_death_test_style = "threadsafe";
-  framework::DataLayout GetNativeLayoutType(const DataLayoutType& type);
+  phi::DataLayout GetNativeLayoutType(const DataLayoutType& type);
   ASSERT_EQ(GetNativeLayoutType(DataLayoutType::kNCHW),
-            framework::DataLayout::kNCHW);
+            phi::DataLayout::kNCHW);
   EXPECT_ANY_THROW(GetNativeLayoutType(DataLayoutType::kNHWC));
 }
...
paddle/fluid/inference/tensorrt/convert/bilinear_interp_v2_op.cc
...
@@ -42,7 +42,7 @@ class BilinearInterpolateV2OpConverter : public OpConverter {
     auto input = engine_->GetITensor(input_name);
 
-    auto data_layout = framework::StringToDataLayout(
+    auto data_layout = phi::StringToDataLayout(
         PADDLE_GET_CONST(std::string, op_desc.GetAttr("data_layout")));
     auto interp_method =
         PADDLE_GET_CONST(std::string, op_desc.GetAttr("interp_method"));
...
@@ -86,9 +86,8 @@ class BilinearInterpolateV2OpConverter : public OpConverter {
     // axis are different in static/dynamic mode
     bool with_dynamic = engine_->with_dynamic_shape();
-    int h_axis = (data_layout == framework::DataLayout::kNCHW) + with_dynamic;
+    int h_axis = (data_layout == phi::DataLayout::kNCHW) + with_dynamic;
     int w_axis =
-        (data_layout == framework::DataLayout::kNCHW) + 1 + with_dynamic;
+        (data_layout == phi::DataLayout::kNCHW) + 1 + with_dynamic;
 
     if (scale_w > 0. && scale_h > 0.) {
       out_h = static_cast<int>(in_dim.d[h_axis] * scale_h);
...
@@ -108,11 +107,11 @@ class BilinearInterpolateV2OpConverter : public OpConverter {
       scales.push_back(1.f);
     }
-    if (data_layout == framework::DataLayout::kNCHW) {
+    if (data_layout == phi::DataLayout::kNCHW) {
       scales.push_back(1.f);
       scales.push_back(scale_h);
       scales.push_back(scale_w);
-    } else if (data_layout == framework::DataLayout::kNHWC) {
+    } else if (data_layout == phi::DataLayout::kNHWC) {
       scales.push_back(scale_h);
       scales.push_back(scale_w);
       scales.push_back(1.f);
...
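The `h_axis`/`w_axis` arithmetic above packs two facts into one expression: with the batch dimension stripped (static shape), H sits at index 0 for NHWC and index 1 for NCHW, since the `(data_layout == kNCHW)` comparison contributes 0 or 1; dynamic-shape mode keeps the batch dimension and shifts both axes one slot further. A quick standalone check of all four cases:

    #include <iostream>

    enum class DataLayout { kNCHW, kNHWC };  // toy stand-in for phi::DataLayout

    int main() {
      for (bool with_dynamic : {false, true}) {
        for (DataLayout layout : {DataLayout::kNCHW, DataLayout::kNHWC}) {
          // Same arithmetic as the converter above.
          int h_axis = (layout == DataLayout::kNCHW) + with_dynamic;
          int w_axis = (layout == DataLayout::kNCHW) + 1 + with_dynamic;
          std::cout << "dynamic=" << with_dynamic
                    << " nchw=" << (layout == DataLayout::kNCHW)
                    << " h_axis=" << h_axis << " w_axis=" << w_axis << "\n";
        }
      }
      return 0;
    }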
paddle/fluid/inference/tensorrt/convert/nearest_interp_op.cc
...
@@ -40,8 +40,8 @@ class NearestInterpolateOpConverter : public OpConverter {
     auto input = engine_->GetITensor(input_name);
 
     auto data_layout = !op_desc.HasAttr("data_layout")
-                           ? framework::DataLayout::kNCHW
-                           : framework::StringToDataLayout(PADDLE_GET_CONST(
+                           ? phi::DataLayout::kNCHW
+                           : phi::StringToDataLayout(PADDLE_GET_CONST(
                                  std::string, op_desc.GetAttr("data_layout")));
     auto interp_method =
         PADDLE_GET_CONST(std::string, op_desc.GetAttr("interp_method"));
...
@@ -70,10 +70,8 @@ class NearestInterpolateOpConverter : public OpConverter {
     bool with_dynamic = engine_->with_dynamic_shape();
 
     if (!with_dynamic) {
-      int h_axis =
-          (data_layout == framework::DataLayout::kNCHW) + with_dynamic;
-      int w_axis =
-          (data_layout == framework::DataLayout::kNCHW) + 1 + with_dynamic;
+      int h_axis = (data_layout == phi::DataLayout::kNCHW) + with_dynamic;
+      int w_axis = (data_layout == phi::DataLayout::kNCHW) + 1 + with_dynamic;
 
       scale_h =
           static_cast<float>(out_h) / static_cast<float>(in_dim.d[h_axis]);
...
@@ -86,11 +84,11 @@ class NearestInterpolateOpConverter : public OpConverter {
       scales.push_back(1.f);
     }
-    if (data_layout == framework::DataLayout::kNCHW) {
+    if (data_layout == phi::DataLayout::kNCHW) {
       scales.push_back(1.f);
       scales.push_back(scale_h);
       scales.push_back(scale_w);
-    } else if (data_layout == framework::DataLayout::kNHWC) {
+    } else if (data_layout == phi::DataLayout::kNHWC) {
       // NHWC
       scales.push_back(scale_h);
       scales.push_back(scale_w);
...
paddle/fluid/inference/tensorrt/convert/nearest_interp_v2_op.cc
...
@@ -39,7 +39,7 @@ class NearestInterpolateV2OpConverter : public OpConverter {
     auto input = engine_->GetITensor(input_name);
 
-    auto data_layout = framework::StringToDataLayout(
+    auto data_layout = phi::StringToDataLayout(
         PADDLE_GET_CONST(std::string, op_desc.GetAttr("data_layout")));
     auto interp_method =
         PADDLE_GET_CONST(std::string, op_desc.GetAttr("interp_method"));
...
@@ -65,9 +65,8 @@ class NearestInterpolateV2OpConverter : public OpConverter {
     // axis are different in static/dynamic mode
     bool with_dynamic = engine_->with_dynamic_shape();
-    int h_axis = (data_layout == framework::DataLayout::kNCHW) + with_dynamic;
+    int h_axis = (data_layout == phi::DataLayout::kNCHW) + with_dynamic;
     int w_axis =
-        (data_layout == framework::DataLayout::kNCHW) + 1 + with_dynamic;
+        (data_layout == phi::DataLayout::kNCHW) + 1 + with_dynamic;
     scale_h =
         static_cast<float>(out_h) / static_cast<float>(in_dim.d[h_axis]);
...
@@ -82,11 +81,11 @@ class NearestInterpolateV2OpConverter : public OpConverter {
       scales.push_back(1.f);
     }
-    if (data_layout == framework::DataLayout::kNCHW) {
+    if (data_layout == phi::DataLayout::kNCHW) {
       scales.push_back(1.f);
       scales.push_back(scale_h);
       scales.push_back(scale_w);
-    } else if (data_layout == framework::DataLayout::kNHWC) {
+    } else if (data_layout == phi::DataLayout::kNHWC) {
       // NHWC
       scales.push_back(scale_h);
       scales.push_back(scale_w);
...
paddle/fluid/inference/tensorrt/op_teller.cc
...
@@ -635,9 +635,9 @@ struct SimpleOpTypeSetTeller : public Teller {
     if (op_type == "affine_channel") {
       if (!desc.HasAttr("data_layout")) return false;
-      auto data_layout = framework::StringToDataLayout(
+      auto data_layout = phi::StringToDataLayout(
           PADDLE_GET_CONST(std::string, desc.GetAttr("data_layout")));
-      if (data_layout != framework::DataLayout::kNCHW) return false;
+      if (data_layout != phi::DataLayout::kNCHW) return false;
 
       auto* block = desc.Block();
       if (block == nullptr) {
...
@@ -710,10 +710,10 @@ struct SimpleOpTypeSetTeller : public Teller {
        if (!desc.HasAttr(attr)) return false;
      }
      if (desc.HasAttr("data_layout")) {
-        auto data_layout = framework::StringToDataLayout(
+        auto data_layout = phi::StringToDataLayout(
            PADDLE_GET_CONST(std::string, desc.GetAttr("data_layout")));
-        if (data_layout != framework::DataLayout::kNCHW &&
-            data_layout != framework::DataLayout::kNHWC)
+        if (data_layout != phi::DataLayout::kNCHW &&
+            data_layout != phi::DataLayout::kNHWC)
          return false;
      }
      auto interp_method =
...
@@ -755,10 +755,10 @@ struct SimpleOpTypeSetTeller : public Teller {
      for (auto const attr : attrs) {
        if (!desc.HasAttr(attr)) return false;
      }
-      auto data_layout = framework::StringToDataLayout(
+      auto data_layout = phi::StringToDataLayout(
          PADDLE_GET_CONST(std::string, desc.GetAttr("data_layout")));
-      if (data_layout != framework::DataLayout::kNCHW &&
-          data_layout != framework::DataLayout::kNHWC)
+      if (data_layout != phi::DataLayout::kNCHW &&
+          data_layout != phi::DataLayout::kNHWC)
        return false;
      auto interp_method =
          PADDLE_GET_CONST(std::string, desc.GetAttr("interp_method"));
...
@@ -809,10 +809,10 @@ struct SimpleOpTypeSetTeller : public Teller {
        }
      }
-      auto data_layout = framework::StringToDataLayout(
+      auto data_layout = phi::StringToDataLayout(
          PADDLE_GET_CONST(std::string, desc.GetAttr("data_layout")));
-      if (data_layout != framework::DataLayout::kNCHW &&
-          data_layout != framework::DataLayout::kNHWC) {
+      if (data_layout != phi::DataLayout::kNCHW &&
+          data_layout != phi::DataLayout::kNHWC) {
        VLOG(3) << "The op_type " << op_type
                << " is not NCHW or NHWC return false";
        return false;
...
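Each op_teller check above follows the same gate: read the `data_layout` string attribute, convert it, and reject the op unless the result is NCHW (or NHWC where allowed). A compact sketch of that gate with a hand-rolled conversion (the real code calls `phi::StringToDataLayout`; the string spellings below are assumptions):

    #include <iostream>
    #include <string>

    enum class DataLayout { UNDEFINED, kNCHW, kNHWC };  // toy stand-in

    // Illustrative stand-in for phi::StringToDataLayout.
    DataLayout StringToDataLayout(const std::string& s) {
      if (s == "NCHW") return DataLayout::kNCHW;
      if (s == "NHWC") return DataLayout::kNHWC;
      return DataLayout::UNDEFINED;
    }

    // Mirrors the teller pattern: bail out on unsupported layouts.
    bool LayoutSupported(const std::string& attr) {
      DataLayout data_layout = StringToDataLayout(attr);
      if (data_layout != DataLayout::kNCHW &&
          data_layout != DataLayout::kNHWC)
        return false;
      return true;
    }

    int main() {
      std::cout << LayoutSupported("NCHW")    // prints 1
                << LayoutSupported("NDHWC")   // prints 0
                << "\n";
      return 0;
    }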
paddle/fluid/inference/tensorrt/plugin/group_norm_op_plugin.cu
...
@@ -23,7 +23,7 @@ namespace paddle {
 namespace inference {
 namespace tensorrt {
 namespace plugin {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 
 int GroupNormPlugin::initialize() TRT_NOEXCEPT { return 0; }
...
paddle/fluid/operators/activation_op.cc
...
@@ -119,13 +119,13 @@ class ActivationOp : public framework::OperatorWithKernel {
    // When activation is first oneDNN op (there was some non oneDNN op
    // previously)
    // then we also need to rotate shape NHWC -> NCWH
-    if ((expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
-        (tensor.layout() != framework::DataLayout::kMKLDNN) &&
+    if ((expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+        (tensor.layout() != phi::DataLayout::kMKLDNN) &&
        paddle::platform::MKLDNNDeviceContext::tls()
-                .get_cur_paddle_data_layout() == framework::DataLayout::kNHWC) {
+                .get_cur_paddle_data_layout() == phi::DataLayout::kNHWC) {
      return framework::OpKernelType(expected_kernel_type.data_type_,
                                     tensor.place(),
-                                     framework::DataLayout::kNHWC);
+                                     phi::DataLayout::kNHWC);
    }
 #endif
    return framework::OpKernelType(
...
@@ -1269,7 +1269,7 @@ class LogitOp : public framework::OperatorWithKernel {
   framework::OpKernelType GetExpectedKernelType(
       const framework::ExecutionContext& ctx) const override {
     framework::LibraryType library{framework::LibraryType::kPlain};
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     auto data_type = OperatorWithKernel::IndicateVarDataType(ctx, "X");
     return framework::OpKernelType(data_type, ctx.GetPlace(), layout, library);
...
@@ -1304,7 +1304,7 @@ class LogitGradOp : public framework::OperatorWithKernel {
   framework::OpKernelType GetExpectedKernelType(
       const framework::ExecutionContext& ctx) const override {
     framework::LibraryType library{framework::LibraryType::kPlain};
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     auto data_type = OperatorWithKernel::IndicateVarDataType(ctx, "X");
     return framework::OpKernelType(data_type, ctx.GetPlace(), layout, library);
   }
...
paddle/fluid/operators/affine_channel_op.cc
...
@@ -70,12 +70,12 @@ class AffineChannelOp : public framework::OperatorWithKernel {
     auto x_dims = ctx->GetInputDim("X");
     auto scale_dims = ctx->GetInputDim("Scale");
     auto b_dims = ctx->GetInputDim("Bias");
-    const framework::DataLayout data_layout = framework::StringToDataLayout(
-        ctx->Attrs().Get<std::string>("data_layout"));
+    const phi::DataLayout data_layout =
+        phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
 
-    const int64_t C = (data_layout == framework::DataLayout::kNCHW
-                           ? x_dims[1]
-                           : x_dims[x_dims.size() - 1]);
+    const int64_t C =
+        (data_layout == phi::DataLayout::kNCHW ? x_dims[1]
+                                               : x_dims[x_dims.size() - 1]);
 
     PADDLE_ENFORCE_EQ(
         scale_dims.size(),
...
@@ -195,13 +195,12 @@ class AffineChannelKernel : public framework::OpKernel<T> {
     auto* y = ctx.Output<phi::DenseTensor>("Out");
     y->mutable_data<T>(ctx.GetPlace());
 
-    const framework::DataLayout layout =
-        framework::StringToDataLayout(ctx.Attr<std::string>("data_layout"));
+    const phi::DataLayout layout =
+        phi::StringToDataLayout(ctx.Attr<std::string>("data_layout"));
 
     auto dims = x->dims();
     int N = dims[0];
-    int C = layout == framework::DataLayout::kNCHW ? dims[1]
-                                                   : dims[dims.size() - 1];
+    int C = layout == phi::DataLayout::kNCHW ? dims[1]
+                                             : dims[dims.size() - 1];
     int HxW = x->numel() / N / C;
 
     auto* scale_d = scale->data<T>();
...
@@ -211,7 +210,7 @@ class AffineChannelKernel : public framework::OpKernel<T> {
     auto* x_d = x->data<T>();
     auto* y_d = y->data<T>();
-    if (layout == framework::DataLayout::kNCHW) {
+    if (layout == phi::DataLayout::kNCHW) {
       int stride = C * HxW;
       for (int i = 0; i < N; i++) {
         ConstEigenArrayMap<T> x_e(x_d, HxW, C);
...
@@ -242,13 +241,12 @@ class AffineChannelGradKernel : public framework::OpKernel<T> {
         ctx.Output<phi::DenseTensor>(framework::GradVarName("Scale"));
     auto* dbias = ctx.Output<phi::DenseTensor>(framework::GradVarName("Bias"));
 
-    const framework::DataLayout layout =
-        framework::StringToDataLayout(ctx.Attr<std::string>("data_layout"));
+    const phi::DataLayout layout =
+        phi::StringToDataLayout(ctx.Attr<std::string>("data_layout"));
     auto dims = x->dims();
     int N = dims[0];
-    int C = layout == framework::DataLayout::kNCHW ? dims[1]
-                                                   : dims[dims.size() - 1];
+    int C = layout == phi::DataLayout::kNCHW ? dims[1]
                                              : dims[dims.size() - 1];
     int HxW = x->numel() / N / C;
 
     auto* dy_d = dy->data<T>();
...
@@ -261,7 +259,7 @@ class AffineChannelGradKernel : public framework::OpKernel<T> {
     EigenVectorArrayMap<T> dscale_e(dscale_d, C);
     EigenVectorArrayMap<T> dbias_e(dbias_d, C);
-    if (layout == framework::DataLayout::kNCHW) {
+    if (layout == phi::DataLayout::kNCHW) {
       // compute dscale and dbias
       int stride = C * HxW;
       auto* original_dy_d = dy_d;
...
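Both affine_channel kernels recover their loop bounds the same way: C is dims[1] for NCHW and the last entry for NHWC, and HxW = numel / N / C is the per-channel spatial size. A standalone check of that bookkeeping:

    #include <iostream>
    #include <vector>

    enum class DataLayout { kNCHW, kNHWC };  // toy stand-in for phi::DataLayout

    int main() {
      std::vector<long> dims = {2, 3, 4, 5};  // C=3 if NCHW, C=5 if NHWC
      long numel = 2 * 3 * 4 * 5;
      for (DataLayout layout : {DataLayout::kNCHW, DataLayout::kNHWC}) {
        long N = dims[0];
        // Same selection as the kernels above and below.
        long C = layout == DataLayout::kNCHW ? dims[1] : dims[dims.size() - 1];
        long HxW = numel / N / C;
        std::cout << "C=" << C << " HxW=" << HxW << "\n";  // C=3 HxW=20; C=5 HxW=12
      }
      return 0;
    }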
paddle/fluid/operators/affine_channel_op.cu
...
@@ -28,7 +28,7 @@ namespace cub = hipcub;
 namespace paddle {
 namespace operators {
 
-template <typename T, framework::DataLayout layout, bool HasBias>
+template <typename T, phi::DataLayout layout, bool HasBias>
 __global__ void KeAffineChannelCUDA(const T* x,
                                     const T* scale,
                                     const T* bias,
...
@@ -39,7 +39,7 @@ __global__ void KeAffineChannelCUDA(const T* x,
   int gid = blockIdx.x * blockDim.x + threadIdx.x;
   int stride = blockDim.x * gridDim.x;
   for (int i = gid; i < num; i += stride) {
-    const int c = layout == framework::DataLayout::kNCHW ? i / HxW % C : i % C;
+    const int c = layout == phi::DataLayout::kNCHW ? i / HxW % C : i % C;
     if (HasBias) {
       y[i] = scale[c] * x[i] + bias[c];
     } else {
...
@@ -59,15 +59,14 @@ class AffineChannelCUDAKernel : public framework::OpKernel<T> {
     auto* y = ctx.Output<phi::DenseTensor>("Out");
     y->mutable_data<T>(ctx.GetPlace());
 
-    const framework::DataLayout layout =
-        framework::StringToDataLayout(ctx.Attr<std::string>("data_layout"));
+    const phi::DataLayout layout =
+        phi::StringToDataLayout(ctx.Attr<std::string>("data_layout"));
     auto& dev_ctx = ctx.template device_context<DeviceContext>();
 
     auto dims = x->dims();
     const int num = x->numel();
     int N = dims[0];
-    int C = layout == framework::DataLayout::kNCHW ? dims[1]
-                                                   : dims[dims.size() - 1];
+    int C = layout == phi::DataLayout::kNCHW ? dims[1]
                                              : dims[dims.size() - 1];
     int HxW = num / N / C;
 
     const T* x_d = x->data<T>();
...
@@ -84,19 +83,19 @@ class AffineChannelCUDAKernel : public framework::OpKernel<T> {
     int max_threads = dev_ctx.GetMaxPhysicalThreadCount();
     grid = std::min(std::max(max_threads / block, 1), grid);
-    if (layout == framework::DataLayout::kNCHW) {
-      KeAffineChannelCUDA<T, framework::DataLayout::kNCHW, true>
+    if (layout == phi::DataLayout::kNCHW) {
+      KeAffineChannelCUDA<T, phi::DataLayout::kNCHW, true>
           <<<grid, block, 0, dev_ctx.stream()>>>(
               x_d, scale_d, bias_d, C, HxW, num, y_d);
     } else {
-      KeAffineChannelCUDA<T, framework::DataLayout::kNHWC, true>
+      KeAffineChannelCUDA<T, phi::DataLayout::kNHWC, true>
          <<<grid, block, 0, dev_ctx.stream()>>>(
              x_d, scale_d, bias_d, C, HxW, num, y_d);
    }
  }
 };
 
-template <typename T, int BlockDim, framework::DataLayout layout>
+template <typename T, int BlockDim, phi::DataLayout layout>
 __global__ void AffineChannelScaleBiasGradientCUDAKernel(const T* dy,
                                                          const T* x,
                                                          const int N,
...
@@ -114,7 +113,7 @@ __global__ void AffineChannelScaleBiasGradientCUDAKernel(const T* dy,
   T ds_sum = 0;
   T db_sum = 0;
   for (int j = threadIdx.x; j < inner_size; j += blockDim.x) {
-    const int index = layout == framework::DataLayout::kNCHW
+    const int index = layout == phi::DataLayout::kNCHW
                           ? (j / HxW * C + i) * HxW + j % HxW
                           : j * outer_size + i;
     ds_sum += dy[index] * x[index];
...
@@ -147,15 +146,14 @@ class AffineChannelGradCUDAKernel : public framework::OpKernel<T> {
         ctx.Output<phi::DenseTensor>(framework::GradVarName("Scale"));
     auto* dbias = ctx.Output<phi::DenseTensor>(framework::GradVarName("Bias"));
 
-    const framework::DataLayout layout =
-        framework::StringToDataLayout(ctx.Attr<std::string>("data_layout"));
+    const phi::DataLayout layout =
+        phi::StringToDataLayout(ctx.Attr<std::string>("data_layout"));
     auto& dev_ctx = ctx.template device_context<DeviceContext>();
 
     auto dims = dy->dims();
     const int num = dy->numel();
     int N = dims[0];
-    int C = layout == framework::DataLayout::kNCHW ? dims[1]
-                                                   : dims[dims.size() - 1];
+    int C = layout == phi::DataLayout::kNCHW ? dims[1]
                                              : dims[dims.size() - 1];
     int HxW = num / N / C;
 
     const T* dy_d = dy->data<T>();
...
@@ -174,17 +172,17 @@ class AffineChannelGradCUDAKernel : public framework::OpKernel<T> {
     const int max_blocks = std::max(max_threads / block, 1);
     int grid1 = (num + block - 1) / block;
     int grid2 = std::min(C, max_blocks);
-    if (layout == framework::DataLayout::kNCHW) {
+    if (layout == phi::DataLayout::kNCHW) {
       if (dscale && dbias) {
         const T* x_d = x->data<T>();
         AffineChannelScaleBiasGradientCUDAKernel<T,
                                                  block,
-                                                 framework::DataLayout::kNCHW>
+                                                 phi::DataLayout::kNCHW>
             <<<grid2, block, 0, dev_ctx.stream()>>>(
                 dy_d, x_d, N, C, HxW, ds_d, db_d);
       }
       if (dx) {
-        KeAffineChannelCUDA<T, framework::DataLayout::kNCHW, false>
+        KeAffineChannelCUDA<T, phi::DataLayout::kNCHW, false>
            <<<grid1, block, 0, dev_ctx.stream()>>>(
                dy_d, s_d, nullptr, C, HxW, num, dx_d);
      }
...
@@ -193,13 +191,13 @@ class AffineChannelGradCUDAKernel : public framework::OpKernel<T> {
         const T* x_d = x->data<T>();
         AffineChannelScaleBiasGradientCUDAKernel<T,
                                                  block,
-                                                 framework::DataLayout::kNHWC>
+                                                 phi::DataLayout::kNHWC>
             <<<grid2, block, 0, dev_ctx.stream()>>>(
                 dy_d, x_d, N, C, HxW, ds_d, db_d);
       }
       if (dx) {
-        KeAffineChannelCUDA<T, framework::DataLayout::kNHWC, false>
+        KeAffineChannelCUDA<T, phi::DataLayout::kNHWC, false>
            <<<grid1, block, 0, dev_ctx.stream()>>>(
                dy_d, s_d, nullptr, C, HxW, num, dx_d);
);
}
}
...
...
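The kernels above select their NCHW/NHWC indexing at compile time by passing the layout as a non-type template parameter, which is why every KeAffineChannelCUDA instantiation had to be touched when the enum moved namespaces. A minimal standalone sketch of that dispatch pattern, using a local DataLayout enum in place of Paddle's phi::DataLayout so it compiles without any Paddle headers:

    #include <cstdio>

    // Stand-in for phi::DataLayout; only the two layouts used here.
    enum class DataLayout { kNCHW, kNHWC };

    // The layout is a compile-time parameter, so the branch below is
    // resolved per instantiation and costs nothing inside a hot loop.
    template <DataLayout layout>
    int ChannelIndex(int i, int C, int HxW) {
      if (layout == DataLayout::kNCHW) {
        return (i / HxW) % C;  // NCHW: channel changes every HxW elements
      } else {
        return i % C;          // NHWC: channel is the innermost dimension
      }
    }

    int main() {
      const int C = 3, HxW = 4;
      // Same flat index, different channel depending on the layout parameter.
      std::printf("NCHW channel of element 5: %d\n",
                  ChannelIndex<DataLayout::kNCHW>(5, C, HxW));  // prints 1
      std::printf("NHWC channel of element 5: %d\n",
                  ChannelIndex<DataLayout::kNHWC>(5, C, HxW));  // prints 2
      return 0;
    }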
paddle/fluid/operators/affine_channel_op_xpu.cc
...
@@ -36,13 +36,12 @@ class AffineChannelXPUKernel : public framework::OpKernel<T> {
     auto* y = ctx.Output<phi::DenseTensor>("Out");
     y->mutable_data<T>(ctx.GetPlace());

-    const framework::DataLayout layout =
-        framework::StringToDataLayout(ctx.Attr<std::string>("data_layout"));
+    const phi::DataLayout layout =
+        phi::StringToDataLayout(ctx.Attr<std::string>("data_layout"));

     auto dims = x->dims();
     int N = dims[0];
-    int C = layout == framework::DataLayout::kNCHW ? dims[1]
-                                                   : dims[dims.size() - 1];
+    int C = layout == phi::DataLayout::kNCHW ? dims[1]
+                                             : dims[dims.size() - 1];
     int HxW = x->numel() / N / C;

     auto* scale_d = scale->data<T>();
...
@@ -53,7 +52,7 @@ class AffineChannelXPUKernel : public framework::OpKernel<T> {
     auto& dev_ctx = ctx.template device_context<DeviceContext>();
     std::vector<int> x_shape;
     std::vector<int> b_shape;
-    if (layout == framework::DataLayout::kNCHW) {
+    if (layout == phi::DataLayout::kNCHW) {
       x_shape.push_back(N);
       x_shape.push_back(C);
       x_shape.push_back(HxW);
...
@@ -99,13 +98,12 @@ class AffineChannelGradXPUKernel : public framework::OpKernel<T> {
         ctx.Output<phi::DenseTensor>(framework::GradVarName("Scale"));
     auto* dbias = ctx.Output<phi::DenseTensor>(framework::GradVarName("Bias"));

-    const framework::DataLayout layout =
-        framework::StringToDataLayout(ctx.Attr<std::string>("data_layout"));
+    const phi::DataLayout layout =
+        phi::StringToDataLayout(ctx.Attr<std::string>("data_layout"));

     auto dims = x->dims();
     int N = dims[0];
-    int C = layout == framework::DataLayout::kNCHW ? dims[1]
-                                                   : dims[dims.size() - 1];
+    int C = layout == phi::DataLayout::kNCHW ? dims[1]
+                                             : dims[dims.size() - 1];
     int HxW = x->numel() / N / C;

     auto* dy_d = dy->data<T>();
...
@@ -119,7 +117,7 @@ class AffineChannelGradXPUKernel : public framework::OpKernel<T> {
     std::vector<int> x_shape;
     std::vector<int> b_shape;
     std::vector<int> rdims;
-    if (layout == framework::DataLayout::kNCHW) {
+    if (layout == phi::DataLayout::kNCHW) {
       x_shape.push_back(N);
       x_shape.push_back(C);
       x_shape.push_back(HxW);
...
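Both XPU kernels above derive the channel count from the attribute-driven layout rather than from the tensor itself. A small self-contained sketch of that C/HxW split, with a local enum standing in for phi::DataLayout (assumed names, not Paddle's API):

    #include <cassert>
    #include <vector>

    enum class DataLayout { kNCHW, kNHWC };

    // Mirrors the pattern in the kernels above: channels sit at dims[1]
    // for NCHW and at the last position for NHWC; HxW is whatever remains.
    void SplitDims(const std::vector<int>& dims, DataLayout layout,
                   int* N, int* C, int* HxW) {
      int numel = 1;
      for (int d : dims) numel *= d;
      *N = dims[0];
      *C = layout == DataLayout::kNCHW ? dims[1] : dims[dims.size() - 1];
      *HxW = numel / *N / *C;
    }

    int main() {
      int N, C, HxW;
      SplitDims({2, 3, 4, 5}, DataLayout::kNCHW, &N, &C, &HxW);
      assert(N == 2 && C == 3 && HxW == 20);
      SplitDims({2, 4, 5, 3}, DataLayout::kNHWC, &N, &C, &HxW);
      assert(N == 2 && C == 3 && HxW == 20);
      return 0;
    }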
paddle/fluid/operators/affine_grid_op.cc
...
@@ -142,7 +142,7 @@ class AffineGridOp : public framework::OperatorWithKernel {
 #endif
     auto data_type = OperatorWithKernel::IndicateVarDataType(ctx, "Theta");
     return framework::OpKernelType(
-        data_type, ctx.GetPlace(), framework::DataLayout::kAnyLayout, library);
+        data_type, ctx.GetPlace(), phi::DataLayout::kAnyLayout, library);
   }
 };
...
@@ -261,7 +261,7 @@ class AffineGridOpGrad : public framework::OperatorWithKernel {
     return framework::OpKernelType(OperatorWithKernel::IndicateVarDataType(
                                        ctx, framework::GradVarName("Output")),
                                    ctx.GetPlace(),
-                                   framework::DataLayout::kAnyLayout,
+                                   phi::DataLayout::kAnyLayout,
                                    library_);
   }
 };
...
paddle/fluid/operators/batch_norm_op.cc
...
@@ -76,8 +76,8 @@ void BatchNormOp::InferShape(framework::InferShapeContext *ctx) const {
                           x_dims));
   }

-  const DataLayout data_layout = framework::StringToDataLayout(
-      ctx->Attrs().Get<std::string>("data_layout"));
+  const DataLayout data_layout =
+      phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));

   if (ctx->IsRuntime() && ctx->HasInput("MomentumTensor")) {
     auto mom = ctx->Inputs("MomentumTensor");
...
@@ -208,15 +208,15 @@ framework::OpKernelType BatchNormOp::GetKernelTypeForVar(
   // Only input require reshaping, weights and
   // bias are having shape in NCHW order
   if ((var_name == "X") &&
-      (expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
-      (tensor.layout() != framework::DataLayout::kMKLDNN)) {
+      (expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+      (tensor.layout() != phi::DataLayout::kMKLDNN)) {
     auto attrs = Attrs();
     auto ar = paddle::framework::AttrReader(attrs);
     const std::string data_layout = ar.Get<std::string>("data_layout");
-    auto dl = framework::StringToDataLayout(data_layout);
+    auto dl = phi::StringToDataLayout(data_layout);
     // Some models may have intentionally set "AnyLayout" for pool
     // op. Treat this as NCHW (default data_format value)
-    if (dl != framework::DataLayout::kAnyLayout) {
+    if (dl != phi::DataLayout::kAnyLayout) {
       return framework::OpKernelType(
           expected_kernel_type.data_type_, tensor.place(), dl);
     }
...
@@ -350,8 +350,8 @@ void BatchNormGradOp::InferShape(framework::InferShapeContext *ctx) const {
   OP_INOUT_CHECK(ctx->HasInput("X"), "Input", "X", "BatchNormGrad");

   const auto x_dims = ctx->GetInputDim("X");
-  const DataLayout data_layout = framework::StringToDataLayout(
-      ctx->Attrs().Get<std::string>("data_layout"));
+  const DataLayout data_layout =
+      phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));

   const int C =
       ((ctx->IsRunMKLDNNKernel() == true) || (data_layout == DataLayout::kNCHW)
...
@@ -398,15 +398,15 @@ framework::OpKernelType BatchNormGradOp::GetKernelTypeForVar(
   // Only input require reshaping, weights and
   // bias are having shape in NCHW order
   if (((var_name == "X") || (var_name == framework::GradVarName("Y"))) &&
-      (expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
-      (tensor.layout() != framework::DataLayout::kMKLDNN)) {
+      (expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+      (tensor.layout() != phi::DataLayout::kMKLDNN)) {
     auto attrs = Attrs();
     auto ar = paddle::framework::AttrReader(attrs);
     const std::string data_layout = ar.Get<std::string>("data_layout");
-    auto dl = framework::StringToDataLayout(data_layout);
+    auto dl = phi::StringToDataLayout(data_layout);
     // Some models may have intentionally set "AnyLayout" for pool
     // op. Treat this as NCHW (default data_format value)
-    if (dl != framework::DataLayout::kAnyLayout) {
+    if (dl != phi::DataLayout::kAnyLayout) {
       return framework::OpKernelType(
           expected_kernel_type.data_type_, tensor.place(), dl);
     }
...
@@ -492,8 +492,8 @@ void BatchNormDoubleGradOp::InferShape(
   OP_INOUT_CHECK(ctx->HasOutput("DX"), "Output", "DX", "BatchNormDoubleGrad");

   const auto x_dims = ctx->GetInputDim("X");
-  const DataLayout data_layout = framework::StringToDataLayout(
-      ctx->Attrs().Get<std::string>("data_layout"));
+  const DataLayout data_layout =
+      phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
   const int C =
       ((ctx->IsRunMKLDNNKernel() == true) || (data_layout == DataLayout::kNCHW)
            ? x_dims[1]
...
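The GetKernelTypeForVar hunks in this file (and in conv_op.cc below) repeat one idiom: when the expected kernel is MKL-DNN but the incoming tensor is not, fall back to the layout named by the data_layout attribute unless it is AnyLayout. A hedged sketch of just that decision, with a local enum and a toy StringToDataLayout; the real conversions live in phi:

    #include <stdexcept>
    #include <string>

    enum class DataLayout { kNCHW, kNHWC, kAnyLayout, kMKLDNN };

    // Toy version of phi::StringToDataLayout for the three values used here.
    DataLayout StringToDataLayout(const std::string& s) {
      if (s == "NCHW") return DataLayout::kNCHW;
      if (s == "NHWC") return DataLayout::kNHWC;
      if (s == "AnyLayout") return DataLayout::kAnyLayout;
      throw std::invalid_argument("unknown layout: " + s);
    }

    // Returns the layout the variable should be transformed to, following
    // the batch_norm/conv pattern above: honour the attribute unless it is
    // AnyLayout, which is treated as NCHW (the default data_format value).
    DataLayout PickVarLayout(DataLayout expected, DataLayout tensor_layout,
                             const std::string& data_layout_attr) {
      if (expected == DataLayout::kMKLDNN &&
          tensor_layout != DataLayout::kMKLDNN) {
        DataLayout dl = StringToDataLayout(data_layout_attr);
        if (dl != DataLayout::kAnyLayout) return dl;
      }
      return expected;
    }

    int main() {
      DataLayout dl =
          PickVarLayout(DataLayout::kMKLDNN, DataLayout::kNCHW, "NHWC");
      return dl == DataLayout::kNHWC ? 0 : 1;  // attribute wins here
    }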
paddle/fluid/operators/batch_norm_op.cu
...
@@ -35,7 +35,7 @@ namespace paddle {
 namespace operators {

 using Tensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 template <typename T>
 using CudnnDataType = platform::CudnnDataType<T>;
 template <typename T>
...
paddle/fluid/operators/batch_norm_op.h
...
@@ -29,7 +29,7 @@ namespace operators {

 using Tensor = phi::DenseTensor;
 using LoDTensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;

 template <typename T>
 using EigenArrayMap =
...
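Several of the files in this commit only touch a using-alias like the one above. Because all downstream code refers to the unqualified DataLayout, retargeting one alias migrates the whole translation unit; a tiny sketch of why, with hypothetical namespaces rather than Paddle's:

    #include <iostream>

    namespace old_ns { enum class DataLayout { kNCHW, kNHWC }; }
    namespace new_ns { enum class DataLayout { kNCHW, kNHWC }; }

    // Before the commit this alias pointed at old_ns::DataLayout; flipping
    // it to new_ns is the only edit needed, every use below stays unchanged.
    using DataLayout = new_ns::DataLayout;

    const char* Describe(DataLayout dl) {
      return dl == DataLayout::kNCHW ? "channels-first" : "channels-last";
    }

    int main() {
      std::cout << Describe(DataLayout::kNHWC) << "\n";  // channels-last
      return 0;
    }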
paddle/fluid/operators/batch_norm_op_mlu.cc
...
@@ -36,7 +36,7 @@ class MLUBatchNormOpKernel : public framework::OpKernel<T> {
     bool global_stats = test_mode || use_global_stats;

     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+    DataLayout data_layout = phi::StringToDataLayout(data_layout_str);

     const auto* x = ctx.Input<phi::DenseTensor>("X");
     const auto& x_dims = x->dims();
...
@@ -173,7 +173,7 @@ class MLUBatchNormGradOpKernel : public framework::OpKernel<T> {
     bool use_global_stats = ctx.Attr<bool>("use_global_stats");
     const bool is_test = ctx.Attr<bool>("is_test");
     const float epsilon = ctx.Attr<float>("epsilon");
-    DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+    DataLayout data_layout = phi::StringToDataLayout(data_layout_str);

     auto* d_x = ctx.Output<phi::DenseTensor>(framework::GradVarName("X"));
     auto* d_scale =
...
paddle/fluid/operators/batch_norm_op_npu.cc
...
@@ -34,7 +34,7 @@ class NPUBatchNormOpKernel : public framework::OpKernel<T> {
     bool training = !test_mode && !use_global_stats;

     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+    DataLayout data_layout = phi::StringToDataLayout(data_layout_str);

     const auto* x = ctx.Input<phi::DenseTensor>("X");
     const auto& x_dims = x->dims();
...
@@ -149,7 +149,7 @@ class NPUBatchNormGradOpKernel : public framework::OpKernel<T> {
     bool use_global_stats = ctx.Attr<bool>("use_global_stats");
     const bool is_test = ctx.Attr<bool>("is_test");
     const float epsilon = ctx.Attr<float>("epsilon");
-    DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+    DataLayout data_layout = phi::StringToDataLayout(data_layout_str);

     auto* d_x = ctx.Output<phi::DenseTensor>(framework::GradVarName("X"));
     auto* d_scale =
...
paddle/fluid/operators/bilateral_slice_op.cc
...
@@ -20,7 +20,7 @@
 namespace paddle {
 namespace operators {

-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;

 class BilateralSliceOp : public framework::OperatorWithKernel {
  public:
...
paddle/fluid/operators/bilateral_slice_op.cu
...
@@ -19,7 +19,7 @@
 namespace paddle {
 namespace operators {

-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;

 template <typename T>
 __device__ T DiffAbs(T x) {
...
paddle/fluid/operators/cast_op.cc
...
@@ -112,7 +112,7 @@ class CastOp : public framework::OperatorWithKernel {
       return framework::OpKernelType(
           framework::TransToProtoVarType(tensor->dtype()),
           ctx.GetPlace(),
-          framework::DataLayout::kMKLDNN,
+          phi::DataLayout::kMKLDNN,
           framework::LibraryType::kMKLDNN);
     }
 #endif
...
paddle/fluid/operators/controlflow/fetch_op.cc
...
@@ -29,7 +29,7 @@ static void DataCopy(const phi::DenseTensor &src_item,
   if (src_item.IsInitialized() && src_item.numel() > 0) {
 #ifdef PADDLE_WITH_MKLDNN
     // Conversion from MKL-DNN to Paddle
-    if (src_item.layout() == framework::DataLayout::kMKLDNN) {
+    if (src_item.layout() == phi::DataLayout::kMKLDNN) {
       phi::DenseTensor out;
       // Convert to desired Paddle layout, apart from grads of filter
       // as params are not a subject to paddle's data_format
...
@@ -37,7 +37,7 @@ static void DataCopy(const phi::DenseTensor &src_item,
       framework::innerTransDataLayoutFromMKLDNN(
           src_item.layout(),
           fetch_var_name == framework::GradVarName("Filter")
-              ? framework::DataLayout::kNCHW
+              ? phi::DataLayout::kNCHW
              : paddle::platform::MKLDNNDeviceContext::tls()
                    .get_cur_paddle_data_layout(),
          src_item,
...
paddle/fluid/operators/controlflow/fetch_v2_op.cc
...
@@ -37,14 +37,14 @@ static void DeepCopy(const phi::DenseTensor &src_item,
   if (src_item.IsInitialized() && src_item.numel() > 0) {
 #ifdef PADDLE_WITH_MKLDNN
     // Conversion from MKL-DNN to Paddle
-    if (src_item.layout() == framework::DataLayout::kMKLDNN) {
+    if (src_item.layout() == phi::DataLayout::kMKLDNN) {
       phi::DenseTensor out;
       // Convert to desired Paddle layout, apart from grads of filter
       // as params are not a subject to paddle's data_format
       framework::innerTransDataLayoutFromMKLDNN(
           src_item.layout(),
           fetch_var_name == framework::GradVarName("Filter")
-              ? framework::DataLayout::kNCHW
+              ? phi::DataLayout::kNCHW
              : paddle::platform::MKLDNNDeviceContext::tls()
                    .get_cur_paddle_data_layout(),
          src_item,
...
paddle/fluid/operators/conv_op.cc
...
@@ -222,7 +222,7 @@ framework::OpKernelType ConvOp::GetExpectedKernelType(
 #endif  // PADDLE_WITH_CUDA
     return framework::OpKernelType(input_data_type,
                                    ctx.GetPlace(),
-                                   framework::DataLayout::kAnyLayout,
+                                   phi::DataLayout::kAnyLayout,
                                    framework::LibraryType::kCUDNN);
   }
 #endif  // PADDLE_WITH_CUDA || PADDLE_WITH_HIP
...
@@ -231,7 +231,7 @@ framework::OpKernelType ConvOp::GetExpectedKernelType(
   if (this->CanMKLDNNBeUsed(ctx, input_data_type)) {
     return framework::OpKernelType(input_data_type,
                                    ctx.GetPlace(),
-                                   framework::DataLayout::kMKLDNN,
+                                   phi::DataLayout::kMKLDNN,
                                    framework::LibraryType::kMKLDNN);
   }
 #endif
...
@@ -247,15 +247,15 @@ framework::OpKernelType ConvOp::GetKernelTypeForVar(
   // Only input require reshaping, weights and
   // bias are having shape in NCHW order
   if ((var_name == "Input") &&
-      (expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
-      (tensor.layout() != framework::DataLayout::kMKLDNN)) {
+      (expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+      (tensor.layout() != phi::DataLayout::kMKLDNN)) {
     auto attrs = Attrs();
     auto ar = paddle::framework::AttrReader(attrs);
     const std::string data_format = ar.Get<std::string>("data_format");
-    auto dl = framework::StringToDataLayout(data_format);
+    auto dl = phi::StringToDataLayout(data_format);
     // Some models may have intentionally set "AnyLayout" for conv
     // op. Treat this as NCHW (default data_format value)
-    if (dl != framework::DataLayout::kAnyLayout) {
+    if (dl != phi::DataLayout::kAnyLayout) {
       return framework::OpKernelType(
           expected_kernel_type.data_type_, tensor.place(), dl);
     }
...
@@ -490,7 +490,7 @@ framework::OpKernelType ConvOpGrad::GetExpectedKernelType(
   if (platform::CanCUDNNBeUsed(ctx)) {
     return framework::OpKernelType(data_type,
                                    ctx.GetPlace(),
-                                   framework::DataLayout::kAnyLayout,
+                                   phi::DataLayout::kAnyLayout,
                                    framework::LibraryType::kCUDNN);
   }
 #endif
...
@@ -498,7 +498,7 @@ framework::OpKernelType ConvOpGrad::GetExpectedKernelType(
   if (this->CanMKLDNNBeUsed(ctx, data_type)) {
     return framework::OpKernelType(data_type,
                                    ctx.GetPlace(),
-                                   framework::DataLayout::kMKLDNN,
+                                   phi::DataLayout::kMKLDNN,
                                    framework::LibraryType::kMKLDNN);
   }
 #endif
...
@@ -515,15 +515,15 @@ framework::OpKernelType ConvOpGrad::GetKernelTypeForVar(
   // bias are having shape in NCHW order
   if (((var_name == "Input") ||
        (var_name == framework::GradVarName("Output"))) &&
-      (expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
-      (tensor.layout() != framework::DataLayout::kMKLDNN)) {
+      (expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+      (tensor.layout() != phi::DataLayout::kMKLDNN)) {
     auto attrs = Attrs();
     auto ar = paddle::framework::AttrReader(attrs);
     const std::string data_format = ar.Get<std::string>("data_format");
-    auto dl = framework::StringToDataLayout(data_format);
+    auto dl = phi::StringToDataLayout(data_format);
     // Some models may have intentionally set "AnyLayout" for pool
     // op. Treat this as NCHW (default data_format value)
-    if (dl != framework::DataLayout::kAnyLayout) {
+    if (dl != phi::DataLayout::kAnyLayout) {
       return framework::OpKernelType(
           expected_kernel_type.data_type_, tensor.place(), dl);
     }
...
@@ -677,7 +677,7 @@ framework::OpKernelType ConvOpDoubleGrad::GetExpectedKernelType(
       framework::OpKernelType::kDefaultCustomizedTypeValue;
   framework::LibraryType library_{framework::LibraryType::kPlain};
   std::string data_format = "AnyLayout";
-  framework::DataLayout layout_ = framework::StringToDataLayout(data_format);
+  phi::DataLayout layout_ = phi::StringToDataLayout(data_format);
 #if defined(PADDLE_WITH_CUDA) || defined(PADDLE_WITH_HIP)
   if (platform::CanCUDNNBeUsed(ctx)) {
...
paddle/fluid/operators/conv_op_mlu.cc
...
@@ -19,7 +19,7 @@ namespace paddle {
 namespace operators {

 using Tensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;

 template <typename T>
 class MLUConvOpKernel : public framework::OpKernel<T> {
...
paddle/fluid/operators/conv_transpose_op.cc
...
@@ -32,7 +32,7 @@ limitations under the License. */
 namespace paddle {
 namespace operators {

-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;

 framework::OpKernelType ConvTransposeOp::GetExpectedKernelType(
     const framework::ExecutionContext& ctx) const {
...
@@ -44,7 +44,7 @@ framework::OpKernelType ConvTransposeOp::GetExpectedKernelType(
       dev_ctx.cudnn_handle() != nullptr) {
     return framework::OpKernelType(data_type,
                                    ctx.GetPlace(),
-                                   framework::DataLayout::kAnyLayout,
+                                   phi::DataLayout::kAnyLayout,
                                    framework::LibraryType::kCUDNN);
   }
 }
...
@@ -60,15 +60,15 @@ framework::OpKernelType ConvTransposeOp::GetKernelTypeForVar(
   // Only input require reshaping, weights and
   // bias are having shape in NCHW order
   if ((var_name == "Input") &&
-      (expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
-      (tensor.layout() != framework::DataLayout::kMKLDNN)) {
+      (expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+      (tensor.layout() != phi::DataLayout::kMKLDNN)) {
     auto attrs = Attrs();
     auto ar = paddle::framework::AttrReader(attrs);
     const std::string data_format = ar.Get<std::string>("data_format");
-    auto dl = framework::StringToDataLayout(data_format);
+    auto dl = phi::StringToDataLayout(data_format);
     // Some models may have intentionally set "AnyLayout" for pool
     // op. Treat this as NCHW (default data_format value)
-    if (dl != framework::DataLayout::kAnyLayout) {
+    if (dl != phi::DataLayout::kAnyLayout) {
       return framework::OpKernelType(
           expected_kernel_type.data_type_, tensor.place(), dl);
     }
...
@@ -284,7 +284,7 @@ framework::OpKernelType ConvTransposeOpGrad::GetExpectedKernelType(
     library_ = framework::LibraryType::kPlain;
   }
-  framework::DataLayout layout_ = framework::DataLayout::kAnyLayout;
+  phi::DataLayout layout_ = phi::DataLayout::kAnyLayout;
   return framework::OpKernelType(
       OperatorWithKernel::IndicateVarDataType(ctx, "Input"),
       ctx.GetPlace(),
...
@@ -371,7 +371,7 @@ framework::OpKernelType ConvTransposeOpDoubleGrad::GetExpectedKernelType(
     library_ = framework::LibraryType::kPlain;
   }
-  framework::DataLayout layout_ = framework::DataLayout::kAnyLayout;
+  phi::DataLayout layout_ = phi::DataLayout::kAnyLayout;
   return framework::OpKernelType(
       OperatorWithKernel::IndicateVarDataType(ctx, "Input"),
       ctx.GetPlace(),
...
paddle/fluid/operators/conv_transpose_op_mlu.cc
...
@@ -21,7 +21,7 @@ namespace paddle {
 namespace operators {

 using Tensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;

 template <typename T>
 class Conv2DTransposeMLUKernel : public framework::OpKernel<T> {
...
@@ -148,14 +148,13 @@ class Conv2DTransposeGradMLUKernel : public framework::OpKernel<T> {
     const int groups = ctx.Attr<int>("groups");
     std::string padding_algorithm = ctx.Attr<std::string>("padding_algorithm");
     const std::string data_format = ctx.Attr<std::string>("data_format");
-    const framework::DataLayout data_layout =
-        framework::StringToDataLayout(data_format);
+    const phi::DataLayout data_layout = phi::StringToDataLayout(data_format);

     auto in_dims = input->dims();
     auto filter_dims = filter->dims();
     auto in_dims_size = in_dims.size();
-    const bool channel_last = (data_layout == framework::DataLayout::kNHWC);
+    const bool channel_last = (data_layout == phi::DataLayout::kNHWC);

     framework::DDim in_data_dims;
     if (channel_last) {
...
paddle/fluid/operators/conv_transpose_op_npu.cc
...
@@ -124,15 +124,14 @@ class Conv2DTransposeGradNPUKernel : public framework::OpKernel<T> {
     const int groups = ctx.Attr<int>("groups");
     std::string padding_algorithm = ctx.Attr<std::string>("padding_algorithm");
     const std::string data_format = ctx.Attr<std::string>("data_format");
-    const framework::DataLayout data_layout =
-        framework::StringToDataLayout(data_format);
+    const phi::DataLayout data_layout = phi::StringToDataLayout(data_format);

     auto in_dims = input->dims();
     auto filter_dims = filter->dims();
     // auto out_grad_dims = output_grad->dims();
     // const int batch_size = static_cast<int>(input->dims()[0]);

-    const bool channel_last = (data_layout == framework::DataLayout::kNHWC);
+    const bool channel_last = (data_layout == phi::DataLayout::kNHWC);

     framework::DDim in_data_dims;
     if (channel_last) {
...
paddle/fluid/operators/data_norm_op.cc
...
@@ -25,7 +25,7 @@ namespace operators {

 using Tensor = phi::DenseTensor;
 using LoDTensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;

 template <typename T>
 using EigenArrayMap =
...
@@ -68,8 +68,8 @@ class DataNormOp : public framework::OperatorWithKernel {
     }

     const auto x_dims = ctx->GetInputDim("X");
-    const DataLayout data_layout = framework::StringToDataLayout(
-        ctx->Attrs().Get<std::string>("data_layout"));
+    const DataLayout data_layout =
+        phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
     PADDLE_ENFORCE_EQ(x_dims.size() >= 2 && x_dims.size() <= 5,
                       true,
...
@@ -274,8 +274,7 @@ class DataNormKernel<phi::CPUContext, T> : public framework::OpKernel<T> {
   void Compute(const framework::ExecutionContext &ctx) const override {
     // const bool is_test = ctx.Attr<bool>("is_test");
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     const auto *x = ctx.Input<phi::DenseTensor>("X");
     const auto &x_dims = x->dims();
...
@@ -445,8 +444,8 @@ class DataNormGradOp : public framework::OperatorWithKernel {
                    "DataNormGrad");

     const auto x_dims = ctx->GetInputDim("X");
-    const DataLayout data_layout = framework::StringToDataLayout(
-        ctx->Attrs().Get<std::string>("data_layout"));
+    const DataLayout data_layout =
+        phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
     const int C =
         (data_layout == DataLayout::kNCHW ? x_dims[1]
                                           : x_dims[x_dims.size() - 1]);
...
@@ -511,8 +510,7 @@ class DataNormGradKernel<phi::CPUContext, T> : public framework::OpKernel<T> {
     const auto *means = ctx.Input<phi::DenseTensor>("Means");

     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);

     // Get the size for each dimension.
     // NCHW [batch_size, in_channels, in_height, in_width]
...
paddle/fluid/operators/data_norm_op.cu
...
@@ -28,7 +28,7 @@ namespace operators {

 using Tensor = phi::DenseTensor;
 using LoDTensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 using platform::PADDLE_CUDA_NUM_THREADS;

 inline int GET_BLOCKS(const int N) {
...
paddle/fluid/operators/dequantize_op.cc
...
@@ -26,7 +26,7 @@ framework::OpKernelType DeQuantOp::GetExpectedKernelType(
   return framework::OpKernelType(input_data_type,
                                  ctx.GetPlace(),
-                                 framework::DataLayout::kMKLDNN,
+                                 phi::DataLayout::kMKLDNN,
                                  framework::LibraryType::kMKLDNN);
 }
...
paddle/fluid/operators/detection/prior_box_op.cc
...
@@ -49,7 +49,7 @@ class PriorBoxOp : public framework::OperatorWithKernel {
     }
     return framework::OpKernelType(input_input_type,
                                    ctx.GetPlace(),
-                                   framework::DataLayout::kMKLDNN,
+                                   phi::DataLayout::kMKLDNN,
                                    framework::LibraryType::kMKLDNN,
                                    customized_type_value);
   }
...
paddle/fluid/operators/elementwise/elementwise_op.h
...
@@ -115,7 +115,7 @@ class ElementwiseOp : public framework::OperatorWithKernel {
       bool should_rotate =
           ctx->IsRunMKLDNNKernel() &&
           (platform::MKLDNNDeviceContext::tls().get_cur_paddle_data_layout() ==
-           framework::DataLayout::kNHWC) &&
+           phi::DataLayout::kNHWC) &&
           (x_dims.size() >= 3 || y_dims.size() >= 3);
       if (should_rotate) {
         // Pick bigger shape and rotate this one
...
@@ -174,15 +174,13 @@ class ElementwiseOp : public framework::OperatorWithKernel {
     // When elementwise is first oneDNN op (there was some non oneDNN op
     // previously)
     // then we also need to rotate shape NHWC -> NCWH
-    if ((expected_kernel_type.data_layout_ ==
-         framework::DataLayout::kMKLDNN) &&
-        (tensor.layout() != framework::DataLayout::kMKLDNN) &&
+    if ((expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+        (tensor.layout() != phi::DataLayout::kMKLDNN) &&
         paddle::platform::MKLDNNDeviceContext::tls()
-                .get_cur_paddle_data_layout() ==
-            framework::DataLayout::kNHWC) {
+                .get_cur_paddle_data_layout() == phi::DataLayout::kNHWC) {
       return framework::OpKernelType(expected_kernel_type.data_type_,
                                      tensor.place(),
-                                     framework::DataLayout::kNHWC);
+                                     phi::DataLayout::kNHWC);
     }
 #endif
     return framework::OpKernelType(
...
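The elementwise hunks guard a shape rotation on three conditions: the op runs its oneDNN kernel, the current paddle data layout is NHWC, and at least one operand has rank three or more. A compact sketch of that predicate, standalone and with an assumed stand-in for the device-context query:

    #include <cstddef>

    enum class DataLayout { kNCHW, kNHWC };

    // Stand-in for MKLDNNDeviceContext::tls().get_cur_paddle_data_layout().
    DataLayout CurrentPaddleLayout() { return DataLayout::kNHWC; }

    // Mirrors the should_rotate predicate in ElementwiseOp: only rotate
    // NHWC shapes that actually have spatial dimensions to rotate.
    bool ShouldRotate(bool run_mkldnn, std::size_t x_rank, std::size_t y_rank) {
      return run_mkldnn &&
             CurrentPaddleLayout() == DataLayout::kNHWC &&
             (x_rank >= 3 || y_rank >= 3);
    }

    int main() {
      // NHWC conv activations broadcast against a rank-1 bias: rotate.
      return ShouldRotate(true, 4, 1) ? 0 : 1;
    }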
paddle/fluid/operators/elementwise/mkldnn/elementwise_mkldnn_op.h
...
@@ -27,7 +27,7 @@ namespace operators {

 using dnnl::memory;
 using dnnl::primitive;
 using dnnl::stream;
-using framework::DataLayout;
+using phi::DataLayout;

 inline std::vector<int64_t> CalculateBroadcastedDims(
     const phi::DenseTensor* x, const phi::DenseTensor* y) {
...
paddle/fluid/operators/fc_op.cc
...
@@ -136,7 +136,7 @@ class FCOp : public framework::OperatorWithKernel {
                                           : kFCMKLDNNFP32;
       return framework::OpKernelType(input_data_type,
                                      ctx.GetPlace(),
-                                     framework::DataLayout::kMKLDNN,
+                                     phi::DataLayout::kMKLDNN,
                                      framework::LibraryType::kMKLDNN,
                                      customized_type_value);
     }
...
paddle/fluid/operators/fsp_op.cc
...
@@ -68,7 +68,7 @@ class FSPOp : public framework::OperatorWithKernel {
   framework::OpKernelType GetExpectedKernelType(
       const framework::ExecutionContext& ctx) const override {
     framework::LibraryType library_{framework::LibraryType::kPlain};
-    framework::DataLayout layout_ = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout_ = phi::DataLayout::kAnyLayout;
     return framework::OpKernelType(
         OperatorWithKernel::IndicateVarDataType(ctx, "X"),
         ctx.device_context(),
...
paddle/fluid/operators/fused/fused_bn_activation_op.cc
...
@@ -190,7 +190,7 @@ framework::OpKernelType FusedBatchNormActOp::GetExpectedKernelType(
                         "Variance input should be of float type"));

   framework::LibraryType library = framework::LibraryType::kPlain;
-  framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+  phi::DataLayout layout = phi::DataLayout::kAnyLayout;
   return framework::OpKernelType(
       input_data_type, ctx.GetPlace(), layout, library);
...
@@ -326,7 +326,7 @@ framework::OpKernelType FusedBatchNormActGradOp::GetExpectedKernelType(
   }

   framework::LibraryType library = framework::LibraryType::kPlain;
-  framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+  phi::DataLayout layout = phi::DataLayout::kAnyLayout;
   return framework::OpKernelType(
       OperatorWithKernel::IndicateVarDataType(ctx, "X"),
...
paddle/fluid/operators/fused/fused_bn_add_activation_op.cc
...
@@ -155,7 +155,7 @@ framework::OpKernelType FusedBatchNormAddActOp::GetExpectedKernelType(
       platform::errors::InvalidArgument("Bias input should be of float type"));

   framework::LibraryType library = framework::LibraryType::kPlain;
-  framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+  phi::DataLayout layout = phi::DataLayout::kAnyLayout;
   return framework::OpKernelType(
       input_data_type, ctx.GetPlace(), layout, library);
...
@@ -276,7 +276,7 @@ framework::OpKernelType FusedBatchNormAddActGradOp::GetExpectedKernelType(
   }

   framework::LibraryType library = framework::LibraryType::kPlain;
-  framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+  phi::DataLayout layout = phi::DataLayout::kAnyLayout;
   return framework::OpKernelType(
       OperatorWithKernel::IndicateVarDataType(ctx, "X"),
...
paddle/fluid/operators/fused/fused_gemm_epilogue_op.cc
...
@@ -149,7 +149,7 @@ class FusedGemmEpilogueOp : public framework::OperatorWithKernel {
   framework::OpKernelType GetExpectedKernelType(
       const framework::ExecutionContext& ctx) const {
     framework::LibraryType library = framework::LibraryType::kPlain;
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     auto data_type = OperatorWithKernel::IndicateVarDataType(ctx, "X");
     return framework::OpKernelType(data_type, ctx.GetPlace(), layout, library);
   }
...
@@ -323,7 +323,7 @@ class FusedGemmEpilogueGradOp : public framework::OperatorWithKernel {
   framework::OpKernelType GetExpectedKernelType(
       const framework::ExecutionContext& ctx) const {
     framework::LibraryType library = framework::LibraryType::kPlain;
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     auto data_type = OperatorWithKernel::IndicateVarDataType(ctx, "DOut");
     return framework::OpKernelType(data_type, ctx.GetPlace(), layout, library);
   }
...
paddle/fluid/operators/fused/multi_gru_op.cc
...
@@ -143,7 +143,7 @@ framework::OpKernelType MultiGRUOp::GetExpectedKernelType(
   return framework::OpKernelType(
       OperatorWithKernel::IndicateVarDataType(ctx, "X"),
       ctx.GetPlace(),
-      framework::DataLayout::kMKLDNN,
+      phi::DataLayout::kMKLDNN,
       framework::LibraryType::kMKLDNN);
 }
...
paddle/fluid/operators/fused/resnet_basic_block_op.cc
...
@@ -249,7 +249,7 @@ class ResNetBasicBlockOp : public framework::OperatorWithKernel {
                           "Bias input should be of float type"));

     framework::LibraryType library = framework::LibraryType::kPlain;
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     return framework::OpKernelType(
         input_data_type, ctx.GetPlace(), layout, library);
   }
...
@@ -554,7 +554,7 @@ class ResNetBasicBlockGradOp : public framework::OperatorWithKernel {
                        "Can not find Y@GRAD in the execution context."));

     framework::LibraryType library = framework::LibraryType::kPlain;
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     return framework::OpKernelType(
         OperatorWithKernel::IndicateVarDataType(ctx, "X"),
...
paddle/fluid/operators/fused/resnet_unit_op.cc
...
@@ -220,7 +220,7 @@ class ResNetUnitOp : public framework::OperatorWithKernel {
         platform::errors::InvalidArgument(
             "Bias input should be of float type"));

     framework::LibraryType library = framework::LibraryType::kPlain;
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     return framework::OpKernelType(
         input_data_type, ctx.GetPlace(), layout, library);
   }
...
@@ -402,7 +402,7 @@ class ResNetUnitGradOp : public framework::OperatorWithKernel {
                        "Can not find Y@GRAD in the execution context."));

     framework::LibraryType library = framework::LibraryType::kPlain;
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     return framework::OpKernelType(
         OperatorWithKernel::IndicateVarDataType(ctx, "X"),
...
paddle/fluid/operators/grid_sampler_op.cc
...
@@ -44,7 +44,7 @@ class GridSampleOp : public framework::OperatorWithKernel {
     return framework::OpKernelType(
         OperatorWithKernel::IndicateVarDataType(ctx, "X"),
         ctx.GetPlace(),
-        framework::DataLayout::kAnyLayout,
+        phi::DataLayout::kAnyLayout,
         library_);
   }
 };
...
@@ -155,7 +155,7 @@ class GridSampleOpGrad : public framework::OperatorWithKernel {
     return framework::OpKernelType(
         OperatorWithKernel::IndicateVarDataType(ctx, "X"),
         ctx.GetPlace(),
-        framework::DataLayout::kAnyLayout,
+        phi::DataLayout::kAnyLayout,
         library_);
   }
 };
...
paddle/fluid/operators/grid_sampler_op_mlu.cc
...
@@ -47,8 +47,7 @@ class GridSamplerMLUKernel : public framework::OpKernel<T> {
     const std::string mode = ctx.Attr<std::string>("mode");
     const std::string padding_mode = ctx.Attr<std::string>("padding_mode");
     bool align_corners = ctx.Attr<bool>("align_corners");
-    const std::string data_format =
-        paddle::framework::DataLayoutToString(input->layout());
+    const std::string data_format = phi::DataLayoutToString(input->layout());
     PADDLE_ENFORCE_EQ(
         mode == "bilinear",
...
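The string conversion helpers moved together with the enum: the MLU kernel above now reaches DataLayoutToString through phi:: instead of paddle::framework::. A short sketch of the round trip between the enum and its string form; the function names are taken from the hunks, while the header path is an assumption:

#include "paddle/phi/common/layout.h"  // assumed location of phi::DataLayout

// Enum -> string -> enum round trip with the relocated helpers.
// `input` is a const phi::DenseTensor*, as in the kernel above.
const std::string data_format = phi::DataLayoutToString(input->layout());
const phi::DataLayout restored = phi::StringToDataLayout(data_format);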
paddle/fluid/operators/group_norm_op.cc
...
@@ -30,7 +30,7 @@ namespace operators {
 using Tensor = phi::DenseTensor;
 using LoDTensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 class GroupNormOp : public framework::OperatorWithKernel {
  public:
...
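Where a file already aliases the enum, as group_norm_op.cc does above, the migration is a single-line edit: every later DataLayout:: reference in the file resolves through the alias and needs no change. Roughly:

// Before: using DataLayout = framework::DataLayout;
using DataLayout = phi::DataLayout;  // the only edit such a file needs

// Existing call sites keep compiling unchanged against the alias:
DataLayout layout = DataLayout::kAnyLayout;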
paddle/fluid/operators/group_norm_op.cu
...
@@ -27,7 +27,7 @@ namespace cub = hipcub;
 namespace paddle {
 namespace operators {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 enum GroupNormKernelFlags { kHasScale = 1, kHasBias = 2 };
 #define ALIGN_BYTES 16
...
@@ -265,8 +265,7 @@ class GroupNormKernel<phi::GPUContext, T> : public framework::OpKernel<T> {
  public:
   void Compute(const framework::ExecutionContext& ctx) const override {
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     const float epsilon = ctx.Attr<float>("epsilon");
     auto* scale = ctx.Input<phi::DenseTensor>("Scale");
     auto* bias = ctx.Input<phi::DenseTensor>("Bias");
...
@@ -613,8 +612,7 @@ class GroupNormGradKernel<phi::GPUContext, T> : public framework::OpKernel<T> {
  public:
   void Compute(const framework::ExecutionContext& ctx) const override {
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     const float epsilon = ctx.Attr<float>("epsilon");
     auto* x = ctx.Input<phi::DenseTensor>("X");
     auto* y = ctx.Input<phi::DenseTensor>("Y");
...
paddle/fluid/operators/group_norm_op.h
...
@@ -30,15 +30,14 @@ namespace operators {
 using Tensor = phi::DenseTensor;
 using LoDTensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 template <typename DeviceContext, typename T>
 class GroupNormKernel : public framework::OpKernel<T> {
  public:
   void Compute(const framework::ExecutionContext& ctx) const override {
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     const float epsilon = ctx.Attr<float>("epsilon");
     auto* scale = ctx.Input<phi::DenseTensor>("Scale");
     auto* bias = ctx.Input<phi::DenseTensor>("Bias");
...
@@ -218,8 +217,7 @@ class GroupNormGradKernel : public framework::OpKernel<T> {
  public:
   void Compute(const framework::ExecutionContext& ctx) const override {
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     const float epsilon = ctx.Attr<float>("epsilon");
     auto* x = ctx.Input<phi::DenseTensor>("Y");
     auto* var = ctx.Input<phi::DenseTensor>("Variance");
...
paddle/fluid/operators/group_norm_op_npu.cc
...
@@ -138,8 +138,7 @@ class GroupNormNPUKernel : public framework::OpKernel<T> {
  public:
   void Compute(const framework::ExecutionContext& ctx) const override {
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     const float epsilon = ctx.Attr<float>("epsilon");
     auto* scale = ctx.Input<phi::DenseTensor>("Scale");
     auto* bias = ctx.Input<phi::DenseTensor>("Bias");
...
@@ -212,8 +211,7 @@ class GroupNormGradNPUKernel : public framework::OpKernel<T> {
  public:
   void Compute(const framework::ExecutionContext& ctx) const override {
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     const float epsilon = ctx.Attr<float>("epsilon");
     auto* y = ctx.Input<phi::DenseTensor>("Y");
     auto* var = ctx.Input<phi::DenseTensor>("Variance");
...
paddle/fluid/operators/inplace_abn_op.cc
...
@@ -62,7 +62,7 @@ class InplaceABNOp : public paddle::operators::BatchNormOp {
         "Variance input should be of float type"));
     framework::LibraryType library = framework::LibraryType::kPlain;
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     return framework::OpKernelType(
         input_data_type, ctx.GetPlace(), layout, library);
...
@@ -118,8 +118,8 @@ class InplaceABNGradOp : public paddle::operators::BatchNormGradOp {
     OP_INOUT_CHECK(ctx->HasInput("Y"), "Input", "Y", "InplaceABNGrad");
     const auto y_dims = ctx->GetInputDim("Y");
-    const DataLayout data_layout = framework::StringToDataLayout(
-        ctx->Attrs().Get<std::string>("data_layout"));
+    const DataLayout data_layout =
+        phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
     const int C = ((ctx->IsRunMKLDNNKernel() == true) ||
                    (data_layout == DataLayout::kNCHW)
...
@@ -155,7 +155,7 @@ class InplaceABNGradOp : public paddle::operators::BatchNormGradOp {
           platform::errors::InvalidArgument("gradient variable of Y is empty"));
     }
     framework::LibraryType library = framework::LibraryType::kPlain;
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     return framework::OpKernelType(
         input_data_type, ctx.GetPlace(), layout, library);
...
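The InplaceABNGradOp hunk also shows the converted enum being consumed: the channel count C is read from a different axis of y_dims depending on whether the layout is kNCHW. The hunk is cut off before the index choice, so the indices below follow the usual NCHW/NHWC convention and are an assumption (the IsRunMKLDNNKernel branch visible in the hunk is folded out here):

// Assumed continuation of the truncated hunk: channels sit at index 1 for
// NCHW and at the last index for NHWC-style layouts.
const phi::DataLayout data_layout =
    phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
const int C = (data_layout == phi::DataLayout::kNCHW)
                  ? y_dims[1]
                  : y_dims[y_dims.size() - 1];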
paddle/fluid/operators/instance_norm_op.h
...
@@ -24,7 +24,7 @@ namespace operators {
 using Tensor = phi::DenseTensor;
 using LoDTensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 class InstanceNormOp : public framework::OperatorWithKernel {
  public:
...
paddle/fluid/operators/instance_norm_op_npu.cc
...
@@ -60,10 +60,10 @@ class InstanceNormNPUKernel : public framework::OpKernel<T> {
     tmp_x.ShareDataWith(*x);
     tmp_x.Resize(phi::make_ddim(tmp_x_dims));
-    tmp_x.set_layout(paddle::framework::DataLayout::NCDHW);
+    tmp_x.set_layout(phi::DataLayout::NCDHW);
     tmp_y.ShareDataWith(*y);
     tmp_y.Resize(phi::make_ddim(tmp_y_dims));
-    tmp_y.set_layout(paddle::framework::DataLayout::NCDHW);
+    tmp_y.set_layout(phi::DataLayout::NCDHW);
     NpuOpRunner runner;
...
paddle/fluid/operators/interpolate_op.cc
...
@@ -23,7 +23,7 @@
 namespace paddle {
 namespace operators {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 static void Interpolate1DInferShapeCheck(framework::InferShapeContext* ctx) {
   auto dim_x = ctx->GetInputDim("X");
...
@@ -35,8 +35,8 @@ static void Interpolate1DInferShapeCheck(framework::InferShapeContext* ctx) {
           "Interpolation method can only be \"linear\" when"
           "Input(X) dimension is 3, but got method = %s .",
           interp_method));
-  const DataLayout data_layout = framework::StringToDataLayout(
-      ctx->Attrs().Get<std::string>("data_layout"));
+  const DataLayout data_layout =
+      phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
   if (ctx->HasInputs("SizeTensor")) {
     // top prority size
...
@@ -124,8 +124,8 @@ static void Interpolate2DInferShapeCheck(framework::InferShapeContext* ctx) {
           "or \"nearest\" or \"bicubic\" when "
           "Input(X) dimension is 4, but got method is %s.",
           interp_method));
-  const DataLayout data_layout = framework::StringToDataLayout(
-      ctx->Attrs().Get<std::string>("data_layout"));
+  const DataLayout data_layout =
+      phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
   if (ctx->HasInputs("SizeTensor")) {
     // top prority size
...
@@ -219,8 +219,8 @@ static void Interpolate3DInferShapeCheck(framework::InferShapeContext* ctx) {
           "Interpolation method can only be \"trilinear\" when Input(X) "
           "dimension is 5, but got method = %s .",
           interp_method));
-  const DataLayout data_layout = framework::StringToDataLayout(
-      ctx->Attrs().Get<std::string>("data_layout"));
+  const DataLayout data_layout =
+      phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
   if (ctx->HasInputs("SizeTensor")) {
     // top prority size
...
@@ -348,15 +348,15 @@ class InterpolateOp : public framework::OperatorWithKernel {
       const phi::DenseTensor& tensor,
       const framework::OpKernelType& expected_kernel_type) const override {
 #ifdef PADDLE_WITH_MKLDNN
-    if ((expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
-        (tensor.layout() != framework::DataLayout::kMKLDNN)) {
+    if ((expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+        (tensor.layout() != phi::DataLayout::kMKLDNN)) {
       auto attrs = Attrs();
       auto ar = paddle::framework::AttrReader(attrs);
       const std::string data_format = ar.Get<std::string>("data_layout");
-      auto dl = framework::StringToDataLayout(data_format);
+      auto dl = phi::StringToDataLayout(data_format);
       // Some models may have intentionally set "AnyLayout" for pool
       // op. Treat this as NCHW (default data_format value)
-      if (dl != framework::DataLayout::kAnyLayout) {
+      if (dl != phi::DataLayout::kAnyLayout) {
         return framework::OpKernelType(
             expected_kernel_type.data_type_, tensor.place(), dl);
       }
...
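The GetKernelTypeForVar hunk above (mirrored later in interpolate_v2_op.cc and lrn_op.cc) always has the same shape: if the op expects kMKLDNN but this variable is not in that layout yet, re-derive the layout from the op attribute and use it unless it is kAnyLayout. A condensed sketch of the @@ -348 hunk, where only the phi:: spellings differ from the pre-change code:

// Prefer the layout named by the attribute over the expected kMKLDNN
// layout, unless the attribute says "AnyLayout" (treated as NCHW default).
if ((expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
    (tensor.layout() != phi::DataLayout::kMKLDNN)) {
  auto attrs = Attrs();
  auto ar = paddle::framework::AttrReader(attrs);
  auto dl = phi::StringToDataLayout(ar.Get<std::string>("data_layout"));
  if (dl != phi::DataLayout::kAnyLayout) {
    return framework::OpKernelType(
        expected_kernel_type.data_type_, tensor.place(), dl);
  }
}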
paddle/fluid/operators/interpolate_op.cu
...
@@ -19,7 +19,7 @@
 namespace paddle {
 namespace operators {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 template <typename T>
 __global__ void KeNearestNeighborInterpFw(const T* in,
...
@@ -917,7 +917,7 @@ static void Interpolate1DCUDAFwd(const framework::ExecutionContext& ctx,
   auto* input_data = input.data<T>();
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input.dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -1009,7 +1009,7 @@ static void Interpolate2DCUDAFwd(const framework::ExecutionContext& ctx,
   auto* input_data = input.data<T>();
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input.dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -1161,7 +1161,7 @@ static void Interpolate3DCUDAFwd(const framework::ExecutionContext& ctx,
   auto* input_data = input.data<T>();
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input.dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -1292,7 +1292,7 @@ static void Interpolate1DCUDABwd(const framework::ExecutionContext& ctx,
                                  const Tensor output_grad) {
   auto* input = ctx.Input<phi::DenseTensor>("X");
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input->dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -1383,7 +1383,7 @@ static void Interpolate2DCUDABwd(const framework::ExecutionContext& ctx,
                                  const Tensor output_grad) {
   auto* input = ctx.Input<phi::DenseTensor>("X");
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input->dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -1529,7 +1529,7 @@ static void Interpolate3DCUDABwd(const framework::ExecutionContext& ctx,
                                  const phi::DenseTensor& output_grad) {
   auto* input = ctx.Input<phi::DenseTensor>("X");
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input->dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
paddle/fluid/operators/interpolate_op.h
...
@@ -27,7 +27,7 @@ template <typename T,
           typename IndexType = Eigen::DenseIndex>
 using EigenTensor = framework::EigenTensor<T, D, MajorType, IndexType>;
 using Tensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 inline std::vector<int> get_new_shape(
     const std::vector<const phi::DenseTensor*>& list_new_shape_tensor) {
...
@@ -858,7 +858,7 @@ static void Interpolate1DCPUFwd(const framework::ExecutionContext& ctx,
                                 const phi::DenseTensor& input,
                                 phi::DenseTensor* output) {
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input.dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -932,7 +932,7 @@ static void Interpolate2DCPUFwd(const framework::ExecutionContext& ctx,
                                 const phi::DenseTensor& input,
                                 phi::DenseTensor* output) {
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input.dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -1049,7 +1049,7 @@ static void Interpolate3DCPUFwd(const framework::ExecutionContext& ctx,
                                 const phi::DenseTensor& input,
                                 phi::DenseTensor* output) {
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input.dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -1162,7 +1162,7 @@ static void Interpolate1DCPUBwd(const framework::ExecutionContext& ctx,
                                 const phi::DenseTensor& output_grad) {
   auto* input = ctx.Input<phi::DenseTensor>("X");
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input->dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -1236,7 +1236,7 @@ static void Interpolate2DCPUBwd(const framework::ExecutionContext& ctx,
                                 const phi::DenseTensor& output_grad) {
   auto* input = ctx.Input<phi::DenseTensor>("X");
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input->dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -1347,7 +1347,7 @@ static void Interpolate3DCPUBwd(const framework::ExecutionContext& ctx,
                                 const Tensor output_grad) {
   auto* input = ctx.Input<phi::DenseTensor>("X");
   const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-  const DataLayout data_layout = framework::StringToDataLayout(data_layout_str);
+  const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
   int n, c, in_d, in_h, in_w;
   ExtractNCDWH(input->dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
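ExtractNCDWH, called by every interpolate kernel above, unpacks N/C/D/H/W from the input dims according to the layout; its real implementation lives elsewhere in the tree. A hypothetical standalone 4-D analogue, just to show what the layout argument decides (the function name and body here are illustrative, not Paddle's):

#include <vector>

// Hypothetical illustration (not Paddle's ExtractNCDWH): the layout flag
// selects which positions of a 4-D shape hold channels vs. spatial dims.
void ExtractNCHWExample(const std::vector<int>& dims,
                        phi::DataLayout layout,
                        int* n, int* c, int* h, int* w) {
  *n = dims[0];
  if (layout == phi::DataLayout::kNHWC) {  // N, H, W, C
    *h = dims[1]; *w = dims[2]; *c = dims[3];
  } else {  // default kNCHW: N, C, H, W
    *c = dims[1]; *h = dims[2]; *w = dims[3];
  }
}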
paddle/fluid/operators/interpolate_op_npu.cc
...
@@ -21,7 +21,7 @@ limitations under the License. */
 namespace paddle {
 namespace operators {
 using Tensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 inline static void CheckArgument(const framework::ExecutionContext& ctx) {
   const std::string interp_method = ctx.Attr<std::string>("interp_method");
...
@@ -129,8 +129,7 @@ class InterpolateNPUKernel : public framework::OpKernel<T> {
     const std::string data_layout_str =
         ctx.Attr<std::string>("data_layout");  // kNCHW or kNHWC
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     int32_t n, c, h, w, out_h, out_w;
     ExtractNCHW(input_dims, data_layout, &n, &c, &h, &w);
...
@@ -180,8 +179,7 @@ class InterpolateGradNPUKernel : public framework::OpKernel<T> {
     const std::string data_layout_str =
         ctx.Attr<std::string>("data_layout");  // kNCHW or kNHWC
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     int32_t n, c, h, w, out_h, out_w;
     ExtractNCHW(input_dims, data_layout, &n, &c, &h, &w);
...
paddle/fluid/operators/interpolate_v2_op.cc
...
@@ -25,7 +25,7 @@
 namespace paddle {
 namespace operators {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 static void Interpolate1DInferShapeCheck(framework::InferShapeContext* ctx) {
   auto dim_x = ctx->GetInputDim("X");
...
@@ -37,8 +37,8 @@ static void Interpolate1DInferShapeCheck(framework::InferShapeContext* ctx) {
           "Interpolation method can only be \"linear\" when"
           "Input(X) dimension is 3, but got method = %s .",
           interp_method));
-  const DataLayout data_layout = framework::StringToDataLayout(
-      ctx->Attrs().Get<std::string>("data_layout"));
+  const DataLayout data_layout =
+      phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
   for (int i = 0; i < dim_x.size(); ++i) {
     PADDLE_ENFORCE_NE(dim_x[i],
                       0,
...
@@ -149,8 +149,8 @@ static void Interpolate2DInferShapeCheck(framework::InferShapeContext* ctx) {
           "Interpolation method can only be \"bilinear\" or \"nearest\" when "
           "Input(X) dimension is 4, but got method = %s.",
           interp_method));
-  const DataLayout data_layout = framework::StringToDataLayout(
-      ctx->Attrs().Get<std::string>("data_layout"));
+  const DataLayout data_layout =
+      phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
   for (int i = 0; i < dim_x.size(); ++i) {
     PADDLE_ENFORCE_NE(dim_x[i],
...
@@ -278,8 +278,8 @@ static void Interpolate3DInferShapeCheck(framework::InferShapeContext* ctx) {
           "\"nearest\" when Input(X) "
           "dimension is 5, but got method = %s .",
           interp_method));
-  const DataLayout data_layout = framework::StringToDataLayout(
-      ctx->Attrs().Get<std::string>("data_layout"));
+  const DataLayout data_layout =
+      phi::StringToDataLayout(ctx->Attrs().Get<std::string>("data_layout"));
   for (int i = 0; i < dim_x.size(); ++i) {
     PADDLE_ENFORCE_NE(dim_x[i],
...
@@ -452,15 +452,15 @@ class InterpolateV2Op : public framework::OperatorWithKernel {
       const phi::DenseTensor& tensor,
       const framework::OpKernelType& expected_kernel_type) const override {
 #ifdef PADDLE_WITH_MKLDNN
-    if ((expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
-        (tensor.layout() != framework::DataLayout::kMKLDNN)) {
+    if ((expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+        (tensor.layout() != phi::DataLayout::kMKLDNN)) {
       auto attrs = Attrs();
       auto ar = paddle::framework::AttrReader(attrs);
       const std::string data_format = ar.Get<std::string>("data_layout");
-      auto dl = framework::StringToDataLayout(data_format);
+      auto dl = phi::StringToDataLayout(data_format);
       // Some models may have intentionally set "AnyLayout" for pool
       // op. Treat this as NCHW (default data_format value)
-      if (dl != framework::DataLayout::kAnyLayout) {
+      if (dl != phi::DataLayout::kAnyLayout) {
         return framework::OpKernelType(
             expected_kernel_type.data_type_, tensor.place(), dl);
       }
...
paddle/fluid/operators/interpolate_v2_op_mlu.cc
...
@@ -20,7 +20,7 @@ limitations under the License. */
 namespace paddle {
 namespace operators {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 inline std::vector<int> get_new_shape_mlu(
     const std::vector<const phi::DenseTensor*>& list_new_shape_tensor) {
...
@@ -61,8 +61,7 @@ class InterpolateV2MLUKernel : public framework::OpKernel<T> {
                           "range less or equal than 5. "));
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     int n, c, in_d, in_h, in_w;
     ExtractNCDWH(input_dims, data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -373,8 +372,7 @@ class InterpolateV2GradMLUKernel : public framework::OpKernel<T> {
     auto* input = ctx.Input<phi::DenseTensor>("X");
     auto input_dims = input->dims();
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     int n, c, in_d, in_h, in_w;
     ExtractNCDWH(input->dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
paddle/fluid/operators/interpolate_v2_op_npu.cc
...
@@ -20,7 +20,7 @@ namespace paddle {
 namespace operators {
 using Tensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 using DDim = framework::DDim;
 using fp16 = paddle::platform::float16;
...
@@ -499,8 +499,7 @@ class InterpolateV2NPUKernel : public framework::OpKernel<T> {
                           "NPU Interpolate Kernel only support 4-D Tensor."));
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     int n, c, in_d, in_h, in_w;
     phi::funcs::ExtractNCDWH(
         input_dims, data_layout, &n, &c, &in_d, &in_h, &in_w);
...
@@ -652,8 +651,7 @@ class InterpolateV2NPUGradKernel : public framework::OpKernel<T> {
         ctx.Output<phi::DenseTensor>(framework::GradVarName("X"));
     const std::string data_layout_str = ctx.Attr<std::string>("data_layout");
-    const DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const DataLayout data_layout = phi::StringToDataLayout(data_layout_str);
     int n, c, in_d, in_h, in_w;
     phi::funcs::ExtractNCDWH(
         input->dims(), data_layout, &n, &c, &in_d, &in_h, &in_w);
...
paddle/fluid/operators/layer_norm_op.cc
...
@@ -26,7 +26,7 @@ namespace operators {
 using Tensor = phi::DenseTensor;
 using LoDTensor = phi::DenseTensor;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 class LayerNormOp : public framework::OperatorWithKernel {
  public:
...
@@ -118,7 +118,7 @@ class LayerNormOp : public framework::OperatorWithKernel {
             ctx.Input<phi::DenseTensor>("X")->dims().size() - 1) {
       return framework::OpKernelType(input_data_type,
                                      ctx.GetPlace(),
-                                     framework::DataLayout::kMKLDNN,
+                                     phi::DataLayout::kMKLDNN,
                                      framework::LibraryType::kMKLDNN);
     }
 #endif
...
@@ -229,7 +229,7 @@ class LayerNormGradOp : public framework::OperatorWithKernel {
         t, platform::errors::NotFound("Y@GRAD of LayerNorm Op is not found."));
     framework::LibraryType library = framework::LibraryType::kPlain;
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     return framework::OpKernelType(
         OperatorWithKernel::IndicateVarDataType(ctx, "X"),
...
paddle/fluid/operators/layer_norm_op_npu.cc
...
@@ -21,7 +21,7 @@ namespace operators {
 using Tensor = phi::DenseTensor;
 using DDim = framework::DDim;
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 template <typename T>
 class NormDataType;
...
paddle/fluid/operators/lrn_op.cc
...
@@ -27,7 +27,7 @@ limitations under the License. */
 namespace paddle {
 namespace operators {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 template <typename T>
 struct LRNFunctor<phi::CPUContext, T> {
...
@@ -233,15 +233,15 @@ class LRNOp : public framework::OperatorWithKernel {
       const phi::DenseTensor& tensor,
       const framework::OpKernelType& expected_kernel_type) const override {
 #ifdef PADDLE_WITH_MKLDNN
-    if ((expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
-        (tensor.layout() != framework::DataLayout::kMKLDNN)) {
+    if ((expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+        (tensor.layout() != phi::DataLayout::kMKLDNN)) {
       auto attrs = Attrs();
       auto ar = paddle::framework::AttrReader(attrs);
       const std::string data_format = ar.Get<std::string>("data_format");
-      auto dl = framework::StringToDataLayout(data_format);
+      auto dl = phi::StringToDataLayout(data_format);
       // Some models may have intentionally set "AnyLayout" for lrn
       // op. Treat this as NCHW (default data_format value)
-      if (dl != framework::DataLayout::kAnyLayout) {
+      if (dl != phi::DataLayout::kAnyLayout) {
         return framework::OpKernelType(
             expected_kernel_type.data_type_, tensor.place(), dl);
       }
...
@@ -357,15 +357,15 @@ class LRNOpGrad : public framework::OperatorWithKernel {
       const phi::DenseTensor& tensor,
       const framework::OpKernelType& expected_kernel_type) const override {
 #ifdef PADDLE_WITH_MKLDNN
-    if ((expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
-        (tensor.layout() != framework::DataLayout::kMKLDNN)) {
+    if ((expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+        (tensor.layout() != phi::DataLayout::kMKLDNN)) {
       auto attrs = Attrs();
       auto ar = paddle::framework::AttrReader(attrs);
       const std::string data_format = ar.Get<std::string>("data_format");
-      auto dl = framework::StringToDataLayout(data_format);
+      auto dl = phi::StringToDataLayout(data_format);
       // Some models may have intentionally set "AnyLayout" for lrn
       // op. Treat this as NCHW (default data_format value)
-      if (dl != framework::DataLayout::kAnyLayout) {
+      if (dl != phi::DataLayout::kAnyLayout) {
         return framework::OpKernelType(
             expected_kernel_type.data_type_, tensor.place(), dl);
       }
...
paddle/fluid/operators/lrn_op.cu
...
@@ -17,7 +17,7 @@ limitations under the License. */
 namespace paddle {
 namespace operators {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 template <typename T>
 __global__ void KeCMRNormFillScale(int img_size,
...
paddle/fluid/operators/lrn_op.h
...
@@ -24,7 +24,7 @@ limitations under the License. */
 namespace paddle {
 namespace operators {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 template <typename place, typename T>
 struct LRNFunctor {
...
@@ -57,8 +57,8 @@ class LRNKernel : public framework::OpKernel<T> {
     auto x_dims = x.dims();
     const std::string data_layout_str = ctx.Attr<std::string>("data_format");
-    const framework::DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const phi::DataLayout data_layout =
+        phi::StringToDataLayout(data_layout_str);
     // NCHW
     int N = x_dims[0];
     int C = (data_layout != DataLayout::kNHWC ? x_dims[1] : x_dims[3]);
...
@@ -149,8 +149,8 @@ class LRNGradKernel : public framework::OpKernel<T> {
         *ctx.Input<phi::DenseTensor>(framework::GradVarName("Out"));
     const phi::DenseTensor& mid = *ctx.Input<phi::DenseTensor>("MidOut");
     const std::string data_layout_str = ctx.Attr<std::string>("data_format");
-    const framework::DataLayout data_layout =
-        framework::StringToDataLayout(data_layout_str);
+    const phi::DataLayout data_layout =
+        phi::StringToDataLayout(data_layout_str);
     auto x_g = ctx.Output<phi::DenseTensor>(framework::GradVarName("X"));
     x_g->mutable_data<T>(ctx.GetPlace());
...
paddle/fluid/operators/math/im2col.h
...
@@ -24,7 +24,7 @@ namespace paddle {
 namespace operators {
 namespace math {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 /* The storage format of the coldata in the Im2ColFunctor and Col2ImFunctor. */
 enum class ColFormat { kCFO = 0, kOCF = 1 };
...
paddle/fluid/operators/math/vol2col.h
...
@@ -24,7 +24,7 @@ namespace paddle {
 namespace operators {
 namespace math {
-using DataLayout = framework::DataLayout;
+using DataLayout = phi::DataLayout;
 /*
  * \brief Converts the feature data of four dimensions(CDHW) into a colData of
...
paddle/fluid/operators/matmul_op.cc
...
@@ -588,7 +588,7 @@ class MatMulOp : public framework::OperatorWithKernel {
       bool channelwise_onednn =
           context->IsRunMKLDNNKernel() &&
           (platform::MKLDNNDeviceContext::tls().get_cur_paddle_data_layout() ==
-           framework::DataLayout::kNHWC);
+           phi::DataLayout::kNHWC);
       if (channelwise_onednn) {
         std::swap(dim_x, dim_y);
       }
...
@@ -715,15 +715,13 @@ class MatMulOp : public framework::OperatorWithKernel {
     // When matmul is first oneDNN op in a chain (there was some non oneDNN op
     // previously)
     // then we also need to rotate shape NHWC -> NCWH
-    if ((expected_kernel_type.data_layout_ ==
-         framework::DataLayout::kMKLDNN) &&
-        (tensor.layout() != framework::DataLayout::kMKLDNN) &&
+    if ((expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+        (tensor.layout() != phi::DataLayout::kMKLDNN) &&
         paddle::platform::MKLDNNDeviceContext::tls()
-                .get_cur_paddle_data_layout() ==
-            framework::DataLayout::kNHWC) {
+                .get_cur_paddle_data_layout() == phi::DataLayout::kNHWC) {
       return framework::OpKernelType(expected_kernel_type.data_type_,
                                      tensor.place(),
-                                     framework::DataLayout::kNHWC);
+                                     phi::DataLayout::kNHWC);
     }
 #endif
     return framework::OpKernelType(
...
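The two matmul hunks show the converted enum in the oneDNN layout-tracking path: the @@ -588 hunk swaps the operand shapes when the thread-local "current paddle data layout" is NHWC, and the @@ -715 hunk applies the matching NHWC rotation to the kernel type when matmul is the first oneDNN op in a chain. Condensed from the @@ -588 hunk (all calls as they appear in the diff):

// Rotate shapes only when the current paddle data layout reported by the
// oneDNN device context's thread-local state is NHWC.
bool channelwise_onednn =
    context->IsRunMKLDNNKernel() &&
    (platform::MKLDNNDeviceContext::tls().get_cur_paddle_data_layout() ==
     phi::DataLayout::kNHWC);
if (channelwise_onednn) {
  std::swap(dim_x, dim_y);
}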
paddle/fluid/operators/matmul_v2_op.cc
...
@@ -152,15 +152,13 @@ class MatMulV2Op : public framework::OperatorWithKernel {
 #ifdef PADDLE_WITH_MKLDNN
     // When matmul_v2 is first oneDNN op in a chain (there was some non oneDNN
     // op previously) then we also need to rotate shape NHWC -> NCWH
-    if ((expected_kernel_type.data_layout_ ==
-         framework::DataLayout::kMKLDNN) &&
-        (tensor.layout() != framework::DataLayout::kMKLDNN) &&
+    if ((expected_kernel_type.data_layout_ == phi::DataLayout::kMKLDNN) &&
+        (tensor.layout() != phi::DataLayout::kMKLDNN) &&
         paddle::platform::MKLDNNDeviceContext::tls()
-                .get_cur_paddle_data_layout() ==
-            framework::DataLayout::kNHWC) {
+                .get_cur_paddle_data_layout() == phi::DataLayout::kNHWC) {
       return framework::OpKernelType(expected_kernel_type.data_type_,
                                      tensor.place(),
-                                     framework::DataLayout::kNHWC);
+                                     phi::DataLayout::kNHWC);
     }
 #endif
     return framework::OpKernelType(
...
paddle/fluid/operators/matrix_rank_op.cc
...
@@ -87,7 +87,7 @@ class MatrixRankOp : public framework::OperatorWithKernel {
   framework::OpKernelType GetExpectedKernelType(
       const framework::ExecutionContext& ctx) const override {
     framework::LibraryType library{framework::LibraryType::kPlain};
-    framework::DataLayout layout = framework::DataLayout::kAnyLayout;
+    phi::DataLayout layout = phi::DataLayout::kAnyLayout;
     auto data_type = OperatorWithKernel::IndicateVarDataType(ctx, "X");
     return framework::OpKernelType(data_type, ctx.GetPlace(), layout, library);
   }
...
paddle/fluid/operators/mkldnn/activation_mkldnn_op.cc
...
@@ -26,7 +26,7 @@ namespace operators {
 using dnnl::memory;
 using dnnl::primitive;
 using dnnl::stream;
-using framework::DataLayout;
+using phi::DataLayout;
 using platform::GetMKLDNNFormat;
 using platform::MKLDNNDeviceContext;
...
paddle/fluid/operators/mkldnn/conv_mkldnn_op.cc
浏览文件 @
ec749398
...
@@ -96,18 +96,18 @@ class ConvMKLDNNHandlerT
...
@@ -96,18 +96,18 @@ class ConvMKLDNNHandlerT
     if (unlikely(!this->isCached())) {
       PADDLE_ENFORCE_EQ(
           input->layout(),
-          framework::DataLayout::kMKLDNN,
+          phi::DataLayout::kMKLDNN,
           platform::errors::InvalidArgument(
               "The input tensor's layout should be %d, but got %d.",
-              framework::DataLayout::kMKLDNN,
+              phi::DataLayout::kMKLDNN,
               input->layout()));
       PADDLE_ENFORCE_EQ(
           filter->layout(),
-          framework::DataLayout::kMKLDNN,
+          phi::DataLayout::kMKLDNN,
           platform::errors::InvalidArgument(
               "The Filter tensor's layout should be %d, but got %d.",
-              framework::DataLayout::kMKLDNN,
+              phi::DataLayout::kMKLDNN,
               filter->layout()));
       PADDLE_ENFORCE_GE(
...
@@ -143,10 +143,10 @@ class ConvMKLDNNHandlerT
     if (bias) {
       PADDLE_ENFORCE_EQ(
           bias->layout(),
-          framework::DataLayout::kMKLDNN,
+          phi::DataLayout::kMKLDNN,
           platform::errors::InvalidArgument(
               "The Bias tensor's layout should be %d, but got %d.",
-              framework::DataLayout::kMKLDNN,
+              phi::DataLayout::kMKLDNN,
               bias->layout()));
       PADDLE_ENFORCE_EQ(bias->dims().size(),
...
@@ -293,26 +293,26 @@ class ConvMKLDNNHandlerT
     if (unlikely(!this->isBwdCached())) {
       PADDLE_ENFORCE_EQ(
           in->layout(),
-          framework::DataLayout::kMKLDNN,
+          phi::DataLayout::kMKLDNN,
           platform::errors::InvalidArgument(
               "The input tensor's layout should be %d, but got %d.",
-              framework::DataLayout::kMKLDNN,
+              phi::DataLayout::kMKLDNN,
               in->layout()));
       PADDLE_ENFORCE_EQ(
           filter->layout(),
-          framework::DataLayout::kMKLDNN,
+          phi::DataLayout::kMKLDNN,
           platform::errors::InvalidArgument(
               "The filter tensor's layout should be %d, but got %d.",
-              framework::DataLayout::kMKLDNN,
+              phi::DataLayout::kMKLDNN,
               filter->layout()));
       PADDLE_ENFORCE_EQ(
           out_grad->layout(),
-          framework::DataLayout::kMKLDNN,
+          phi::DataLayout::kMKLDNN,
           platform::errors::InvalidArgument(
               "The output_grad tensor's layout should be %d, but got %d.",
-              framework::DataLayout::kMKLDNN,
+              phi::DataLayout::kMKLDNN,
               out_grad->layout()));
       PADDLE_ENFORCE_EQ(
...
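The conv handler repeats the same layout assertion for input, filter, bias, and out_grad; the rename only touches the enum spelling inside PADDLE_ENFORCE_EQ. A plain-C++ sketch of what each of those checks does, with a thrown exception standing in for PADDLE_ENFORCE_EQ (check_layout and to_string are invented helpers, not Paddle APIs):

// Hedged sketch: one layout check from the handler, without the Paddle
// error machinery. check_layout stands in for PADDLE_ENFORCE_EQ.
#include <sstream>
#include <stdexcept>
#include <string>

enum class DataLayout { kNCHW, kMKLDNN };

std::string to_string(DataLayout l) {
  return l == DataLayout::kMKLDNN ? "kMKLDNN" : "kNCHW";
}

// Throws with a message shaped like the InvalidArgument errors above.
void check_layout(const std::string& name, DataLayout got, DataLayout want) {
  if (got != want) {
    std::ostringstream msg;
    msg << "The " << name << " tensor's layout should be " << to_string(want)
        << ", but got " << to_string(got) << ".";
    throw std::invalid_argument(msg.str());
  }
}

int main() {
  check_layout("input", DataLayout::kMKLDNN, DataLayout::kMKLDNN);   // passes
  // check_layout("filter", DataLayout::kNCHW, DataLayout::kMKLDNN); // would throw
}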
paddle/fluid/operators/mkldnn/conv_transpose_mkldnn_op.cc
...
@@ -22,7 +22,7 @@ namespace paddle {
 namespace operators {

 using Tensor = phi::DenseTensor;
-using framework::DataLayout;
+using phi::DataLayout;

 inline dnnl::memory::dims GetWeightsTz(const phi::DenseTensor* filter,
                                        const int groups) {
...
The diffs for the following files are collapsed on the page:

paddle/fluid/operators/mkldnn/dequantize_mkldnn_op.cc
paddle/fluid/operators/mkldnn/fc_mkldnn_op.cc
paddle/fluid/operators/mkldnn/interpolate_mkldnn_op.cc
paddle/fluid/operators/mkldnn/lrn_mkldnn_op.cc
paddle/fluid/operators/mkldnn/matmul_v2_mkldnn_op.cc
paddle/fluid/operators/mkldnn/mul_mkldnn_op.cc
paddle/fluid/operators/mkldnn/pool_mkldnn_op.cc
paddle/fluid/operators/mkldnn/quantize_mkldnn_op.cc
paddle/fluid/operators/mkldnn/reshape_mkldnn_op.cc
paddle/fluid/operators/mkldnn/transpose_mkldnn_op.cc
paddle/fluid/operators/mlu/mlu_baseop.h
paddle/fluid/operators/mode_op.cc
paddle/fluid/operators/mul_op.cc
paddle/fluid/operators/norm_utils.cu.h
paddle/fluid/operators/norm_utils.h
paddle/fluid/operators/optimizers/sgd_op.cc
paddle/fluid/operators/pad2d_op.cc
paddle/fluid/operators/pad3d_op.cc
paddle/fluid/operators/pool_op.cc
paddle/fluid/operators/prelu_op.cc
paddle/fluid/operators/quantize_op.cc
paddle/fluid/operators/reduce_ops/reduce_op.h
paddle/fluid/operators/requantize_op.cc
paddle/fluid/operators/roi_align_op_mlu.cc
paddle/fluid/operators/sequence_ops/sequence_softmax_op.cc
paddle/fluid/operators/slice_op.cc
paddle/fluid/operators/softmax_op.cc
paddle/fluid/operators/split_op.cc
paddle/fluid/operators/squeeze_op.cc
paddle/fluid/operators/sum_op.cc
paddle/fluid/operators/sync_batch_norm_op_mlu.cc
paddle/fluid/operators/sync_batch_norm_op_npu.cc
paddle/fluid/operators/temporal_shift_op.cu
paddle/fluid/operators/temporal_shift_op.h
paddle/fluid/operators/tensor_formatter.cc
paddle/fluid/operators/top_k_op.cc
paddle/fluid/operators/top_k_v2_op.cc
paddle/fluid/operators/transfer_layout_op.h
paddle/fluid/operators/transpose_op.cc
paddle/fluid/operators/truncated_gaussian_random_op.cc
paddle/fluid/operators/warpctc_op.cc
paddle/fluid/platform/device/npu/npu_op_runner.h
paddle/fluid/platform/mkldnn_helper.h
paddle/fluid/platform/mkldnn_reuse.h
paddle/fluid/pybind/eager_properties.cc
paddle/fluid/pybind/imperative.cc
paddle/fluid/pybind/tensor.cc
paddle/infrt/host_context/value.h
paddle/phi/api/include/tensor.h
paddle/phi/capi/include/type_utils.h
paddle/phi/common/layout.h
paddle/phi/core/dense_tensor.inl
paddle/phi/core/dense_tensor_impl.cc
paddle/phi/core/kernel_factory.h
paddle/phi/core/tensor_meta.h
paddle/phi/infermeta/binary.cc
paddle/phi/infermeta/multiary.cc
paddle/phi/infermeta/ternary.cc
paddle/phi/kernels/cpu/batch_norm_grad_kernel.cc
paddle/phi/kernels/cpu/batch_norm_kernel.cc
paddle/phi/kernels/cpu/group_norm_grad_kernel.cc
paddle/phi/kernels/cpu/group_norm_kernel.cc
paddle/phi/kernels/cpu/interpolate_grad_kernel.cc
paddle/phi/kernels/cpu/interpolate_kernel.cc
paddle/phi/kernels/cpu/temporal_shift_grad_kernel.cc
paddle/phi/kernels/cpu/temporal_shift_kernel.cc
paddle/phi/kernels/funcs/data_layout_transform.h
paddle/phi/kernels/gpu/batch_norm_grad_kernel.cu
paddle/phi/kernels/gpu/batch_norm_kernel.cu
paddle/phi/kernels/gpu/conv_transpose_grad_kernel.cu
paddle/phi/kernels/gpu/conv_transpose_kernel.cu
paddle/phi/kernels/gpu/depthwise_conv.h
paddle/phi/kernels/gpu/depthwise_conv_grad_kernel.cu
paddle/phi/kernels/gpu/depthwise_conv_kernel.cu
paddle/phi/kernels/gpu/group_norm_grad_kernel.cu
paddle/phi/kernels/gpu/group_norm_kernel.cu
paddle/phi/kernels/gpu/interpolate_grad_kernel.cu
paddle/phi/kernels/gpu/interpolate_kernel.cu
paddle/phi/kernels/gpu/sync_batch_norm_kernel.cu
paddle/phi/kernels/gpu/sync_batch_norm_utils.h
paddle/phi/kernels/gpu/temporal_shift_grad_kernel.cu
paddle/phi/kernels/gpu/temporal_shift_kernel.cu
paddle/phi/kernels/impl/conv_transpose_grad_kernel_impl.h
paddle/phi/kernels/impl/conv_transpose_kernel_impl.h
paddle/phi/kernels/xpu/batch_norm_grad_kernel.cc
paddle/phi/kernels/xpu/batch_norm_kernel.cc
paddle/phi/kernels/xpu/grid_sample_kernel.cc
paddle/phi/kernels/xpu/interpolate_grad_kernel.cc
paddle/phi/kernels/xpu/interpolate_kernel.cc
paddle/phi/kernels/xpu/temporal_shift_grad_kernel.cc
paddle/phi/kernels/xpu/temporal_shift_kernel.cc
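The breadth of this list, from fluid operators down to phi kernels, shows why the rename had to be mechanical. Renames of this size are normally kept source-compatible by leaving an alias behind in the old namespace; the sketch below illustrates that technique as an assumption about the approach, not a quote of paddle/phi/common/layout.h:

// Hedged sketch, not the actual Paddle header: a type alias in the old
// namespace lets stragglers keep compiling while new code uses phi::.
#include <type_traits>

namespace phi {
enum class DataLayout { UNDEFINED, kAnyLayout, kNHWC, kNCHW, kMKLDNN };
}

namespace paddle {
namespace experimental {
using DataLayout = phi::DataLayout;  // old spelling keeps working (assumed)
}  // namespace experimental
}  // namespace paddle

static_assert(
    std::is_same<paddle::experimental::DataLayout, phi::DataLayout>::value,
    "both spellings name the same enum, so the rename is behavior-preserving");

int main() {
  return paddle::experimental::DataLayout::kNHWC == phi::DataLayout::kNHWC
             ? 0
             : 1;
}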