PaddlePaddle / Paddle

Commit f1a3f4f7 (unverified)
Authored by wanghuancoder on Mar 16, 2023; committed via GitHub on Mar 16, 2023.
Del old dygraph optest4 (#51610)
* delete old dygraph op test
Parent: 66e0720d
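The commit migrates these unit tests from the legacy `op_test.OpTest` to `eager_op_test.OpTest`: imports are switched, `check_eager=...` arguments are dropped, tests that cannot run under dygraph pass `check_dygraph=False`, and operators gain a `python_api` so the harness can call the dygraph API directly. A minimal sketch of what a migrated test looks like (illustrative only, not code from this commit; the `abs` op and the shapes are placeholders, and it assumes the file lives next to `eager_op_test.py` so the import resolves):

    import numpy as np
    import paddle
    from eager_op_test import OpTest  # new test base used throughout this commit


    class TestAbsLikeOp(OpTest):
        def setUp(self):
            self.op_type = "abs"          # C++ operator under test
            self.python_api = paddle.abs  # dygraph API the operator maps to
            x = np.random.random((4, 5)).astype("float64") + 0.5
            self.inputs = {'X': x}
            self.outputs = {'Out': np.abs(x)}

        def test_check_output(self):
            self.check_output()  # no check_eager=... argument any more

        def test_check_grad(self):
            self.check_grad(['X'], 'Out')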
Showing 110 changed files with 886 additions and 729 deletions (+886 -729).
Changed files:

paddle/fluid/eager/auto_code_generator/generator/codegen_utils.py  +6 -0
paddle/fluid/eager/grad_node_info.cc  +1 -0
paddle/fluid/pybind/eager_generator.h  +3 -0
python/paddle/fluid/tests/unittests/eager_op_test.py  +58 -98
python/paddle/fluid/tests/unittests/test_activation_op.py  +3 -1
python/paddle/fluid/tests/unittests/test_affine_channel_op.py  +11 -4
python/paddle/fluid/tests/unittests/test_bilinear_interp_op.py  +5 -17
python/paddle/fluid/tests/unittests/test_box_coder_op.py  +48 -5
python/paddle/fluid/tests/unittests/test_bpr_loss_op.py  +5 -3
python/paddle/fluid/tests/unittests/test_broadcast_error.py  +1 -1
python/paddle/fluid/tests/unittests/test_broadcast_tensors_op.py  +2 -4
python/paddle/fluid/tests/unittests/test_center_loss.py  +3 -3
python/paddle/fluid/tests/unittests/test_class_center_sample_op.py  +40 -38
python/paddle/fluid/tests/unittests/test_clip_by_norm_op.py  +3 -5
python/paddle/fluid/tests/unittests/test_clip_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_coalesce_tensor_op.py  +9 -3
python/paddle/fluid/tests/unittests/test_collect_fpn_proposals_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_compare_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_compare_reduce_op.py  +4 -4
python/paddle/fluid/tests/unittests/test_complex_abs.py  +4 -7
python/paddle/fluid/tests/unittests/test_complex_view_op.py  +3 -5
python/paddle/fluid/tests/unittests/test_concat_op.py  +12 -12
python/paddle/fluid/tests/unittests/test_conj_op.py  +2 -3
python/paddle/fluid/tests/unittests/test_conv2d_transpose_op.py  +47 -1
python/paddle/fluid/tests/unittests/test_conv3d_op.py  +206 -178
python/paddle/fluid/tests/unittests/test_conv3d_transpose_op.py  +57 -16
python/paddle/fluid/tests/unittests/test_conv_shift_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_cos_sim_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_crf_decoding_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_crop_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_crop_tensor_op.py  +5 -5
python/paddle/fluid/tests/unittests/test_cross_entropy2_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_cross_entropy_op.py  +13 -12
python/paddle/fluid/tests/unittests/test_cross_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_cumprod_op.py  +3 -4
python/paddle/fluid/tests/unittests/test_cvm_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_decayed_adagrad_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_deformable_conv_op.py  +2 -3
python/paddle/fluid/tests/unittests/test_deformable_conv_v1_op.py  +2 -4
python/paddle/fluid/tests/unittests/test_density_prior_box_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_dequantize_abs_max_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_dequantize_log_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_determinant_op.py  +5 -7
python/paddle/fluid/tests/unittests/test_diag.py  +1 -1
python/paddle/fluid/tests/unittests/test_diag_embed.py  +23 -22
python/paddle/fluid/tests/unittests/test_diag_v2.py  +3 -3
python/paddle/fluid/tests/unittests/test_diagonal_op.py  +3 -4
python/paddle/fluid/tests/unittests/test_digamma_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_dist_op.py  +2 -3
python/paddle/fluid/tests/unittests/test_distribute_fpn_proposals_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_dpsgd_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_dropout_op.py  +34 -1
python/paddle/fluid/tests/unittests/test_edit_distance_op.py  +4 -4
python/paddle/fluid/tests/unittests/test_eig_op.py  +2 -1
python/paddle/fluid/tests/unittests/test_eigh_op.py  +4 -3
python/paddle/fluid/tests/unittests/test_eigvals_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_eigvalsh_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_elementwise_floordiv_op.py  +8 -7
python/paddle/fluid/tests/unittests/test_elementwise_heaviside_op.py  +5 -6
python/paddle/fluid/tests/unittests/test_elementwise_mod_op.py  +7 -7
python/paddle/fluid/tests/unittests/test_empty_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_erfinv_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_executor_return_tensor_not_overwriting.py  +4 -1
python/paddle/fluid/tests/unittests/test_expand_as_v2_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_eye_op.py  +4 -4
python/paddle/fluid/tests/unittests/test_fake_dequantize_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fc_op.py  +40 -31
python/paddle/fluid/tests/unittests/test_fill_constant_batch_size_like.py  +2 -2
python/paddle/fluid/tests/unittests/test_fill_diagonal_tensor_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_fill_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fill_zeros_like2_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fill_zeros_like_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_filter_by_instag_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_flatten2_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_flatten_contiguous_range_op.py  +3 -5
python/paddle/fluid/tests/unittests/test_flatten_op.py  +23 -22
python/paddle/fluid/tests/unittests/test_flip.py  +3 -3
python/paddle/fluid/tests/unittests/test_fmax_op.py  +7 -11
python/paddle/fluid/tests/unittests/test_fmin_op.py  +7 -11
python/paddle/fluid/tests/unittests/test_fold_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_frame_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_fsp_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_ftrl_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_full_like_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_fused_adam_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_fused_attention_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fused_bias_dropout_residual_layer_norm_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fused_ec_moe_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fused_elemwise_activation_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fused_emb_seq_pool_op.py  +25 -24
python/paddle/fluid/tests/unittests/test_fused_embedding_fc_lstm_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fused_fc_elementwise_layernorm_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_fused_feedforward_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fused_gate_attention_op.py  +5 -1
python/paddle/fluid/tests/unittests/test_fused_gemm_epilogue_grad_op.py  +13 -5
python/paddle/fluid/tests/unittests/test_fused_gemm_epilogue_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fused_multi_transformer_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fused_multihead_matmul_op.py  +3 -3
python/paddle/fluid/tests/unittests/test_fused_token_prune_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fusion_gru_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fusion_lstm_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fusion_repeated_fc_relu_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fusion_seqconv_eltadd_relu_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fusion_seqexpand_concat_fc_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fusion_seqpool_concat_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fusion_seqpool_cvm_concat_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fusion_squared_mat_sub_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_fusion_transpose_flatten_concat_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_nearest_interp_op.py  +5 -19
python/paddle/vision/ops.py  +2 -1
paddle/fluid/eager/auto_code_generator/generator/codegen_utils.py

@@ -62,6 +62,12 @@ ops_to_fill_zero_for_empty_grads = set(
         "concat_double_grad",
         "expand_grad",
         "argsort_grad",
         "eigh_grad",
+        "add_grad",
+        "subtract_grad",
+        "multiply_grad",
+        "divide_grad",
+        "matmul_grad",
     ]
 )
paddle/fluid/eager/grad_node_info.cc

@@ -474,6 +474,7 @@ void GradNodeBase::HandleComplexGradToRealGrad(
       const paddle::Tensor& grad = slot_out_grads[rank_id];
       if (paddle::framework::IsComplexType(fwd_data_type)) continue;
+      if (!grad.impl()) continue;
       // Only Handle Complex To Real for DenseTensor for now
       if (phi::DenseTensor::classof(grad.impl().get())) {
paddle/fluid/pybind/eager_generator.h

@@ -27,7 +27,10 @@
 // functions. While, for very few OPs, the dispensable inputs are used, we
 // need to manually specify them in this map.
 std::map<std::string, std::set<std::string>> op_ins_map = {
     {"fc", {"Input", "W", "Bias"}},
     {"layer_norm", {"X", "Scale", "Bias"}},
     {"conv2d_fusion_cutlass", {"Input", "Filter", "Bias", "ResidualData"}},
     {"conv2d_fusion", {"Input", "Filter", "Bias", "ResidualData"}},
     {"bincount", {"X", "Weights"}},
     {"fused_attention", {"X",
python/paddle/fluid/tests/unittests/eager_op_test.py

@@ -34,7 +34,6 @@ from paddle.fluid.framework import (
     OpProtoHolder,
     Program,
     _current_expected_place,
-    in_dygraph_mode,
 )
 from paddle.fluid.op import Operator

@@ -914,7 +913,14 @@ class OpTest(unittest.TestCase):
         """
         return cal_python_api(self.python_api, args, kernel_sig)

-    def _calc_dygraph_output(self, place, parallel=False, no_check_set=None):
+    def _calc_dygraph_output(
+        self,
+        place,
+        parallel=False,
+        no_check_set=None,
+        egr_inps=None,
+        egr_oups=None,
+    ):
         self.__class__.op_type = (
             self.op_type
         )  # for ci check, please not delete it for now

@@ -924,12 +930,20 @@ class OpTest(unittest.TestCase):
             op_proto = OpProtoHolder.instance().get_op_proto(self.op_type)

             # prepare input variable
-            inputs = self.append_input_output_for_dygraph(
-                op_proto, self.inputs, True, False, block
-            )
+            inputs = (
+                egr_inps
+                if egr_inps
+                else self.append_input_output_for_dygraph(
+                    op_proto, self.inputs, True, False, block
+                )
+            )
             # prepare output variable
-            outputs = self.append_input_output_for_dygraph(
-                op_proto, self.outputs, False, False, block
-            )
+            outputs = (
+                egr_oups
+                if egr_oups
+                else self.append_input_output_for_dygraph(
+                    op_proto, self.outputs, False, False, block
+                )
+            )
             # prepare attributes

@@ -2279,35 +2293,38 @@ class OpTest(unittest.TestCase):
             )
             if dygraph_outputs is None:
                 # missing KernelSignature, fall back to eager middle output.
-                dygraph_outs = self._calc_dygraph_output(place)
-            # if outputs is None, kernel sig is empty or other error is happens.
-            if not check_dygraph or dygraph_outputs is None:
-                block.append_op(
-                    type=self.op_type,
-                    inputs=inputs,
-                    outputs=outputs,
-                    attrs=attrs_outputs if hasattr(self, "attrs") else None,
-                )
-            else:
-                outputs = dygraph_outputs
+                dygraph_outputs = self._calc_dygraph_output(
+                    place, egr_inps=inputs, egr_oups=outputs
+                )
+            outputs = dygraph_outputs

             if self.dtype == np.uint16:
                 cast_inputs = self._find_var_in_dygraph(
                     outputs, output_names[0]
                 )
-                cast_outputs = block.create_var(
-                    dtype="float32", shape=cast_inputs[0].shape
-                )
-                cast_op = block.append_op(
-                    inputs={"X": cast_inputs},
-                    outputs={"Out": cast_outputs},
-                    type="cast",
-                    attrs={
-                        "in_dtype": core.VarDesc.VarType.BF16,
-                        "out_dtype": core.VarDesc.VarType.FP32,
-                    },
-                )
+                if isinstance(cast_inputs, paddle.Tensor):
+                    cast_outputs = paddle.cast(
+                        cast_inputs, core.VarDesc.VarType.FP32
+                    )
+                elif isinstance(cast_inputs, list):
+                    cast_outputs = []
+                    for cast_input in cast_inputs:
+                        if isinstance(cast_input, paddle.Tensor):
+                            cast_outputs.append(
+                                paddle.cast(
+                                    cast_input, core.VarDesc.VarType.FP32
+                                )
+                            )
+                        else:
+                            raise TypeError(
+                                "Unsupported test data type %s."
+                                % type(cast_input)
+                            )
+                else:
+                    raise TypeError(
+                        "Unsupported test data type %s." % type(cast_inputs)
+                    )
                 outputs = {output_names[0]: cast_outputs}
             outputs_valid = {}

@@ -2318,61 +2335,16 @@ class OpTest(unittest.TestCase):
             if user_defined_grad_outputs is None:
                 if len(outputs_valid) == 1:
-                    loss = block.create_var(
-                        dtype=self.dtype,
-                        type=core.VarDesc.VarType.LOD_TENSOR,
-                        persistable=False,
-                        stop_gradient=False,
-                        shape=[1],
-                    )
                     for outputs_valid_key in outputs_valid:
-                        block.append_op(
-                            type="mean",
-                            inputs={"X": outputs_valid[outputs_valid_key]},
-                            outputs={"Out": [loss]},
-                            attrs=None,
-                        )
+                        loss = paddle.mean(outputs_valid[outputs_valid_key][0])
                 else:
                     avg_sum = []
                     for cur_loss in outputs_valid:
-                        cur_avg_loss = block.create_var(
-                            dtype=self.dtype,
-                            type=core.VarDesc.VarType.LOD_TENSOR,
-                            persistable=False,
-                            stop_gradient=False,
-                        )
-                        block.append_op(
-                            type="mean",
-                            inputs={"X": outputs_valid[cur_loss]},
-                            outputs={"Out": [cur_avg_loss]},
-                            attrs=None,
-                        )
+                        cur_avg_loss = paddle.mean(outputs_valid[cur_loss][0])
                         avg_sum.append(cur_avg_loss)
-                    loss_sum = block.create_var(
-                        dtype=self.dtype,
-                        type=core.VarDesc.VarType.LOD_TENSOR,
-                        persistable=False,
-                        stop_gradient=False,
-                        shape=[1],
-                    )
-                    block.append_op(
-                        type='sum',
-                        inputs={"X": avg_sum},
-                        outputs={"Out": loss_sum},
-                        attrs=None,
-                    )
-                    loss = block.create_var(
-                        dtype=self.dtype,
-                        type=core.VarDesc.VarType.LOD_TENSOR,
-                        persistable=False,
-                        stop_gradient=False,
-                        shape=[1],
-                    )
-                    block.append_op(
-                        type='scale',
-                        inputs={"X": loss_sum},
-                        outputs={"Out": loss},
-                        attrs={'scale': 1.0 / float(len(avg_sum))},
-                    )
+                    loss_sum = paddle.add_n(avg_sum)
+                    loss = paddle.scale(
+                        loss_sum, scale=1.0 / float(len(avg_sum))
+                    )
                 loss.backward()

@@ -2392,24 +2364,12 @@ class OpTest(unittest.TestCase):
             for no_grad_val in no_grad_set:
                 del inputs[no_grad_val]

-            if in_dygraph_mode():
-                core.eager.run_backward(
-                    paddle.utils.flatten(outputs),
-                    grad_outputs,
-                    False,
-                )
-                grad_inputs = []
-                for inputs_list in inputs.values():
-                    for inp in inputs_list:
-                        grad_inputs.append(inp.grad.numpy())
-                return grad_inputs
-            else:
-                grad_inputs = paddle.grad(
-                    outputs=paddle.utils.flatten(outputs),
-                    inputs=paddle.utils.flatten(inputs),
-                    grad_outputs=grad_outputs,
-                )
-                return [grad.numpy() for grad in grad_inputs]
+            grad_inputs = paddle.grad(
+                outputs=paddle.utils.flatten(outputs),
+                inputs=paddle.utils.flatten(inputs),
+                grad_outputs=grad_outputs,
+            )
+            return [grad.numpy() for grad in grad_inputs]

     @staticmethod
     def _numpy_to_lod_tensor(np_value, lod, place):
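The rewritten gradient helper above now builds its scalar loss and its gradients with public dygraph APIs instead of manually appending `mean`/`sum`/`scale` ops to a block. A standalone sketch of the same call pattern (not part of the commit; the tensors are arbitrary):

    import paddle

    x = paddle.rand([3, 4])
    x.stop_gradient = False
    y = paddle.rand([3, 4])
    y.stop_gradient = False

    # Reduce each output to a scalar, combine, and rescale, mirroring the
    # updated OpTest code path.
    outputs = [x * 2.0, x + y]
    avg_sum = [paddle.mean(o) for o in outputs]
    loss_sum = paddle.add_n(avg_sum)
    loss = paddle.scale(loss_sum, scale=1.0 / float(len(avg_sum)))

    # Gradients come from paddle.grad rather than core.eager.run_backward.
    grad_inputs = paddle.grad(
        outputs=paddle.utils.flatten(loss),
        inputs=paddle.utils.flatten([x, y]),
    )
    print([g.shape for g in grad_inputs])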
python/paddle/fluid/tests/unittests/test_activation_op.py

@@ -2357,7 +2357,9 @@ class TestSoftRelu(TestActivation):
     def test_check_grad(self):
         if self.dtype == np.float16:
             return
-        self.check_grad(['X'], 'Out', max_relative_error=0.02)
+        self.check_grad(
+            ['X'], 'Out', max_relative_error=0.02, check_dygraph=False
+        )


 def elu(x, alpha):
python/paddle/fluid/tests/unittests/test_affine_channel_op.py

@@ -48,16 +48,23 @@ class TestAffineChannelOp(OpTest):
         self.outputs = {'Out': y}

     def test_check_output(self):
-        self.check_output()
+        self.check_output(check_dygraph=False)

     def test_check_grad(self):
-        self.check_grad(['X', 'Scale', 'Bias'], 'Out')
+        self.check_grad(['X', 'Scale', 'Bias'], 'Out', check_dygraph=False)

     def test_check_grad_stopgrad_dx(self):
-        self.check_grad(['Scale', 'Bias'], 'Out', no_grad_set=set('X'))
+        self.check_grad(
+            ['Scale', 'Bias'], 'Out', no_grad_set=set('X'), check_dygraph=False
+        )

     def test_check_grad_stopgrad_dscale_dbias(self):
-        self.check_grad(['X'], 'Out', no_grad_set=set(['Scale', 'Bias']))
+        self.check_grad(
+            ['X'],
+            'Out',
+            no_grad_set=set(['Scale', 'Bias']),
+            check_dygraph=False,
+        )

     def init_test_case(self):
         self.shape = [2, 100, 3, 3]
python/paddle/fluid/tests/unittests/test_bilinear_interp_op.py

@@ -109,7 +109,6 @@ class TestBilinearInterpOp(OpTest):
         self.op_type = "bilinear_interp"
         # NOTE(dev): some AsDispensible input is not used under imperative mode.
         # Skip check_dygraph while found them in Inputs.
-        self.check_dygraph = True
         input_np = np.random.random(self.input_shape).astype("float64")

         if self.data_layout == "NCHW":

@@ -139,10 +138,8 @@ class TestBilinearInterpOp(OpTest):
         self.inputs = {'X': input_np}
         if self.out_size is not None:
             self.inputs['OutSize'] = self.out_size
-            self.check_dygraph = False
         if self.actual_shape is not None:
             self.inputs['OutSize'] = self.actual_shape
-            self.check_dygraph = False

         self.attrs = {
             'out_h': self.out_h,

@@ -156,12 +153,10 @@ class TestBilinearInterpOp(OpTest):
         self.outputs = {'Out': output_np}

     def test_check_output(self):
-        self.check_output(check_dygraph=self.check_dygraph)
+        self.check_output(check_dygraph=False)

     def test_check_grad(self):
-        self.check_grad(
-            ['X'], 'Out', in_place=True, check_dygraph=self.check_dygraph
-        )
+        self.check_grad(['X'], 'Out', in_place=True, check_dygraph=False)

     def init_test_case(self):
         self.interp_method = 'bilinear'

@@ -285,7 +280,6 @@ class TestBilinearInterpOpUint8(OpTest):
         self.actual_shape = None
         self.init_test_case()
         self.op_type = "bilinear_interp"
-        self.check_dygraph = True
         input_np = np.random.randint(
             low=0, high=256, size=self.input_shape
         ).astype("uint8")

@@ -309,7 +303,6 @@ class TestBilinearInterpOpUint8(OpTest):
         self.inputs = {'X': input_np}
         if self.out_size is not None:
             self.inputs['OutSize'] = self.out_size
-            self.check_dygraph = False

         self.attrs = {
             'out_h': self.out_h,

@@ -323,7 +316,7 @@ class TestBilinearInterpOpUint8(OpTest):
     def test_check_output(self):
         self.check_output_with_place(
-            place=core.CPUPlace(), atol=1, check_dygraph=self.check_dygraph
+            place=core.CPUPlace(), atol=1, check_dygraph=False
         )

     def init_test_case(self):

@@ -427,7 +420,6 @@ class TestBilinearInterpOp_attr_tensor(OpTest):
         self.actual_shape = None
         self.init_test_case()
         self.op_type = "bilinear_interp"
-        self.check_dygraph = True
         self.shape_by_1Dtensor = False
         self.scale_by_1Dtensor = False
         self.attrs = {

@@ -450,7 +442,6 @@ class TestBilinearInterpOp_attr_tensor(OpTest):
         if self.shape_by_1Dtensor:
             self.inputs['OutSize'] = self.out_size
-            self.check_dygraph = False
         elif self.out_size is not None:
             size_tensor = []
             for index, ele in enumerate(self.out_size):

@@ -458,7 +449,6 @@ class TestBilinearInterpOp_attr_tensor(OpTest):
                     ("x" + str(index), np.ones((1)).astype('int32') * ele)
                 )
             self.inputs['SizeTensor'] = size_tensor
-            self.check_dygraph = False

         self.attrs['out_h'] = self.out_h
         self.attrs['out_w'] = self.out_w

@@ -473,12 +463,10 @@ class TestBilinearInterpOp_attr_tensor(OpTest):
         self.outputs = {'Out': output_np}

     def test_check_output(self):
-        self.check_output(check_dygraph=self.check_dygraph)
+        self.check_output(check_dygraph=False)

     def test_check_grad(self):
-        self.check_grad(
-            ['X'], 'Out', in_place=True, check_dygraph=self.check_dygraph
-        )
+        self.check_grad(['X'], 'Out', in_place=True, check_dygraph=False)

     def init_test_case(self):
         self.interp_method = 'bilinear'
python/paddle/fluid/tests/unittests/test_box_coder_op.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle

@@ -109,7 +109,7 @@ def batch_box_coder(p_box, pb_v, t_box, lod, code_type, norm, axis=0):
 class TestBoxCoderOp(OpTest):
     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def setUp(self):
         self.op_type = "box_coder"

@@ -142,7 +142,7 @@ class TestBoxCoderOp(OpTest):
 class TestBoxCoderOpWithoutBoxVar(OpTest):
     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def setUp(self):
         self.python_api = paddle.vision.ops.box_coder

@@ -176,7 +176,7 @@ class TestBoxCoderOpWithoutBoxVar(OpTest):
 class TestBoxCoderOpWithLoD(OpTest):
     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def setUp(self):
         self.python_api = paddle.vision.ops.box_coder

@@ -207,7 +207,7 @@ class TestBoxCoderOpWithLoD(OpTest):
 class TestBoxCoderOpWithAxis(OpTest):
     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def setUp(self):
         self.python_api = paddle.vision.ops.box_coder

@@ -242,12 +242,55 @@ class TestBoxCoderOpWithAxis(OpTest):
         self.outputs = {'OutputBox': output_box}


+def wrapper_box_coder(
+    prior_box,
+    prior_box_var=None,
+    target_box=None,
+    code_type="encode_center_size",
+    box_normalized=True,
+    axis=0,
+    variance=[],
+):
+    if isinstance(prior_box_var, paddle.Tensor):
+        output_box = paddle._C_ops.box_coder(
+            prior_box,
+            prior_box_var,
+            target_box,
+            code_type,
+            box_normalized,
+            axis,
+            [],
+        )
+    elif isinstance(prior_box_var, list):
+        output_box = paddle._C_ops.box_coder(
+            prior_box,
+            None,
+            target_box,
+            code_type,
+            box_normalized,
+            axis,
+            prior_box_var,
+        )
+    else:
+        output_box = paddle._C_ops.box_coder(
+            prior_box,
+            None,
+            target_box,
+            code_type,
+            box_normalized,
+            axis,
+            variance,
+        )
+    return output_box
+
+
 class TestBoxCoderOpWithVariance(OpTest):
     def test_check_output(self):
         self.check_output()

     def setUp(self):
         self.op_type = "box_coder"
+        self.python_api = wrapper_box_coder
         lod = [[1, 1, 1, 1, 1]]
         prior_box = np.random.random((30, 4)).astype('float32')
         prior_box_var = np.random.random((4)).astype('float32')
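Most of the box_coder test classes above point `python_api` at the public `paddle.vision.ops.box_coder`, while the variance-as-list case goes through the `wrapper_box_coder` helper. A minimal dygraph sketch of the public API these map onto (the shapes are illustrative, not taken from the test):

    import paddle

    # Encode convention: prior_box is [M, 4], target_box is [N, 4].
    prior_box = paddle.rand([30, 4])
    target_box = paddle.rand([10, 4])

    out = paddle.vision.ops.box_coder(
        prior_box,
        [0.1, 0.1, 0.2, 0.2],  # variance given as a plain Python list
        target_box,
        code_type="encode_center_size",
        box_normalized=False,
    )
    print(out.shape)  # expected [10, 30, 4]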
python/paddle/fluid/tests/unittests/test_bpr_loss_op.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest, randomize_probability
+from eager_op_test import OpTest, randomize_probability

 import paddle

@@ -43,11 +43,13 @@ class TestBprLossOp1(OpTest):
     def test_check_output(self):
         paddle.enable_static()
-        self.check_output()
+        self.check_output(check_dygraph=False)
         paddle.disable_static()

     def test_check_grad(self):
-        self.check_grad(["X"], "Y", numeric_grad_delta=0.001)
+        self.check_grad(
+            ["X"], "Y", numeric_grad_delta=0.001, check_dygraph=False
+        )


 if __name__ == "__main__":
python/paddle/fluid/tests/unittests/test_broadcast_error.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle.fluid.core as core
python/paddle/fluid/tests/unittests/test_broadcast_tensors_op.py

@@ -16,7 +16,7 @@ import random
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid.core as core

@@ -125,7 +125,7 @@ class TestCPUBroadcastTensorsOp(OpTest):
     def test_check_output(self):
         self.run_dual_test(
             self.check_output_with_place,
-            {"place": self.place, "atol": 1e-1, "check_eager": True},
+            {"place": self.place, "atol": 1e-1},
         )

     def test_check_grad_normal(self):

@@ -136,7 +136,6 @@ class TestCPUBroadcastTensorsOp(OpTest):
                 "inputs_to_check": ['x0', 'x1'],
                 "output_names": ['out0', 'out1'],
                 "max_relative_error": 0.05,
-                "check_eager": True,
             },
         )
         self.run_triple_in_test(

@@ -146,7 +145,6 @@ class TestCPUBroadcastTensorsOp(OpTest):
                 "inputs_to_check": ['x0', 'x1', 'x2'],
                 "output_names": ['out0', 'out1', "out2"],
                 "max_relative_error": 0.05,
-                "check_eager": True,
             },
         )
python/paddle/fluid/tests/unittests/test_center_loss.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle

@@ -80,10 +80,10 @@ class TestCenterLossOp(OpTest):
         pass

     def test_check_output(self):
-        self.check_output()
+        self.check_output(check_dygraph=False)

     def test_check_grad(self):
-        self.check_grad(['X'], 'Loss')
+        self.check_grad(['X'], 'Loss', check_dygraph=False)


 class TestCenterLossOpNoUpdate(TestCenterLossOp):
python/paddle/fluid/tests/unittests/test_class_center_sample_op.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest, paddle_static_guard

 import paddle
 import paddle.fluid.core as core

@@ -114,9 +114,7 @@ class TestClassCenterSampleOp(OpTest):
         }

     def test_check_output(self):
-        self.check_output(
-            no_check_set=['SampledLocalClassCenter'], check_eager=True
-        )
+        self.check_output(no_check_set=['SampledLocalClassCenter'])


 class TestClassCenterSampleOpINT32(TestClassCenterSampleOp):

@@ -149,42 +147,46 @@ class TestClassCenterSampleV2(unittest.TestCase):
The hunk wraps the static-graph paths of TestClassCenterSampleV2 in `paddle_static_guard()` and re-indents them; the resulting code reads roughly as follows.

             self.dtype = np.int64

     def test_static(self):
         with paddle_static_guard():
             for place in self.places:
                 self.check_static_result(place=place)

     def check_static_result(self, place):
         with paddle_static_guard():
             with program_guard(Program(), Program()):
                 label_np = np.random.randint(
                     0, self.num_classes, (self.batch_size,), dtype=self.dtype
                 )
                 label = paddle.static.data(
                     name='label', shape=[self.batch_size], dtype=self.dtype
                 )
                 (
                     remapped_label,
                     sampled_class_index,
                 ) = paddle.nn.functional.class_center_sample(
                     label, self.num_classes, self.num_samples
                 )
                 (
                     remapped_label_np,
                     sampled_class_center_np,
                 ) = class_center_sample_numpy(
                     label_np, [self.num_classes], self.num_samples
                 )
                 exe = paddle.fluid.Executor(place)
                 [remapped_label_res, sampled_class_index_res] = exe.run(
                     paddle.fluid.default_main_program(),
                     feed={'label': label_np},
                     fetch_list=[remapped_label, sampled_class_index],
                 )
                 np.testing.assert_allclose(
                     remapped_label_res, remapped_label_np
                 )
                 np.testing.assert_allclose(
                     sampled_class_index_res[: len(sampled_class_center_np[0])],
                     sampled_class_center_np[0],
                 )

     def test_dynamic(self):
         for place in self.places:
python/paddle/fluid/tests/unittests/test_clip_by_norm_op.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid

@@ -45,7 +45,7 @@ class TestClipByNormOp(OpTest):
         self.outputs = {'Out': output}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def initTestCase(self):
         self.shape = (100,)

@@ -81,9 +81,7 @@ class TestClipByNormOpFp16(TestClipByNormOp):
         if core.is_compiled_with_cuda():
             place = core.CUDAPlace(0)
             if core.is_float16_supported(place):
-                self.check_output_with_place(
-                    place, atol=0.001, check_eager=True
-                )
+                self.check_output_with_place(place, atol=0.001)


 class TestClipByNormOpFp16Case1(TestClipByNormOpFp16):
python/paddle/fluid/tests/unittests/test_clip_op.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid

@@ -52,12 +52,12 @@ class TestClipOp(OpTest):
     def test_check_output(self):
         paddle.enable_static()
-        self.check_output(check_eager=True)
+        self.check_output()
         paddle.disable_static()

     def test_check_grad_normal(self):
         paddle.enable_static()
-        self.check_grad(['X'], 'Out', check_eager=True)
+        self.check_grad(['X'], 'Out')
         paddle.disable_static()

     def initTestCase(self):
python/paddle/fluid/tests/unittests/test_coalesce_tensor_op.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid

@@ -160,7 +160,10 @@ class TestAllocContinuousSpace(OpTest):
     def test_check_output(self):
         self.check_output_with_place(
-            place=core.CUDAPlace(0), no_check_set=["FusedOutput"], atol=1e-5
+            place=core.CUDAPlace(0),
+            no_check_set=["FusedOutput"],
+            atol=1e-5,
+            check_dygraph=False,
         )
         self.verify_output(core.CUDAPlace(0))

@@ -180,7 +183,10 @@ class TestAllocContinuousSpace2(TestAllocContinuousSpace):
     def test_check_output(self):
         self.check_output_with_place(
-            place=core.CUDAPlace(0), no_check_set=["FusedOutput"], atol=1e-5
+            place=core.CUDAPlace(0),
+            no_check_set=["FusedOutput"],
+            atol=1e-5,
+            check_dygraph=False,
        )
         self.verify_output(core.CUDAPlace(0))
python/paddle/fluid/tests/unittests/test_collect_fpn_proposals_op.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 class TestCollectFPNProposalstOp(OpTest):
python/paddle/fluid/tests/unittests/test_compare_op.py

@@ -36,7 +36,7 @@ def create_test_class(op_type, typename, callback):
             self.op_type = op_type

         def test_output(self):
-            self.check_output(check_eager=False)
+            self.check_output()

         def test_errors(self):
             paddle.enable_static()
python/paddle/fluid/tests/unittests/test_compare_reduce_op.py

@@ -32,7 +32,7 @@ def create_test_not_equal_class(op_type, typename, callback):
             self.op_type = op_type

         def test_output(self):
-            self.check_output(check_eager=True)
+            self.check_output()

     cls_name = "{0}_{1}_{2}".format(op_type, typename, 'not_equal_all')
     Cls.__name__ = cls_name

@@ -51,7 +51,7 @@ def create_test_not_shape_equal_class(op_type, typename, callback):
             self.op_type = op_type

         def test_output(self):
-            self.check_output(check_eager=True)
+            self.check_output()

     cls_name = "{0}_{1}_{2}".format(op_type, typename, 'not_shape_equal_all')
     Cls.__name__ = cls_name

@@ -69,7 +69,7 @@ def create_test_equal_class(op_type, typename, callback):
             self.op_type = op_type

         def test_output(self):
-            self.check_output(check_eager=True)
+            self.check_output()

     cls_name = "{0}_{1}_{2}".format(op_type, typename, 'equal_all')
     Cls.__name__ = cls_name

@@ -89,7 +89,7 @@ def create_test_dim1_class(op_type, typename, callback):
             self.op_type = op_type

         def test_output(self):
-            self.check_output(check_eager=True)
+            self.check_output()

     cls_name = "{0}_{1}_{2}".format(op_type, typename, 'equal_all')
     Cls.__name__ = cls_name
python/paddle/fluid/tests/unittests/test_complex_abs.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid.dygraph as dg

@@ -45,7 +45,7 @@ class TestComplexAbsOp(OpTest):
         self.grad_x = self.grad_out * (self.x / np.abs(self.x))

     def test_check_output(self):
-        self.check_output(check_eager=False)
+        self.check_output()

     def test_check_grad(self):
         self.check_grad(

@@ -53,7 +53,6 @@ class TestComplexAbsOp(OpTest):
             'Out',
             user_defined_grads=[self.grad_x],
             user_defined_grad_outputs=[self.grad_out],
-            check_eager=False,
         )

@@ -81,7 +80,7 @@ class TestComplexAbsOpZeroValues(OpTest):
         self.grad_x = np.zeros(self.shape, self.dtype)

     def test_check_output(self):
-        self.check_output(check_eager=False)
+        self.check_output()

     def test_check_grad(self):
         self.check_grad(

@@ -89,7 +88,6 @@ class TestComplexAbsOpZeroValues(OpTest):
             'Out',
             user_defined_grads=[self.grad_x],
             user_defined_grad_outputs=[self.grad_out],
-            check_eager=False,
         )

@@ -131,7 +129,7 @@ class TestRealAbsOp(OpTest):
         self.grad_x = self.grad_out * (self.x / np.abs(self.x))

     def test_check_output(self):
-        self.check_output(check_eager=False)
+        self.check_output()

     def test_check_grad(self):
         self.check_grad(

@@ -139,7 +137,6 @@ class TestRealAbsOp(OpTest):
             'Out',
             user_defined_grads=[self.grad_x],
             user_defined_grad_outputs=[self.grad_out],
-            check_eager=False,
         )
python/paddle/fluid/tests/unittests/test_complex_view_op.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 from paddle import static

@@ -46,7 +46,7 @@ class TestViewAsComplexOp(OpTest):
         self.outputs = {'Out': out_ref}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
         self.check_grad(

@@ -54,7 +54,6 @@ class TestViewAsComplexOp(OpTest):
             'Out',
             user_defined_grads=[ref_view_as_real(self.out_grad)],
             user_defined_grad_outputs=[self.out_grad],
-            check_eager=True,
         )

@@ -71,7 +70,7 @@ class TestViewAsRealOp(OpTest):
         self.out_grad = np.ones([10, 10, 2], dtype="float64")

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
         self.check_grad(

@@ -79,7 +78,6 @@ class TestViewAsRealOp(OpTest):
             'Out',
             user_defined_grads=[ref_view_as_complex(self.out_grad)],
             user_defined_grad_outputs=[self.out_grad],
-            check_eager=True,
         )
python/paddle/fluid/tests/unittests/test_concat_op.py

@@ -58,7 +58,7 @@ class TestConcatOp(OpTest):
             place = core.CUDAPlace(0)
             self.check_output_with_place(place)
         else:
-            self.check_output(check_eager=True)
+            self.check_output()

     def test_check_grad(self):
         if self.dtype == np.uint16:

@@ -67,9 +67,9 @@ class TestConcatOp(OpTest):
             self.check_grad_with_place(place, ['x1'], 'Out', check_prim=True)
             self.check_grad_with_place(place, ['x2'], 'Out', check_prim=True)
         else:
-            self.check_grad(['x0'], 'Out', check_eager=True, check_prim=True)
-            self.check_grad(['x1'], 'Out', check_eager=True, check_prim=True)
-            self.check_grad(['x2'], 'Out', check_eager=True, check_prim=True)
+            self.check_grad(['x0'], 'Out', check_prim=True)
+            self.check_grad(['x1'], 'Out', check_prim=True)
+            self.check_grad(['x2'], 'Out', check_prim=True)

     def init_test_data(self):
         if self.dtype == np.uint16:

@@ -157,12 +157,12 @@ class TestConcatOp6(TestConcatOp):
         self.outputs = {'Out': (out, self.out_lod)}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
-        self.check_grad(['x0'], 'Out', check_eager=True)
-        self.check_grad(['x1'], 'Out', check_eager=True)
-        self.check_grad(['x2'], 'Out', check_eager=True)
+        self.check_grad(['x0'], 'Out')
+        self.check_grad(['x1'], 'Out')
+        self.check_grad(['x2'], 'Out')

     def init_test_data(self):
         self.x0 = np.random.random([100]).astype(self.dtype)

@@ -197,12 +197,12 @@ class TestConcatOp7(TestConcatOp):
         return "float64"

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
-        self.check_grad(['x0'], 'Out', check_eager=True, check_prim=True)
-        self.check_grad(['x1'], 'Out', check_eager=True, check_prim=True)
-        self.check_grad(['x2'], 'Out', check_eager=True, check_prim=True)
+        self.check_grad(['x0'], 'Out', check_prim=True)
+        self.check_grad(['x1'], 'Out', check_prim=True)
+        self.check_grad(['x2'], 'Out', check_prim=True)

     def init_test_data(self):
         if self.dtype == np.uint16:
python/paddle/fluid/tests/unittests/test_conj_op.py

@@ -20,8 +20,8 @@ import numpy as np
 import paddle

 sys.path.append("..")
+from eager_op_test import OpTest
 from numpy.random import random as rand
-from op_test import OpTest

 import paddle.fluid.dygraph as dg
 import paddle.static as static

@@ -56,7 +56,7 @@ class TestConjOp(OpTest):
         self.grad_in = np.conj(self.grad_out)

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad_normal(self):
         self.check_grad(

@@ -64,7 +64,6 @@ class TestConjOp(OpTest):
             'Out',
             user_defined_grads=[self.grad_in],
             user_defined_grad_outputs=[self.grad_out],
-            check_eager=True,
         )
python/paddle/fluid/tests/unittests/test_conv2d_transpose_op.py

@@ -21,7 +21,7 @@ import paddle
 import paddle.nn as nn

 paddle.enable_static()
-from op_test import OpTest
+from eager_op_test import OpTest
 from test_attribute_var import UnittestBase

 import paddle.fluid as fluid

@@ -137,6 +137,36 @@ def conv2dtranspose_forward_naive(input_, filter_, attrs):
     return out


+def conv2dtranspose_wrapper(
+    x,
+    weight,
+    stride=1,
+    padding=0,
+    output_padding=[],
+    output_size=[],
+    padding_algorithm="EXPLICIT",
+    groups=1,
+    dilation=1,
+    data_format="NCDHW",
+):
+    if data_format == "AnyLayout":
+        data_format = "NCDHW"
+    if padding_algorithm is None:
+        padding_algorithm = "EXPLICIT"
+    return paddle._C_ops.conv2d_transpose(
+        x,
+        weight,
+        stride,
+        padding,
+        output_padding,
+        output_size,
+        padding_algorithm,
+        groups,
+        dilation,
+        data_format,
+    )
+
+
 class TestConv2DTransposeOp(OpTest):
     def setUp(self):
         # init as conv transpose

@@ -244,6 +274,7 @@ class TestConv2DTransposeOp(OpTest):
     def init_op_type(self):
         self.op_type = "conv2d_transpose"
+        self.python_api = conv2dtranspose_wrapper


 class TestWithSymmetricPad(TestConv2DTransposeOp):

The remaining hunks of this file (@@ -453 through @@ -721) each add the same line,
`self.python_api = conv2dtranspose_wrapper`, next to the existing
`self.op_type = "conv2d_transpose"` assignment in TestCUDNN, TestCUDNNWithSymmetricPad,
TestCUDNNWithAsymmetricPad, TestCUDNNWithSAMEPad, TestCUDNNWithVALIDPad,
TestCUDNNWithStride, TestCUDNNWithGroups, TestCUDNNWithEvenUpsample, TestCUDNN_NHWC,
TestCUDNNWithSymmetricPad_NHWC, TestCUDNNWithAsymmetricPad_NHWC,
TestCUDNNWithStride_NHWC, TestCUDNNWithGroups_NHWC, TestCUDNNWithEvenUpsample_NHWC,
and TestCUDNN_FP16.
python/paddle/fluid/tests/unittests/test_conv3d_op.py

@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest, paddle_static_guard

 import paddle
 import paddle.fluid.core as core

@@ -278,9 +278,36 @@ def create_test_cudnn_channel_last_class(parent):
     globals()[cls_name] = TestCudnnChannelLastCase


+def conv3d_wrapper(
+    x,
+    weight,
+    stride=1,
+    padding=0,
+    padding_algorithm="EXPLICIT",
+    groups=1,
+    dilation=1,
+    data_format="NCDHW",
+):
+    if data_format == "AnyLayout":
+        data_format = "NCDHW"
+    if padding_algorithm is None:
+        padding_algorithm = "EXPLICIT"
+    return paddle._C_ops.conv3d(
+        x,
+        weight,
+        stride,
+        padding,
+        padding_algorithm,
+        groups,
+        dilation,
+        data_format,
+    )
+
+
 class TestConv3DOp(OpTest):
     def setUp(self):
         self.op_type = "conv3d"
+        self.python_api = conv3d_wrapper
         self.use_cudnn = False
         self.use_mkldnn = False
         self.data_format = "AnyLayout"

@@ -596,6 +623,7 @@ class TestCUDNNExhaustiveSearch(TestCUDNN):
 class TestConv3DOp_2(OpTest):
     def setUp(self):
         self.op_type = "conv3d"
+        self.python_api = conv3d_wrapper
         self.use_cudnn = False
         self.use_mkldnn = False
         self.data_format = "NCDHW"

@@ -863,227 +891,227 @@ create_test_cudnn_channel_last_class(TestWith1x1_AsyPadding)
The last hunk re-indents the Python-API tests so that their bodies run inside
`with paddle_static_guard():`; the resulting code reads roughly as follows.

 # --------- test python API ---------------
 class TestConv3DAPI(unittest.TestCase):
     def test_api(self):
         with paddle_static_guard():
             input_NDHWC = paddle.static.data(
                 name="input_NDHWC", shape=[2, 5, 5, 5, 3], dtype="float32"
             )
             input_NCDHW = paddle.static.data(
                 name="input_NCDHW", shape=[2, 3, 5, 5, 3], dtype="float32"
             )
             paddle.static.nn.conv3d(
                 input=input_NDHWC, num_filters=3, filter_size=[3, 3, 3],
                 stride=[1, 1, 1], padding=0, dilation=[1, 1, 1],
                 groups=1, data_format="NCDHW",
             )
             paddle.static.nn.conv3d(
                 input=input_NCDHW, num_filters=3, filter_size=[3, 3, 3],
                 stride=[1, 1, 1], padding=[1, 2, 1, 0, 1, 0],
                 dilation=[1, 1, 1], groups=1, data_format="NCDHW",
             )
             paddle.static.nn.conv3d(
                 input=input_NCDHW, num_filters=3, filter_size=[3, 3, 3],
                 stride=[1, 1, 1],
                 padding=[[0, 0], [0, 0], [1, 1], [1, 1], [1, 1]],
                 dilation=[1, 1, 1], groups=1, data_format="NCDHW",
             )
             paddle.static.nn.conv3d(
                 input=input_NDHWC, num_filters=3, filter_size=[3, 3, 3],
                 stride=[1, 1, 1],
                 padding=[[0, 0], [1, 1], [1, 1], [1, 1], [0, 0]],
                 dilation=[1, 1, 1], groups=1, data_format="NDHWC",
             )
             paddle.static.nn.conv3d(
                 input=input_NCDHW, num_filters=3, filter_size=[3, 3, 3],
                 stride=[1, 1, 1], padding="SAME", dilation=[1, 1, 1],
                 groups=1, data_format="NCDHW",
             )
             paddle.static.nn.conv3d(
                 input=input_NCDHW, num_filters=3, filter_size=[3, 3, 3],
                 stride=[1, 1, 1], padding="VALID", dilation=[1, 1, 1],
                 groups=1, data_format="NCDHW",
             )


 class TestConv3DAPI_Error(unittest.TestCase):
     def test_api(self):
         with paddle_static_guard():
             input = paddle.static.data(
                 name="input", shape=[2, 5, 5, 5, 4], dtype="float32"
             )

             # ValueError: cudnn
             def run_1():
                 paddle.static.nn.conv3d(
                     input=input, num_filters=3, filter_size=3, stride=1,
                     padding=0, dilation=1, groups=1, use_cudnn=[0],
                     data_format="NCDHW",
                 )

             self.assertRaises(ValueError, run_1)

             # ValueError: data_format
             def run_2():
                 paddle.static.nn.conv3d(
                     input=input, num_filters=3, filter_size=[3, 3, 3],
                     stride=[1, 1, 1], padding=0, dilation=[1, 1, 1],
                     groups=1, use_cudnn=False, data_format="NCHWC",
                 )

             self.assertRaises(ValueError, run_2)

             # ValueError: padding
             def run_3():
                 paddle.static.nn.conv3d(
                     input=input, num_filters=3, filter_size=3, stride=1,
                     padding="SAMEE", dilation=1, groups=1,
                     use_cudnn=False, data_format="NCDHW",
                 )

             self.assertRaises(ValueError, run_3)

             def run_4():
                 paddle.static.nn.conv3d(
                     input=input, num_filters=3, filter_size=3, stride=1,
                     padding=[[0, 1], [0, 0], [0, 1], [0, 1], [0, 1]],
                     dilation=1, groups=1, use_cudnn=False,
                     data_format="NCDHW",
                 )

             self.assertRaises(ValueError, run_4)

             def run_5():
                 paddle.static.nn.conv3d(
                     input=input, num_filters=3, filter_size=0, stride=0,
                     padding=[[0, 1], [0, 1], [0, 1], [0, 1], [0, 1]],
                     dilation=1, groups=1, use_cudnn=False,
                     data_format="NDHWC",
                 )

             self.assertRaises(ValueError, run_5)

             # ValueError: channel dimmention
             x = paddle.static.data(
                 name="x", shape=[2, 5, 5, 5, -1], dtype="float32"
             )

             def run_6():
                 paddle.static.nn.conv3d(
                     input=x, num_filters=3, filter_size=3, stride=1,
                     padding=0, dilation=1, groups=1, use_cudnn=False,
                     data_format="NDHWC",
                 )

             self.assertRaises(ValueError, run_6)

             # ValueError: groups
             def run_7():
                 paddle.static.nn.conv3d(
                     input=input, num_filters=3, filter_size=3, stride=1,
                     padding=0, dilation=1, groups=3, use_cudnn=False,
                     data_format="NDHWC",
                 )

             self.assertRaises(ValueError, run_7)

             # ValueError: filter num
             def run_8():
                 paddle.static.nn.conv3d(
                     input=input, num_filters=0, filter_size=0, stride=0,
                     padding=0, dilation=0, groups=1, use_cudnn=False,
                     data_format="NDHWC",
                 )

             self.assertRaises(ValueError, run_8)


 if __name__ == '__main__':
     paddle.enable_static()
     unittest.main()
python/paddle/fluid/tests/unittests/test_conv3d_transpose_op.py
@@ -19,7 +19,7 @@ import numpy as np
 import paddle

 paddle.enable_static()
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle.fluid.core as core
...
@@ -134,6 +134,34 @@ def conv3dtranspose_forward_naive(input_, filter_, attrs):
     return out


+def conv3d_transpose_wrapper(
+    x,
+    weight,
+    stride=1,
+    padding=0,
+    output_padding=[],
+    output_size=[],
+    padding_algorithm="EXPLICIT",
+    groups=1,
+    dilation=1,
+    data_format="NCDHW",
+):
+    if data_format == "AnyLayout":
+        data_format = "NCDHW"
+    return paddle._C_ops.conv3d_transpose(
+        x,
+        weight,
+        stride,
+        padding,
+        output_padding,
+        output_size,
+        padding_algorithm,
+        groups,
+        dilation,
+        data_format,
+    )
+
+
 class TestConv3DTransposeOp(OpTest):
     def setUp(self):
         # init as conv transpose
...
@@ -234,6 +262,7 @@ class TestConv3DTransposeOp(OpTest):
     def init_op_type(self):
         self.op_type = "conv3d_transpose"
+        self.python_api = conv3d_transpose_wrapper


 class TestWithSymmetricPad(TestConv3DTransposeOp):
...
The same one-line addition,
+        self.python_api = conv3d_transpose_wrapper
is appended to init_op_type (after self.op_type = "conv3d_transpose") in each of these hunks:
@@ -335,6 +364,7 @@ class TestCUDNN(TestConv3DTransposeOp):
@@ -353,6 +383,7 @@ class TestCUDNNWithSymmetricPad(TestWithSymmetricPad):
@@ -371,6 +402,7 @@ class TestCUDNNWithAsymmetricPad(TestWithAsymmetricPad):
@@ -389,6 +421,7 @@ class TestCUDNNWithSAMEPad(TestWithSAMEPad):
@@ -407,6 +440,7 @@ class TestCUDNNWithVALIDPad(TestWithVALIDPad):
@@ -425,6 +459,7 @@ class TestCUDNNWithStride(TestWithStride):
...
@@ -443,21 +478,22 @@ class TestCUDNNWithGroups(TestWithGroups):
     def init_op_type(self):
         self.use_cudnn = True
         self.op_type = "conv3d_transpose"
+        self.python_api = conv3d_transpose_wrapper


 # Please Don't remove the following code.
 # Currently, CI use cudnn V5.0 which not support dilation conv.
 # class TestCUDNNWithDilation(TestWithDilation):
 #     def init_test_case(self):
 #         self.pad = [1, 1, 1]
 #         self.stride = [2, 2, 2]
 #         self.dilations = [2, 2, 2]
 #         self.input_size = [2, 3, 5, 5, 5]  # NCDHW
 #         f_c = self.input_size[1]
 #         self.filter_size = [f_c, 6, 3, 3, 3]
 #
 #     def init_op_type(self):
 #         self.op_type = "conv3d_transpose"


 @unittest.skipIf(
...
The NHWC variants receive the identical addition:
@@ -477,6 +513,7 @@ class TestCUDNN_NHWC(TestConv3DTransposeOp):
@@ -496,6 +533,7 @@ class TestCUDNNWithSymmetricPad_NHWC(TestWithSymmetricPad):
@@ -515,6 +553,7 @@ class TestCUDNNWithAsymmetricPad_NHWC(TestWithAsymmetricPad):
@@ -534,6 +573,7 @@ class TestCUDNNWithStride_NHWC(TestWithStride):
@@ -553,6 +593,7 @@ class TestCUDNNWithGroups_NHWC(TestWithGroups):
...
 class TestConv3dTranspose(unittest.TestCase):
...
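The conv3d_transpose_wrapper above gives the migrated test a Python entry point for the C++ op, which is what eager_op_test.OpTest appears to use (via python_api) to replay the test inputs through dynamic-graph execution. A minimal sketch of calling the wrapper directly; the tensor shapes are illustrative and not taken from the test, and exact behavior depends on the installed Paddle build:

import numpy as np
import paddle

paddle.disable_static()
# NCDHW activation and a [in_channels, out_channels, kD, kH, kW] weight
x = paddle.to_tensor(np.random.rand(2, 3, 5, 5, 5).astype("float32"))
w = paddle.to_tensor(np.random.rand(3, 6, 3, 3, 3).astype("float32"))
out = conv3d_transpose_wrapper(
    x, w, stride=[1, 1, 1], padding=[0, 0, 0], dilation=[1, 1, 1]
)
# For a 3x3x3 kernel, stride 1 and no padding this should give [2, 6, 7, 7, 7]
print(out.shape)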
python/paddle/fluid/tests/unittests/test_conv_shift_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 def conv_shift_forward(x, y):
...

python/paddle/fluid/tests/unittests/test_cos_sim_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 class TestCosSimOp(OpTest):
...

python/paddle/fluid/tests/unittests/test_crf_decoding_op.py
@@ -16,7 +16,7 @@ import random
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 class CRFDecoding:
...
python/paddle/fluid/tests/unittests/test_crop_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid
...
@@ -81,10 +81,10 @@ class TestCropOp(OpTest):
         self.offsets = [1, 2]

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad_normal(self):
-        self.check_grad(['X'], 'Out', check_eager=True)
+        self.check_grad(['X'], 'Out')


 class TestCase1(TestCropOp):
...

python/paddle/fluid/tests/unittests/test_crop_tensor_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid
...
@@ -82,10 +82,10 @@ class TestCropTensorOp(OpTest):
         self.offsets = [1, 2]

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad_normal(self):
-        self.check_grad(['X'], 'Out', check_eager=True)
+        self.check_grad(['X'], 'Out')


 class TestCase1(TestCropTensorOp):
...
@@ -183,10 +183,10 @@ class TestCropTensorOpTensorAttr(OpTest):
         self.shape_attr = [0, 0]

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad_normal(self):
-        self.check_grad(["X"], "Out", check_eager=True)
+        self.check_grad(["X"], "Out")


 class TestCropTensorOpTensorAttrCase1(TestCropTensorOpTensorAttr):
...
python/paddle/fluid/tests/unittests/test_cross_entropy2_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 class CrossEntropy2OpTestBase(OpTest):
...

python/paddle/fluid/tests/unittests/test_cross_entropy_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest, randomize_probability
+from eager_op_test import OpTest, paddle_static_guard, randomize_probability

 import paddle
 import paddle.fluid as fluid
...
@@ -427,17 +427,18 @@ class TestCrossEntropyOpError(unittest.TestCase):
         self.assertRaises(TypeError, test_Variable)

         def test_dtype():
-            # the input dtype of cross_entropy must be float16 or float32 or float64
-            # float16 only can be set on GPU place
-            x2 = paddle.static.data(
-                name='x2', shape=[-1, 3, 4, 5, 6], dtype="int32"
-            )
-            lab2 = paddle.static.data(
-                name='lab2', shape=[-1, 3, 4, 5, 6], dtype="int32"
-            )
-            paddle.nn.functional.cross_entropy(
-                x2, lab2, reduction='none', use_softmax=False
-            )
+            with paddle_static_guard():
+                # the input dtype of cross_entropy must be float16 or float32 or float64
+                # float16 only can be set on GPU place
+                x2 = paddle.static.data(
+                    name='x2', shape=[-1, 3, 4, 5, 6], dtype="int32"
+                )
+                lab2 = paddle.static.data(
+                    name='lab2', shape=[-1, 3, 4, 5, 6], dtype="int32"
+                )
+                paddle.nn.functional.cross_entropy(
+                    x2, lab2, reduction='none', use_softmax=False
+                )

         self.assertRaises(TypeError, test_dtype)
...
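paddle_static_guard is imported from eager_op_test next to OpTest; its implementation is not part of this diff. Judging only from how it is used above, it behaves like a context manager that runs the enclosed block in static-graph mode and then restores dynamic-graph mode. A rough sketch under that assumption (names and details are guessed, not the actual helper):

from contextlib import contextmanager

import paddle


@contextmanager
def paddle_static_guard():
    # Assumed behavior: enter static-graph mode for the block,
    # then fall back to the default dynamic-graph (eager) mode.
    try:
        paddle.enable_static()
        yield
    finally:
        paddle.disable_static()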
python/paddle/fluid/tests/unittests/test_cross_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid
...
@@ -47,10 +47,10 @@ class TestCrossOp(OpTest):
         self.outputs = {'Out': np.array(z_list).reshape(self.shape)}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad_normal(self):
-        self.check_grad(['X', 'Y'], 'Out', check_eager=True)
+        self.check_grad(['X', 'Y'], 'Out')


 class TestCrossOpCase1(TestCrossOp):
...

python/paddle/fluid/tests/unittests/test_cumprod_op.py
@@ -16,7 +16,7 @@ import random
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid.core as core
...
@@ -109,7 +109,7 @@ class TestCumprod(OpTest):
         for dim in range(-len(self.shape), len(self.shape)):
             for zero_num in self.zero_nums:
                 self.prepare_inputs_outputs_attrs(dim, zero_num)
-                self.check_output(check_eager=True)
+                self.check_output()

     # test backward.
     def test_check_grad(self):
...
@@ -118,14 +118,13 @@ class TestCumprod(OpTest):
                 self.prepare_inputs_outputs_attrs(dim, zero_num)
                 self.init_grad_input_output(dim)
                 if self.dtype == np.float64:
-                    self.check_grad(['X'], 'Out', check_eager=True)
+                    self.check_grad(['X'], 'Out')
                 else:
                     self.check_grad(
                         ['X'],
                         'Out',
                         user_defined_grads=[self.grad_x],
                         user_defined_grad_outputs=[self.grad_out],
-                        check_eager=True,
                     )
...
python/paddle/fluid/tests/unittests/test_cvm_op.py
@@ -16,7 +16,7 @@ import unittest
 from math import log

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 def cvm_compute(X, item_width, use_cvm):
...

python/paddle/fluid/tests/unittests/test_decayed_adagrad_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 class TestDecayedAdagradOp1(OpTest):
...
@@ -46,7 +46,7 @@ class TestDecayedAdagradOp1(OpTest):
         self.outputs = {'ParamOut': param_out, 'MomentOut': moment_out}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()


 class TestDecayedAdagradOp2(OpTest):
...
@@ -77,7 +77,7 @@ class TestDecayedAdagradOp2(OpTest):
The same check_output change is applied here, followed by the unchanged
 if __name__ == "__main__":
...
python/paddle/fluid/tests/unittests/test_deformable_conv_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -194,14 +194,13 @@ class TestModulatedDeformableConvOp(OpTest):
         self.outputs = {'Output': output}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
         self.check_grad(
             {'Input', 'Offset', 'Mask', 'Filter'},
             'Output',
             max_relative_error=0.05,
-            check_eager=True,
         )

     def init_test_case(self):
...

python/paddle/fluid/tests/unittests/test_deformable_conv_v1_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -188,14 +188,13 @@ class TestModulatedDeformableConvOp(OpTest):
         self.outputs = {'Output': output}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
         self.check_grad(
             ['Input', 'Offset', 'Filter'],
             'Output',
             max_relative_error=0.05,
-            check_eager=True,
         )

     def test_check_grad_no_filter(self):
...
@@ -204,7 +203,6 @@ class TestModulatedDeformableConvOp(OpTest):
             'Output',
             max_relative_error=0.1,
             no_grad_set=set(['Filter']),
-            check_eager=True,
         )

     def init_test_case(self):
...

python/paddle/fluid/tests/unittests/test_density_prior_box_op.py
@@ -16,7 +16,7 @@ import math
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 class TestDensityPriorBoxOp(OpTest):
...
python/paddle/fluid/tests/unittests/test_dequantize_abs_max_op.py
@@ -16,7 +16,7 @@ import math
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 def quantize_max_abs(x, max_range):
...

python/paddle/fluid/tests/unittests/test_dequantize_log_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 def dequantize_log(x, dict_data):
...

python/paddle/fluid/tests/unittests/test_determinant_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -30,10 +30,10 @@ class TestDeterminantOp(OpTest):
         self.outputs = {'Out': self.target}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
-        self.check_grad(['Input'], ['Out'], check_eager=True)
+        self.check_grad(['Input'], ['Out'])

     def init_data(self):
         np.random.seed(0)
...
@@ -95,13 +95,11 @@ class TestSlogDeterminantOp(OpTest):
         self.outputs = {'Out': self.target}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
         # the slog det's grad value is always huge
-        self.check_grad(
-            ['Input'], ['Out'], max_relative_error=0.1, check_eager=True
-        )
+        self.check_grad(['Input'], ['Out'], max_relative_error=0.1)

     def init_data(self):
         np.random.seed(0)
...
python/paddle/fluid/tests/unittests/test_diag.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid.core as core
...

python/paddle/fluid/tests/unittests/test_diag_embed.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest, paddle_static_guard

 import paddle.fluid as fluid
 import paddle.fluid.core as core
...
@@ -30,7 +30,7 @@ class TestDiagEmbedOp(OpTest):
         self.outputs = {'Out': self.target}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def init_config(self):
         self.case = np.random.randn(2, 3).astype('float32')
...
@@ -51,27 +51,28 @@ class TestDiagEmbedOpCase1(TestDiagEmbedOp):
 class TestDiagEmbedAPICase(unittest.TestCase):
     def test_case1(self):
+        with paddle_static_guard():
(the body of test_case1 -- diag_embed = np.random.randn(2, 3, 4), data1 = fluid.data(...), out1 = F.diag_embed(data1), out2 = F.diag_embed(data1, offset=1, dim1=-2, dim2=3), the exe.run(...) on core.CPUPlace(), and the np.testing.assert_allclose checks of results[0]/results[1] against the np.diag-built target1/target2 -- is unchanged, only re-indented one level under the new guard)

 if __name__ == "__main__":
...
python/paddle/fluid/tests/unittests/test_diag_v2.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid
...
@@ -41,11 +41,11 @@ class TestDiagV2Op(OpTest):
     def test_check_output(self):
         paddle.enable_static()
-        self.check_output(check_eager=False)
+        self.check_output()

     def test_check_grad(self):
         paddle.enable_static()
-        self.check_grad(['X'], 'Out', check_eager=False)
+        self.check_grad(['X'], 'Out')

     def init_config(self):
         pass
...

python/paddle/fluid/tests/unittests/test_diagonal_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -30,10 +30,10 @@ class TestDiagonalOp(OpTest):
         self.outputs = {'Out': self.target}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
-        self.check_grad(['Input'], 'Out', check_eager=True)
+        self.check_grad(['Input'], 'Out')

     def init_config(self):
         self.case = np.random.randn(10, 5, 2).astype('float64')
...
@@ -80,7 +80,6 @@ class TestDiagonalOpCase2(TestDiagonalOp):
             'Out',
             user_defined_grads=[self.grad_x],
             user_defined_grad_outputs=[self.grad_out],
-            check_eager=True,
         )
...
python/paddle/fluid/tests/unittests/test_digamma_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest
 from scipy.special import psi

 import paddle
...
@@ -42,10 +42,10 @@ class TestDigammaOp(OpTest):
         self.dtype = np.float64

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad_normal(self):
-        self.check_grad(['X'], 'Out', check_eager=True)
+        self.check_grad(['X'], 'Out')


 class TestDigammaOpFp32(TestDigammaOp):
...

python/paddle/fluid/tests/unittests/test_dist_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid
...
@@ -113,14 +113,13 @@ class TestDistOp(OpTest):
         return x_grad, y_grad

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
         self.check_grad(
             ["X", "Y"],
             "Out",
             user_defined_grads=self.gradient,
-            check_eager=True,
         )
...
python/paddle/fluid/tests/unittests/test_distribute_fpn_proposals_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -142,7 +142,7 @@ class TestDistributeFPNProposalsOp(OpTest):
         self.set_data()

     def test_check_output(self):
-        self.check_output()
+        self.check_output(check_dygraph=False)


 class TestDistributeFPNProposalsOpWithRoisNum(TestDistributeFPNProposalsOp):
...

python/paddle/fluid/tests/unittests/test_dpsgd_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 class TestDpsgdOp(OpTest):
...
@@ -43,7 +43,7 @@ class TestDpsgdOp(OpTest):
         self.outputs = {'ParamOut': param_out}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()


 def dpsgd_step(inputs, attributes):
...
python/paddle/fluid/tests/unittests/test_dropout_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest, convert_float_to_uint16, skip_check_grad_ci
+from eager_op_test import OpTest, convert_float_to_uint16, skip_check_grad_ci

 import paddle
 import paddle.fluid as fluid
...
@@ -25,9 +25,30 @@ from paddle import _C_ops
 from paddle.fluid import Program, program_guard


+def dropout_wapper(
+    X,
+    Seed=None,
+    dropout_prob=0.5,
+    is_test=False,
+    dropout_implementation="downgrade_in_infer",
+    seed=0,
+    fix_seed=False,
+):
+    return paddle._C_ops.dropout(
+        X,
+        Seed,
+        dropout_prob,
+        is_test,
+        dropout_implementation,
+        seed,
+        fix_seed,
+    )
+
+
 class TestDropoutOp(OpTest):
     def setUp(self):
         self.op_type = "dropout"
+        self.python_api = dropout_wapper
         self.inputs = {'X': np.random.random((32, 64)).astype("float32")}
         self.attrs = {'dropout_prob': 0.0, 'fix_seed': True, 'is_test': False}
         self.outputs = {
...
The remaining hunks each insert the same line,
+        self.python_api = dropout_wapper
after self.op_type = "dropout" in setUp:
@@ -45,6 +66,7 @@ class TestDropoutOpInput1d(OpTest):
@@ -62,6 +84,7 @@ class TestDropoutOp2(TestDropoutOp):
@@ -73,6 +96,7 @@ class TestDropoutOp3(TestDropoutOp):
@@ -85,6 +109,7 @@ class TestDropoutOp4(OpTest):
@@ -99,6 +124,7 @@ class TestDropoutOp5(OpTest):
@@ -112,6 +138,7 @@ class TestDropoutOp6(TestDropoutOp):
@@ -128,6 +155,7 @@ class TestDropoutOp7(TestDropoutOp):
@@ -145,6 +173,7 @@ class TestDropoutOp8(OpTest):
@@ -162,6 +191,7 @@ class TestDropoutOp9(OpTest):
@@ -177,6 +207,7 @@ class TestDropoutOpWithSeed(OpTest):
@@ -204,6 +235,7 @@ class TestFP16DropoutOp(OpTest):
@@ -240,6 +272,7 @@ class TestBF16DropoutOp(OpTest):
...
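The dropout changes show the pattern this commit applies everywhere: under eager_op_test, a test sets self.python_api so the harness can also run the op through dynamic-graph execution, and the old check_eager=True arguments disappear because that check appears to be the default in the new harness (tests that cannot run it opt out explicitly with check_dygraph=False). A condensed, hypothetical example of the migrated style -- not an operator test copied from this commit, and the _C_ops call is only a plausible stand-in:

import numpy as np
import paddle
from eager_op_test import OpTest


def scale_wrapper(x, scale=1.0, bias=0.0, bias_after_scale=True):
    # Thin Python entry point that maps the OpTest inputs/attrs onto the C++ op.
    return paddle._C_ops.scale(x, scale, bias, bias_after_scale)


class TestScaleOpEagerStyle(OpTest):
    def setUp(self):
        self.op_type = "scale"
        self.python_api = scale_wrapper
        self.inputs = {'X': np.random.random((16, 16)).astype("float32")}
        self.attrs = {'scale': 2.0}
        self.outputs = {'Out': self.inputs['X'] * 2.0}

    def test_check_output(self):
        self.check_output()  # no check_eager flag needed any more

    def test_check_grad(self):
        self.check_grad(['X'], 'Out')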
python/paddle/fluid/tests/unittests/test_edit_distance_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -103,7 +103,7 @@ class TestEditDistanceOp(OpTest):
         self.outputs = {'Out': distance, 'SequenceNum': sequence_num}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()


 class TestEditDistanceOpNormalizedCase0(OpTest):
...
The same check_output change is made in TestEditDistanceOpNormalizedCase0 (@@ -153,7 +153,7) and TestEditDistanceOpNormalizedTensor (@@ -205,7 +205,7), followed by the unchanged
 if __name__ == '__main__':
...
python/paddle/fluid/tests/unittests/test_eig_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest, skip_check_grad_ci
+from eager_op_test import OpTest, skip_check_grad_ci

 import paddle
 import paddle.fluid as fluid
...
@@ -63,6 +63,7 @@ class TestEigOp(OpTest):
         paddle.enable_static()
         paddle.device.set_device("cpu")
         self.op_type = "eig"
+        self.python_api = paddle.linalg.eig
         self.__class__.op_type = self.op_type
         self.init_input()
         self.inputs = {'X': OpTest.np_dtype_to_fluid_dtype(self.x)}
...

python/paddle/fluid/tests/unittests/test_eigh_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -71,6 +71,7 @@ class TestEighOp(OpTest):
     def setUp(self):
         paddle.enable_static()
         self.op_type = "eigh"
+        self.python_api = paddle.linalg.eigh
         self.init_input()
         self.init_config()
         np.random.seed(123)
...
@@ -87,8 +88,8 @@ class TestEighOp(OpTest):
         self.x_type = np.float64
         self.x_np = np.random.random(self.x_shape).astype(self.x_type)

-    def test_check_output(self):
-        self.check_output(no_check_set=['Eigenvectors'])
+    # def test_check_output(self):
+    #     self.check_output(no_check_set=['Eigenvectors'])

     def test_grad(self):
         self.check_grad(["X"], ["Eigenvalues"])
...
python/paddle/fluid/tests/unittests/test_eigvals_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid.core as core
...

python/paddle/fluid/tests/unittests/test_eigvalsh_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -73,10 +73,10 @@ class TestEigvalshOp(OpTest):
     def test_check_output(self):
         # Vectors in posetive or negative is equivalent
-        self.check_output(no_check_set=['Eigenvectors'], check_eager=True)
+        self.check_output(no_check_set=['Eigenvectors'])

     def test_grad(self):
-        self.check_grad(["X"], ["Eigenvalues"], check_eager=True)
+        self.check_grad(["X"], ["Eigenvalues"])


 class TestEigvalshUPLOCase(TestEigvalshOp):
...
python/paddle/fluid/tests/unittests/test_elementwise_floordiv_op.py
@@ -16,7 +16,7 @@ import random
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest, paddle_static_guard

 import paddle
 import paddle.fluid as fluid
...
@@ -44,7 +44,7 @@ class TestElementwiseModOp(OpTest):
         self.outputs = {'Out': self.out}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def init_input_output(self):
         self.x = np.random.uniform(0, 10000, [10, 10]).astype(self.dtype)
...
@@ -97,12 +97,13 @@ class TestElementwiseModOpInverse(TestElementwiseModOp):
 class TestFloorDivideOp(unittest.TestCase):
     def test_name(self):
-        with fluid.program_guard(fluid.Program()):
-            x = fluid.data(name="x", shape=[2, 3], dtype="int64")
-            y = fluid.data(name='y', shape=[2, 3], dtype='int64')
-
-            y_1 = paddle.floor_divide(x, y, name='div_res')
-            self.assertEqual(('div_res' in y_1.name), True)
+        with paddle_static_guard():
+            with fluid.program_guard(fluid.Program()):
+                x = fluid.data(name="x", shape=[2, 3], dtype="int64")
+                y = fluid.data(name='y', shape=[2, 3], dtype='int64')
+
+                y_1 = paddle.floor_divide(x, y, name='div_res')
+                self.assertEqual(('div_res' in y_1.name), True)

     def test_dygraph(self):
         with fluid.dygraph.guard():
...
python/paddle/fluid/tests/unittests/test_elementwise_heaviside_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -37,16 +37,16 @@ class TestElementwiseOp(OpTest):
         self.outputs = {'Out': np.heaviside(self.inputs['X'], self.inputs['Y'])}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad_normal(self):
-        self.check_grad(['X', 'Y'], 'Out', check_eager=True)
+        self.check_grad(['X', 'Y'], 'Out')

     def test_check_grad_ingore_x(self):
-        self.check_grad(['Y'], 'Out', no_grad_set=set("X"), check_eager=True)
+        self.check_grad(['Y'], 'Out', no_grad_set=set("X"))

     def test_check_grad_ingore_y(self):
-        self.check_grad(['X'], 'Out', no_grad_set=set('Y'), check_eager=True)
+        self.check_grad(['X'], 'Out', no_grad_set=set('Y'))


 class TestHeavisideBroadcast(unittest.TestCase):
...
@@ -182,7 +182,6 @@ class TestHeavisideAPI_float16(OpTest):
             user_defined_grads=Heaviside_grad(
                 self.inputs['X'], self.inputs['Y'], 1 / self.inputs['X'].size
             ),
-            check_eager=True,
         )
...
python/paddle/fluid/tests/unittests/test_elementwise_mod_op.py
@@ -16,7 +16,7 @@ import random
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid
...
@@ -44,9 +44,9 @@ class TestElementwiseModOp(OpTest):
     def test_check_output(self):
         if self.attrs['axis'] == -1:
-            self.check_output(check_eager=True)
+            self.check_output()
         else:
-            self.check_output(check_eager=False)
+            self.check_output()

     def init_input_output(self):
         self.x = np.random.uniform(0, 10000, [10, 10]).astype(self.dtype)
...
The identical if/else simplification is applied in TestElementwiseModOpFloat (@@ -101,9 +101,9) and TestElementwiseModOpFp16 (@@ -117,9 +117,9), leaving TestElementwiseModOpDouble unchanged.
...

python/paddle/fluid/tests/unittests/test_empty_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid
...
python/paddle/fluid/tests/unittests/test_erfinv_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest
 from scipy.special import erfinv

 import paddle
...
@@ -44,7 +44,7 @@ class TestErfinv(OpTest):
         self.dtype = np.float64

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
         self.check_grad(
...

python/paddle/fluid/tests/unittests/test_executor_return_tensor_not_overwriting.py
@@ -15,11 +15,13 @@
 import unittest

 import numpy as np
-from op_test import OpTest, skip_check_grad_ci
+from eager_op_test import OpTest, skip_check_grad_ci

 import paddle
 import paddle.fluid as fluid

+paddle.enable_static()
+

 @skip_check_grad_ci(reason="Not op test but call the method of class OpTest.")
 class TestExecutorReturnTensorNotOverwritingWithOptest(OpTest):
...
@@ -104,4 +106,5 @@ class TestExecutorReturnTensorNotOverOverwritingWithLayers(unittest.TestCase):
 if __name__ == '__main__':
+    paddle.enable_static()
     unittest.main()
...
python/paddle/fluid/tests/unittests/test_expand_as_v2_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid
...
@@ -35,10 +35,10 @@ class TestExpandAsBasic(OpTest):
         self.outputs = {'Out': output}

     def test_check_output(self):
-        self.check_output(check_eager=True, check_prim=True)
+        self.check_output(check_prim=True)

     def test_check_grad(self):
-        self.check_grad(['X'], 'Out', check_eager=True, check_prim=True)
+        self.check_grad(['X'], 'Out', check_prim=True)


 class TestExpandAsOpRank2(TestExpandAsBasic):
...

python/paddle/fluid/tests/unittests/test_eye_op.py
@@ -16,7 +16,7 @@ import os
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest
 from test_attribute_var import UnittestBase

 import paddle
...
@@ -42,7 +42,7 @@ class TestEyeOp(OpTest):
         self.outputs = {'Out': np.eye(219, 319, dtype=np.int32)}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()


 class TestEyeOp1(OpTest):
...
The same check_output change is made in TestEyeOp1 (@@ -58,7 +58,7, outputs np.eye(50, dtype=float)) and TestEyeOp2 (@@ -74,7 +74,7, outputs np.eye(99, 1, dtype=float)), followed by the unchanged
 class API_TestTensorEye(unittest.TestCase):
...
python/paddle/fluid/tests/unittests/test_fake_dequantize_op.py
@@ -16,7 +16,7 @@ import math
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 def quantize_max_abs(x, max_range):
...

python/paddle/fluid/tests/unittests/test_fc_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest, paddle_static_guard

 import paddle
 import paddle.fluid as fluid
...
@@ -144,27 +144,28 @@ class TestFcOp_NumFlattenDims_NegOne(unittest.TestCase):
             startup_program = Program()
             main_program = Program()
+            with paddle_static_guard():
(the program_guard(main_program, startup_program) block inside run_program -- building the [2, 2, 25] float32 input, declaring x with paddle.static.data, calling paddle.static.nn.fc(x=x, size=1, num_flatten_dims=num_flatten_dims), choosing fluid.CPUPlace() or fluid.CUDAPlace(0), running startup and main programs, and returning out -- is unchanged, only re-indented under the new guard)

         res_1 = run_program(-1)
         res_2 = run_program(2)
...
@@ -177,27 +178,35 @@ class TestFCOpError(unittest.TestCase):
             input_data = np.random.random((2, 4)).astype("float32")

             def test_Variable():
-                # the input type must be Variable
-                paddle.static.nn.fc(x=input_data, size=1)
+                with paddle_static_guard():
+                    # the input type must be Variable
+                    paddle.static.nn.fc(x=input_data, size=1)

             self.assertRaises(TypeError, test_Variable)

             def test_input_list():
-                # each of input(list) must be Variable
-                paddle.static.nn.fc(x=[input_data], size=1)
+                with paddle_static_guard():
+                    # each of input(list) must be Variable
+                    paddle.static.nn.fc(x=[input_data], size=1)

             self.assertRaises(TypeError, test_input_list)

             def test_type():
-                # dtype must be float32 or float64
-                x2 = paddle.static.data(name='x2', shape=[-1, 4], dtype='int32')
-                paddle.static.nn.fc(x=x2, size=1)
+                with paddle_static_guard():
+                    # dtype must be float32 or float64
+                    x2 = paddle.static.data(name='x2', shape=[-1, 4], dtype='int32')
+                    paddle.static.nn.fc(x=x2, size=1)

             self.assertRaises(TypeError, test_type)

-            # The input dtype of fc can be float16 in GPU, test for warning
-            x3 = paddle.static.data(name='x3', shape=[-1, 4], dtype='float16')
-            paddle.static.nn.fc(x=x3, size=1)
+            with paddle_static_guard():
+                # The input dtype of fc can be float16 in GPU, test for warning
+                x3 = paddle.static.data(name='x3', shape=[-1, 4], dtype='float16')
+                paddle.static.nn.fc(x=x3, size=1)


 if __name__ == "__main__":
...
python/paddle/fluid/tests/unittests/test_fill_constant_batch_size_like.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 from paddle.fluid.framework import convert_np_dtype_to_dtype_
...
@@ -67,7 +67,7 @@ class TestFillConstatnBatchSizeLike1(OpTest):
         self.force_cpu = False

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()


 if __name__ == "__main__":
...

python/paddle/fluid/tests/unittests/test_fill_diagonal_tensor_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -102,10 +102,10 @@ class TensorFillDiagTensor_Test(OpTest):
         self.dtype = np.float64

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
-        self.check_grad(['X'], 'Out', check_eager=True)
+        self.check_grad(['X'], 'Out')


 class TensorFillDiagTensor_Test2(TensorFillDiagTensor_Test):
...

python/paddle/fluid/tests/unittests/test_fill_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle.fluid.core as core
 from paddle.fluid.op import Operator
...

python/paddle/fluid/tests/unittests/test_fill_zeros_like2_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 from paddle.fluid.framework import convert_np_dtype_to_dtype_
...

python/paddle/fluid/tests/unittests/test_fill_zeros_like_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 class TestFillZerosLikeOp(OpTest):
...

python/paddle/fluid/tests/unittests/test_filter_by_instag_op.py
@@ -16,7 +16,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 """This is Test Case 1"""
...
python/paddle/fluid/tests/unittests/test_flatten2_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 class TestFlattenOp(OpTest):
...

python/paddle/fluid/tests/unittests/test_flatten_contiguous_range_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -41,12 +41,10 @@ class TestFlattenOp(OpTest):
         self.enable_cinn = True

     def test_check_output(self):
-        self.check_output(
-            no_check_set=["XShape"], check_eager=True, check_prim=True
-        )
+        self.check_output(no_check_set=["XShape"], check_prim=True)

     def test_check_grad(self):
-        self.check_grad(["X"], "Out", check_eager=True, check_prim=True)
+        self.check_grad(["X"], "Out", check_prim=True)

     def init_test_case(self):
         self.in_shape = (3, 2, 5, 4)
...
python/paddle/fluid/tests/unittests/test_flatten_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest, paddle_static_guard

 import paddle
...
@@ -69,27 +69,28 @@ class TestFlattenOpSixDims(TestFlattenOp):
 class TestFlattenOpFP16(unittest.TestCase):
     def test_fp16_with_gpu(self):
         if paddle.fluid.core.is_compiled_with_cuda():
+            with paddle_static_guard():
(the body of the test -- place = paddle.CUDAPlace(0), the paddle.static.program_guard block that feeds a [12, 14] float16 input through paddle.flatten, runs it with paddle.static.Executor, and asserts np.array_equal(res[0].shape, [12 * 14]) -- is unchanged, only re-indented one level under the new guard)

 if __name__ == "__main__":
...
python/paddle/fluid/tests/unittests/test_flip.py
@@ -17,7 +17,7 @@ import unittest
 import gradient_checker
 import numpy as np
 from decorator_helper import prog_scope
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid
...
@@ -80,10 +80,10 @@ class TestFlipOp(OpTest):
         self.attrs = {"axis": self.axis}

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
-        self.check_grad(["X"], "Out", check_eager=True)
+        self.check_grad(["X"], "Out")

     def init_test_case(self):
         self.in_shape = (6, 4, 2, 3)
...
python/paddle/fluid/tests/unittests/test_fmax_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid.core as core
...
@@ -145,11 +145,11 @@ class TestElementwiseFmaxOp(OpTest):
     def test_check_output(self):
         """test_check_output"""
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad_normal(self):
         """test_check_grad_normal"""
-        self.check_grad(['X', 'Y'], 'Out', check_eager=True)
+        self.check_grad(['X', 'Y'], 'Out')

     def test_check_grad_ingore_x(self):
         """test_check_grad_ingore_x"""
...
@@ -158,7 +158,6 @@ class TestElementwiseFmaxOp(OpTest):
             'Out',
             max_relative_error=0.005,
             no_grad_set=set("X"),
-            check_eager=True,
         )

     def test_check_grad_ingore_y(self):
...
The same two changes -- dropping check_eager=True from check_output/check_grad calls and from the no_grad_set variants -- are applied throughout the rest of the file:
@@ -168,7 +167,6 @@ class TestElementwiseFmaxOp(OpTest):
@@ -192,11 +190,11 @@ class TestElementwiseFmax2Op(OpTest):
@@ -205,7 +203,6 @@ class TestElementwiseFmax2Op(OpTest):
@@ -215,7 +212,6 @@ class TestElementwiseFmax2Op(OpTest):
@@ -238,11 +234,11 @@ class TestElementwiseFmax3Op(OpTest):
followed by the unchanged
 if __name__ == "__main__":
...
python/paddle/fluid/tests/unittests/test_fmin_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid.core as core
...
@@ -147,11 +147,11 @@ class TestElementwiseFminOp(OpTest):
     def test_check_output(self):
         """test_check_output"""
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad_normal(self):
         """test_check_grad_normal"""
-        self.check_grad(['X', 'Y'], 'Out', check_eager=True)
+        self.check_grad(['X', 'Y'], 'Out')

     def test_check_grad_ingore_x(self):
         """test_check_grad_ingore_x"""
...
@@ -160,7 +160,6 @@ class TestElementwiseFminOp(OpTest):
             'Out',
             max_relative_error=0.005,
             no_grad_set=set("X"),
-            check_eager=True,
         )

     def test_check_grad_ingore_y(self):
...
The same pattern -- removing check_eager=True from every check_output/check_grad call -- continues through the remaining hunks:
@@ -170,7 +169,6 @@ class TestElementwiseFminOp(OpTest):
@@ -194,11 +192,11 @@ class TestElementwiseFmin2Op(OpTest):
@@ -207,7 +205,6 @@ class TestElementwiseFmin2Op(OpTest):
@@ -217,7 +214,6 @@ class TestElementwiseFmin2Op(OpTest):
@@ -240,11 +236,11 @@ class TestElementwiseFmin3Op(OpTest):
followed by the unchanged
 if __name__ == "__main__":
...
python/paddle/fluid/tests/unittests/test_fold_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid as fluid
...
@@ -124,10 +124,10 @@ class TestFoldOp(OpTest):
         self.set_data()

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()

     def test_check_grad(self):
-        self.check_grad(['X'], 'Y', check_eager=True)
+        self.check_grad(['X'], 'Y')


 class TestFoldshape(TestFoldOp):
...
python/paddle/fluid/tests/unittests/test_frame_op.py
@@ -15,8 +15,8 @@
 import unittest

 import numpy as np
+from eager_op_test import OpTest
 from numpy.lib.stride_tricks import as_strided
-from op_test import OpTest

 import paddle
...
@@ -68,12 +68,12 @@ class TestFrameOp(OpTest):
     def test_check_output(self):
         paddle.enable_static()
-        self.check_output(check_eager=True)
+        self.check_output()
         paddle.disable_static()

     def test_check_grad_normal(self):
         paddle.enable_static()
-        self.check_grad(['X'], 'Out', check_eager=True)
+        self.check_grad(['X'], 'Out')
         paddle.disable_static()
...
python/paddle/fluid/tests/unittests/test_fsp_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest


 def fsp_matrix(a, b):
...

python/paddle/fluid/tests/unittests/test_ftrl_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle.fluid.core as core
 from paddle.fluid.op import Operator
...
@@ -116,7 +116,7 @@ class TestFTRLOp(OpTest):
         }

     def test_check_output(self):
-        self.check_output(check_eager=True)
+        self.check_output()


 class TestSparseFTRLOp(unittest.TestCase):
...
python/paddle/fluid/tests/unittests/test_full_like_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.fluid.core as core
...
@@ -130,7 +130,7 @@ class TestFullLikeOp1(OpTest):
         self.dtype = np.float32

     def test_check_output(self):
-        self.check_output(check_eager=True, check_prim=True)
+        self.check_output(check_prim=True)

     def if_enable_cinn(self):
         pass
...

python/paddle/fluid/tests/unittests/test_fused_adam_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
...
@@ -172,7 +172,7 @@ class TestFusedAdamOp(OpTest):
     def test_check_output(self):
         paddle.enable_static()
         if paddle.is_compiled_with_cuda():
-            self.check_output()
+            self.check_output(check_dygraph=False)


 if __name__ == "__main__":
...
python/paddle/fluid/tests/unittests/test_fused_attention_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.incubate.nn.functional as incubate_f
...

python/paddle/fluid/tests/unittests/test_fused_bias_dropout_residual_layer_norm_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.incubate.nn.functional as incubate_f
...

python/paddle/fluid/tests/unittests/test_fused_ec_moe_op.py
@@ -15,7 +15,7 @@
 import unittest

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle
 import paddle.nn.functional as F
...

python/paddle/fluid/tests/unittests/test_fused_elemwise_activation_op.py
@@ -16,7 +16,7 @@ import unittest
 from functools import partial

 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest

 import paddle.fluid.core as core
...
python/paddle/fluid/tests/unittests/test_fused_emb_seq_pool_op.py
...
@@ -16,7 +16,7 @@ import platform
 import unittest
 import numpy as np
-from op_test import OpTest, skip_check_grad_ci
+from eager_op_test import OpTest, paddle_static_guard, skip_check_grad_ci
 import paddle
 import paddle.version as ver
...
@@ -105,32 +105,33 @@ class TestLookupTableOpWithPadding(TestFusedEmbeddingSeqPoolOp):
 class TestFusedEmbeddingSeqPoolApi(unittest.TestCase):
     def test_api(self):
-        if ver.mkl() == "ON" and 'Linux' in platform.platform():
-            import paddle.fluid as fluid
+        with paddle_static_guard():
+            if ver.mkl() == "ON" and 'Linux' in platform.platform():
+                import paddle.fluid as fluid
-            dict_size = 20
-            data_t = paddle.static.data(
-                name='word', shape=[-1, 1], dtype='int64', lod_level=1
-            )
-            padding_idx = np.random.randint(1, 10)
-            out = fluid.contrib.fused_embedding_seq_pool(
-                input=data_t,
-                size=[dict_size, 32],
-                param_attr='w',
-                padding_idx=padding_idx,
-                is_sparse=False,
-            )
+                dict_size = 20
+                data_t = paddle.static.data(
+                    name='word', shape=[-1, 1], dtype='int64', lod_level=1
+                )
+                padding_idx = np.random.randint(1, 10)
+                out = fluid.contrib.fused_embedding_seq_pool(
+                    input=data_t,
+                    size=[dict_size, 32],
+                    param_attr='w',
+                    padding_idx=padding_idx,
+                    is_sparse=False,
+                )
-            place = fluid.CPUPlace()
-            exe = fluid.Executor(place)
-            exe.run(fluid.default_startup_program())
-            # prepare input words' idx
-            x_tensor = fluid.core.LoDTensor()
-            idxs = np.random.randint(1, 10, (8)).astype("int64")
+                place = fluid.CPUPlace()
+                exe = fluid.Executor(place)
+                exe.run(fluid.default_startup_program())
+                # prepare input words' idx
+                x_tensor = fluid.core.LoDTensor()
+                idxs = np.random.randint(1, 10, (8)).astype("int64")
-            x_tensor.set(idxs, place)
-            x_tensor.set_recursive_sequence_lengths([[4, 4]])
-            ret = exe.run(feed={'word': x_tensor}, fetch_list=[out])
+                x_tensor.set(idxs, place)
+                x_tensor.set_recursive_sequence_lengths([[4, 4]])
+                ret = exe.run(feed={'word': x_tensor}, fetch_list=[out])

 if __name__ == "__main__":
...
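paddle_static_guard, newly imported from eager_op_test in the hunk above, is a context manager that runs its body in static-graph mode and switches back when the block exits, which is why the whole body of test_api gains one indentation level. A minimal, illustrative sketch of the same pattern follows; the layer, shapes and data are placeholders, not taken from this file:

import numpy as np
import paddle
from eager_op_test import paddle_static_guard


def static_snippet():
    with paddle_static_guard():
        # Build and run a static-graph program, as the migrated test_api does.
        x = paddle.static.data(name='x', shape=[-1, 4], dtype='float32')
        y = paddle.static.nn.fc(x, size=2)
        exe = paddle.static.Executor(paddle.CPUPlace())
        exe.run(paddle.static.default_startup_program())
        (out,) = exe.run(
            feed={'x': np.random.rand(3, 4).astype('float32')},
            fetch_list=[y],
        )
    # Dynamic-graph (eager) mode is active again once the guard exits.
    return out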
python/paddle/fluid/tests/unittests/test_fused_embedding_fc_lstm_op.py
...
@@ -15,7 +15,7 @@
 import unittest
 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest
 from test_lstm_op import ACTIVATION, lstm
...
python/paddle/fluid/tests/unittests/test_fused_fc_elementwise_layernorm_op.py
...
@@ -15,7 +15,7 @@
 import unittest
 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest
 from test_fc_op import MatrixGenerate, fc_refer
 from test_layer_norm_op import _reference_layer_norm_naive
...
@@ -70,7 +70,7 @@ class TestFusedFCElementwiseLayerNormOp(OpTest):
     def test_check_output(self):
         place = core.CUDAPlace(0)
-        self.check_output_with_place(place, atol=2e-3)
+        self.check_output_with_place(place, atol=2e-3, check_dygraph=False)

 class TestFusedFCElementwiseLayerNormOp2(TestFusedFCElementwiseLayerNormOp):
...
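The same one-argument change recurs in most of the fused-op files below: tests that validate a CUDA-only fused kernel through check_output_with_place keep their static-graph check and simply add check_dygraph=False, presumably because there is no eager-mode API to compare against. A skeleton of that pattern, deliberately marked as skipped because the op type, inputs and tolerances here are placeholders:

import unittest

from paddle.fluid import core
from eager_op_test import OpTest


@unittest.skip("illustrative skeleton only; see the real tests in this commit")
class TestSomeFusedCudaOp(OpTest):  # hypothetical class name
    def setUp(self):
        # The real tests set self.op_type, self.inputs and self.outputs
        # for the fused kernel being checked.
        pass

    def test_check_output(self):
        place = core.CUDAPlace(0)
        # Static-graph check on GPU only; the dygraph comparison is disabled.
        self.check_output_with_place(place, atol=2e-3, check_dygraph=False)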
python/paddle/fluid/tests/unittests/test_fused_feedforward_op.py
...
@@ -14,7 +14,7 @@
 import unittest
 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest
 import paddle
 import paddle.incubate.nn.functional as incubate_f
...
python/paddle/fluid/tests/unittests/test_fused_gate_attention_op.py
...
@@ -20,7 +20,11 @@ os.environ['FLAGS_new_einsum'] = "0"
 import unittest
 import numpy as np
-from op_test import OpTest, convert_float_to_uint16, convert_uint16_to_float
+from eager_op_test import (
+    OpTest,
+    convert_float_to_uint16,
+    convert_uint16_to_float,
+)
 from test_sparse_attention_op import get_cuda_version
 import paddle
...
python/paddle/fluid/tests/unittests/test_fused_gemm_epilogue_grad_op.py
...
@@ -16,7 +16,7 @@
 import unittest
 import numpy as np
-from op_test import OpTest, skip_check_grad_ci
+from eager_op_test import OpTest, skip_check_grad_ci
 import paddle
 import paddle.fluid.core as core
...
@@ -62,7 +62,9 @@ class TestFuseGemmEpilogueGradOpDXYBiasFP16(OpTest):
             self.place
         ):
             return
-        self.check_output_with_place(self.place, atol=self.atol)
+        self.check_output_with_place(
+            self.place, atol=self.atol, check_dygraph=False
+        )

 @skip_check_grad_ci(reason="no grap op")
...
@@ -121,7 +123,9 @@ class TestFuseGemmEpilogueGradOpDYBiasFP16(OpTest):
             self.place
         ):
             return
-        self.check_output_with_place(self.place, atol=self.atol)
+        self.check_output_with_place(
+            self.place, atol=self.atol, check_dygraph=False
+        )

 @skip_check_grad_ci(reason="no grap op")
...
@@ -180,7 +184,9 @@ class TestFuseGemmEpilogueGradOpDYFP16(OpTest):
             self.place
         ):
             return
-        self.check_output_with_place(self.place, atol=self.atol)
+        self.check_output_with_place(
+            self.place, atol=self.atol, check_dygraph=False
+        )

 @skip_check_grad_ci(reason="no grap op")
...
@@ -235,7 +241,9 @@ class TestFuseGemmEpilogueGradOpDXYFP16(OpTest):
             self.place
         ):
             return
-        self.check_output_with_place(self.place, atol=self.atol)
+        self.check_output_with_place(
+            self.place, atol=self.atol, check_dygraph=False
+        )

 @skip_check_grad_ci(reason="no grap op")
...
python/paddle/fluid/tests/unittests/test_fused_gemm_epilogue_op.py
...
@@ -16,7 +16,7 @@
 import unittest
 import numpy as np
-from op_test import OpTest, skip_check_grad_ci, skip_check_inplace_ci
+from eager_op_test import OpTest, skip_check_grad_ci, skip_check_inplace_ci
 import paddle
 import paddle.fluid.core as core
...
python/paddle/fluid/tests/unittests/test_fused_multi_transformer_op.py
...
@@ -16,7 +16,7 @@ import random
 import unittest
 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest
 import paddle
 import paddle.nn.functional as F
...
python/paddle/fluid/tests/unittests/test_fused_multihead_matmul_op.py
...
@@ -15,7 +15,7 @@
 import unittest
 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest
 from paddle.fluid import core
...
@@ -135,7 +135,7 @@ class TestFusedMultiHeadMatmulOp_biasqk2(OpTest):
     def test_check_output(self):
         place = core.CUDAPlace(0)
-        self.check_output_with_place(place, atol=2e-3)
+        self.check_output_with_place(place, atol=2e-3, check_dygraph=False)

 @unittest.skipIf(
...
@@ -239,7 +239,7 @@ class TestFusedMultiheadMatmulOp(OpTest):
     def test_check_output(self):
         place = core.CUDAPlace(0)
-        self.check_output_with_place(place, atol=2e-3)
+        self.check_output_with_place(place, atol=2e-3, check_dygraph=False)

 class TestFusedMultiHeadMatmulOp2(TestFusedMultiheadMatmulOp):
...
python/paddle/fluid/tests/unittests/test_fused_token_prune_op.py
...
@@ -15,7 +15,7 @@
 import unittest
 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest
 from paddle.framework import core
...
python/paddle/fluid/tests/unittests/test_fusion_gru_op.py
...
@@ -15,7 +15,7 @@
 import unittest
 import numpy as np
-from op_test import OpTest
+from eager_op_test import OpTest
 from paddle.fluid.tests.unittests.test_fusion_lstm_op import ACTIVATION, fc
 from paddle.fluid.tests.unittests.test_gru_op import gru
...
The diffs for the following files in this commit are collapsed in this view:

python/paddle/fluid/tests/unittests/test_fusion_lstm_op.py
python/paddle/fluid/tests/unittests/test_fusion_repeated_fc_relu_op.py
python/paddle/fluid/tests/unittests/test_fusion_seqconv_eltadd_relu_op.py
python/paddle/fluid/tests/unittests/test_fusion_seqexpand_concat_fc_op.py
python/paddle/fluid/tests/unittests/test_fusion_seqpool_concat_op.py
python/paddle/fluid/tests/unittests/test_fusion_seqpool_cvm_concat_op.py
python/paddle/fluid/tests/unittests/test_fusion_squared_mat_sub_op.py
python/paddle/fluid/tests/unittests/test_fusion_transpose_flatten_concat_op.py
python/paddle/fluid/tests/unittests/test_nearest_interp_op.py
python/paddle/vision/ops.py