PaddlePaddle / Paddle — commit b85af464 (unverified)

remove layers.tensor.argmin/argmax/assign/cast/concat/sums (#49944)

Authored by mhy-666 on Feb 14, 2023; committed via GitHub on Feb 14, 2023. Parent commit: 61a933ac.
Showing 144 changed files, with 381 additions and 976 deletions (+381 -976).
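Each removed fluid.layers tensor op has a direct paddle-namespace replacement, and the diffs below apply that substitution mechanically. As a minimal illustrative sketch of the mapping (the tensor values here are made up, not taken from any changed file):

import numpy as np
import paddle

x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]])
idx_max = paddle.argmax(x, axis=1)                      # replaces layers.argmax
idx_min = paddle.argmin(x, axis=1)                      # replaces layers.argmin
y = paddle.cast(x, dtype='float64')                     # replaces layers.cast
z = paddle.concat([x, x], axis=0)                       # replaces layers.concat
s = paddle.add_n([x, x])                                # replaces layers.sums
t = paddle.assign(np.array([[9.0]], dtype='float32'))   # replaces layers.assign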
python/paddle/distribution/categorical.py  +1 -1
python/paddle/distribution/distribution.py  +2 -2
python/paddle/distribution/normal.py  +2 -2
python/paddle/distribution/uniform.py  +6 -6
python/paddle/fluid/contrib/extend_optimizer/extend_optimizer_with_weight_decay.py  +1 -1
python/paddle/fluid/contrib/tests/test_multi_precision_fp16_train.py  +2 -2
python/paddle/fluid/contrib/tests/test_weight_decay_extend.py  +1 -1
python/paddle/fluid/dygraph/parallel.py  +1 -1
python/paddle/fluid/framework.py  +2 -2
python/paddle/fluid/incubate/fleet/tests/fleet_deep_ctr.py  +1 -1
python/paddle/fluid/layers/control_flow.py  +5 -5
python/paddle/fluid/layers/learning_rate_scheduler.py  +4 -4
python/paddle/fluid/layers/nn.py  +1 -1
python/paddle/fluid/layers/tensor.py  +2 -576
python/paddle/fluid/layers/utils.py  +4 -3
python/paddle/fluid/optimizer.py  +15 -17
python/paddle/fluid/tests/book/test_recommender_system.py  +4 -4
python/paddle/fluid/tests/book/test_word2vec_book.py  +2 -2
python/paddle/fluid/tests/unittests/auto_parallel/test_fp16_assign.py  +1 -1
python/paddle/fluid/tests/unittests/check_nan_inf_base.py  +1 -1
python/paddle/fluid/tests/unittests/collective/collective_sendrecv_op_array.py  +4 -12
python/paddle/fluid/tests/unittests/collective/fleet/hybrid_parallel_inference_helper.py  +4 -4
python/paddle/fluid/tests/unittests/dist_ctr.py  +1 -1
python/paddle/fluid/tests/unittests/dist_fleet_ctr.py  +1 -1
python/paddle/fluid/tests/unittests/dist_fleet_heter_pipeline_ctr.py  +2 -2
python/paddle/fluid/tests/unittests/dist_fleet_simnet_bow.py  +1 -1
python/paddle/fluid/tests/unittests/dist_fleet_sparse_embedding_ctr.py  +2 -1
python/paddle/fluid/tests/unittests/dist_transformer.py  +6 -6
python/paddle/fluid/tests/unittests/dist_word2vec.py  +2 -2
python/paddle/fluid/tests/unittests/dygraph_to_static/bert_dygraph_model.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/seq2seq_dygraph_model.py  +6 -6
python/paddle/fluid/tests/unittests/dygraph_to_static/simnet_dygraph_model.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_ast_util.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_basic_api_transformation.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_bmn.py  +16 -16
python/paddle/fluid/tests/unittests/dygraph_to_static/test_convert_call.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_lac.py  +2 -2
python/paddle/fluid/tests/unittests/dygraph_to_static/test_list.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_ptb_lm.py  +4 -4
python/paddle/fluid/tests/unittests/dygraph_to_static/test_reinforcement_learning.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_sentiment.py  +2 -2
python/paddle/fluid/tests/unittests/dygraph_to_static/transformer_dygraph_model.py  +3 -3
python/paddle/fluid/tests/unittests/dygraph_to_static/yolov3.py  +4 -6
python/paddle/fluid/tests/unittests/fleet_heter_ps_training.py  +2 -2
python/paddle/fluid/tests/unittests/fleet_ps_training.py  +2 -2
python/paddle/fluid/tests/unittests/ipu/test_arg_max_op_ipu.py  +1 -1
python/paddle/fluid/tests/unittests/ipu/test_arg_min_op_ipu.py  +1 -1
python/paddle/fluid/tests/unittests/ipu/test_concat_op_ipu.py  +1 -1
python/paddle/fluid/tests/unittests/ir/inference/test_trt_slice_plugin.py  +2 -2
python/paddle/fluid/tests/unittests/ir/inference/test_trt_subgraph_pass.py  +1 -1
python/paddle/fluid/tests/unittests/ir/inference/test_trt_transpose_flatten_concat_fuse_pass.py  +1 -1
python/paddle/fluid/tests/unittests/ir/test_ir_fusion_group_pass.py  +7 -7
python/paddle/fluid/tests/unittests/mlu/sync_batch_norm_op_mlu.py  +2 -2
python/paddle/fluid/tests/unittests/mlu/test_cast_op_mlu.py  +1 -1
python/paddle/fluid/tests/unittests/mlu/test_one_hot_v2_op_mlu.py  +1 -1
python/paddle/fluid/tests/unittests/mlu/test_where_op_mlu.py  +2 -2
python/paddle/fluid/tests/unittests/npu/test_assign_value_op_npu.py  +1 -1
python/paddle/fluid/tests/unittests/npu/test_concat_op_npu.py  +2 -2
python/paddle/fluid/tests/unittests/npu/test_one_hot_v2_op_npu.py  +1 -1
python/paddle/fluid/tests/unittests/npu/test_stack_op_npu.py  +2 -2
python/paddle/fluid/tests/unittests/npu/test_while_op_npu.py  +9 -9
python/paddle/fluid/tests/unittests/sequence/test_sequence_pad_op.py  +7 -9
python/paddle/fluid/tests/unittests/standalone_executor/test_standalone_multiply_write.py  +2 -2
python/paddle/fluid/tests/unittests/test_array_read_write_op.py  +4 -6
python/paddle/fluid/tests/unittests/test_assign_op.py  +7 -7
python/paddle/fluid/tests/unittests/test_assign_value_op.py  +1 -2
python/paddle/fluid/tests/unittests/test_cast_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_communicator_geo.py  +2 -1
python/paddle/fluid/tests/unittests/test_compare_op.py  +6 -6
python/paddle/fluid/tests/unittests/test_compare_reduce_op.py  +2 -3
python/paddle/fluid/tests/unittests/test_concat_op.py  +17 -14
python/paddle/fluid/tests/unittests/test_conditional_block.py  +1 -1
python/paddle/fluid/tests/unittests/test_dataset.py  +2 -2
python/paddle/fluid/tests/unittests/test_dist_fleet_heter_program.py  +2 -2
python/paddle/fluid/tests/unittests/test_dist_fleet_minimize.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps11.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps12.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps13.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps2.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps3.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps4.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps5.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps6.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_sparse_embedding_ctr.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_spmt.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_transpiler.py  +1 -3
python/paddle/fluid/tests/unittests/test_dynamic_rnn_stop_gradient.py  +4 -4
python/paddle/fluid/tests/unittests/test_eager_deletion_padding_rnn.py  +9 -9
python/paddle/fluid/tests/unittests/test_eager_deletion_recurrent_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_eager_deletion_while_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_embedding_id_stop_gradient.py  +1 -1
python/paddle/fluid/tests/unittests/test_fetch_var.py  +1 -2
python/paddle/fluid/tests/unittests/test_fleet.py  +1 -1
python/paddle/fluid/tests/unittests/test_fleet_nocvm_1.py  +1 -1
python/paddle/fluid/tests/unittests/test_fleet_rolemaker.py  +1 -1
python/paddle/fluid/tests/unittests/test_fleet_rolemaker_2.py  +1 -1
python/paddle/fluid/tests/unittests/test_fleet_rolemaker_3.py  +1 -1
python/paddle/fluid/tests/unittests/test_fleet_unitaccessor.py  +1 -1
python/paddle/fluid/tests/unittests/test_imperative_auto_prune.py  +7 -7
python/paddle/fluid/tests/unittests/test_imperative_deepcf.py  +2 -4
python/paddle/fluid/tests/unittests/test_imperative_ocr_attention_model.py  +3 -5
python/paddle/fluid/tests/unittests/test_imperative_ptb_rnn.py  +4 -4
python/paddle/fluid/tests/unittests/test_imperative_save_load_v2.py  +4 -4
python/paddle/fluid/tests/unittests/test_imperative_star_gan_with_gradient_penalty.py  +1 -1
python/paddle/fluid/tests/unittests/test_layers.py  +3 -3
python/paddle/fluid/tests/unittests/test_math_op_patch.py  +1 -1
python/paddle/fluid/tests/unittests/test_mix_precision_all_reduce_fuse.py  +2 -2
python/paddle/fluid/tests/unittests/test_nn_functional_hot_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_nonzero_api.py  +2 -2
python/paddle/fluid/tests/unittests/test_one_hot_v2_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_optimizer_grad.py  +1 -1
python/paddle/fluid/tests/unittests/test_recurrent_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_reduce_op.py  +4 -4
python/paddle/fluid/tests/unittests/test_regularizer.py  +1 -1
python/paddle/fluid/tests/unittests/test_regularizer_api.py  +1 -1
python/paddle/fluid/tests/unittests/test_stack_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_static_save_load.py  +5 -5
python/paddle/fluid/tests/unittests/test_sum_op.py  +6 -5
python/paddle/fluid/tests/unittests/test_switch.py  +6 -6
python/paddle/fluid/tests/unittests/test_sync_batch_norm_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_tensor_array_to_tensor.py  +4 -4
python/paddle/fluid/tests/unittests/test_variable.py  +1 -1
python/paddle/fluid/tests/unittests/test_weight_decay.py  +1 -1
python/paddle/fluid/tests/unittests/test_where_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_while_op.py  +5 -4
python/paddle/fluid/tests/unittests/xpu/test_assign_value_op_xpu.py  +1 -2
python/paddle/fluid/tests/unittests/xpu/test_cast_op_xpu.py  +1 -1
python/paddle/fluid/tests/unittests/xpu/test_one_hot_v2_op_xpu.py  +1 -1
python/paddle/fluid/tests/unittests/xpu/test_sum_op_xpu.py  +6 -5
python/paddle/fluid/tests/unittests/xpu/test_while_op_xpu.py  +3 -3
python/paddle/fluid/variable_index.py  +3 -7
python/paddle/geometric/message_passing/utils.py  +1 -2
python/paddle/incubate/autograd/composite_rules.py  +8 -4
python/paddle/incubate/autograd/primitives.py  +4 -4
python/paddle/incubate/operators/graph_send_recv.py  +2 -2
python/paddle/incubate/optimizer/modelaverage.py  +5 -5
python/paddle/jit/dy2static/convert_operators.py  +5 -5
python/paddle/jit/dy2static/utils.py  +1 -2
python/paddle/nn/clip.py  +6 -6
python/paddle/nn/functional/loss.py  +1 -1
python/paddle/nn/layer/rnn.py  +1 -1
python/paddle/static/amp/decorator.py  +1 -2
python/paddle/static/nn/control_flow.py  +2 -3
python/paddle/distribution/categorical.py
@@ -112,7 +112,7 @@ class Categorical(distribution.Distribution):
             self.dtype = logits.dtype
             self.logits = self._to_tensor(logits)[0]
             if self.dtype != convert_dtype(self.logits.dtype):
-                self.logits = tensor.cast(self.logits, dtype=self.dtype)
+                self.logits = paddle.cast(self.logits, dtype=self.dtype)
         dist_sum = paddle.sum(self.logits, axis=-1, keepdim=True)
         self._prob = self.logits / dist_sum
python/paddle/distribution/distribution.py
@@ -200,7 +200,7 @@ class Distribution:
         for arg in numpy_args:
             arg_broadcasted, _ = np.broadcast_arrays(arg, tmp)
             arg_variable = paddle.tensor.create_tensor(dtype=dtype)
-            tensor.assign(arg_broadcasted, arg_variable)
+            paddle.assign(arg_broadcasted, arg_variable)
             variable_args.append(arg_variable)

         return tuple(variable_args)

@@ -235,7 +235,7 @@ class Distribution:
             warnings.warn(
                 "dtype of input 'value' needs to be the same as parameters of distribution class. dtype of 'value' will be converted."
             )
-            return tensor.cast(value, dtype=param.dtype)
+            return paddle.cast(value, dtype=param.dtype)
         return value

     def _probs_to_logits(self, probs, is_binary=False):
python/paddle/distribution/normal.py
@@ -132,8 +132,8 @@ class Normal(distribution.Distribution):
             # pylint: disable=unbalanced-tuple-unpacking
             self.loc, self.scale = self._to_tensor(loc, scale)
             if self.dtype != convert_dtype(self.loc.dtype):
-                self.loc = tensor.cast(self.loc, dtype=self.dtype)
-                self.scale = tensor.cast(self.scale, dtype=self.dtype)
+                self.loc = paddle.cast(self.loc, dtype=self.dtype)
+                self.scale = paddle.cast(self.scale, dtype=self.dtype)
         super().__init__(self.loc.shape)

     @property
python/paddle/distribution/uniform.py
@@ -137,8 +137,8 @@ class Uniform(distribution.Distribution):
             # pylint: disable=unbalanced-tuple-unpacking
             self.low, self.high = self._to_tensor(low, high)
             if self.dtype != convert_dtype(self.low.dtype):
-                self.low = tensor.cast(self.low, dtype=self.dtype)
-                self.high = tensor.cast(self.high, dtype=self.dtype)
+                self.low = paddle.cast(self.low, dtype=self.dtype)
+                self.high = paddle.cast(self.high, dtype=self.dtype)
         super().__init__(self.low.shape)

@@ -218,8 +218,8 @@ class Uniform(distribution.Distribution):
         name = self.name + '_log_prob'
         lb_bool = self.low < value
         ub_bool = value < self.high
-        lb = tensor.cast(lb_bool, dtype=value.dtype)
-        ub = tensor.cast(ub_bool, dtype=value.dtype)
+        lb = paddle.cast(lb_bool, dtype=value.dtype)
+        ub = paddle.cast(ub_bool, dtype=value.dtype)
         return paddle.subtract(
             paddle.log(lb * ub), paddle.log(self.high - self.low), name=name
         )

@@ -245,8 +245,8 @@ class Uniform(distribution.Distribution):
         name = self.name + '_probs'
         lb_bool = self.low < value
         ub_bool = value < self.high
-        lb = tensor.cast(lb_bool, dtype=value.dtype)
-        ub = tensor.cast(ub_bool, dtype=value.dtype)
+        lb = paddle.cast(lb_bool, dtype=value.dtype)
+        ub = paddle.cast(ub_bool, dtype=value.dtype)
         return paddle.divide((lb * ub), (self.high - self.low), name=name)

     def entropy(self):
python/paddle/fluid/contrib/extend_optimizer/extend_optimizer_with_weight_decay.py
@@ -96,7 +96,7 @@ class DecoupledWeightDecay:
                     [param, grad]
                 ), framework.name_scope('weight decay'):
                     updated_param = paddle.subtract(x=param, y=scaled_param)
-                    paddle.fluid.layers.assign(input=updated_param, output=param)
+                    paddle.assign(updated_param, output=param)

         optimize_ops = self.apply_optimize(
             loss=loss,
python/paddle/fluid/contrib/tests/test_multi_precision_fp16_train.py
@@ -295,9 +295,9 @@ class TestAmpWithNonIterableDataLoader(unittest.TestCase):
             )
             with fluid.layers.control_flow.Switch() as switch:
                 with switch.case(label != zero_var):
-                    fluid.layers.assign(input=zero_var, output=label)
+                    paddle.assign(zero_var, output=label)
                 with switch.default():
-                    fluid.layers.assign(input=one_var, output=label)
+                    paddle.assign(one_var, output=label)

             net = resnet_cifar10(image)
             logits = paddle.static.nn.fc(
python/paddle/fluid/contrib/tests/test_weight_decay_extend.py
@@ -182,7 +182,7 @@ class TestWeightDecay(unittest.TestCase):
             for params in param_list:
                 updated_p = paddle.subtract(x=params[0], y=params[1])
-                fluid.layers.assign(input=updated_p, output=params[0])
+                paddle.assign(updated_p, output=params[0])

             optimizer.apply_optimize(avg_cost, startup_prog, params_grads)
python/paddle/fluid/dygraph/parallel.py
@@ -65,7 +65,7 @@ def _coalesce_tensors(var_groups):
             flattened_vars.append(
                 paddle.reshape(x=g_var, shape=[np.prod(g_var.shape)])
             )
-        coalesced_grad = nn.concat(flattened_vars)
+        coalesced_grad = paddle.concat(flattened_vars)
         coalesced_grads_and_grad_vars.append(
             [coalesced_grad, grad_vars, g_var_shapes]
         )
python/paddle/fluid/framework.py
@@ -1703,7 +1703,7 @@ class Variable(metaclass=VariableMetaClass):
                     tmp = fluid.dygraph.base.to_variable(x)
                     tmp.stop_gradient=False
                     inputs2.append(tmp)
-                ret2 = fluid.layers.sums(inputs2)
+                ret2 = paddle.add_n(inputs2)
                 loss2 = paddle.sum(ret2)
                 loss2.backward()
                 print(loss2.gradient())

@@ -1751,7 +1751,7 @@ class Variable(metaclass=VariableMetaClass):
                     tmp = fluid.dygraph.base.to_variable(x)
                     tmp.stop_gradient=False
                     inputs2.append(tmp)
-                ret2 = fluid.layers.sums(inputs2)
+                ret2 = paddle.add_n(inputs2)
                 loss2 = paddle.sum(ret2)
                 loss2.backward()
                 print(loss2.gradient())
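The two docstring fixes above swap fluid.layers.sums for paddle.add_n, which sums a list of same-shaped tensors elementwise. A minimal standalone check of the replacement (values illustrative):

import paddle

inputs = [paddle.to_tensor([1.0, 2.0]), paddle.to_tensor([3.0, 4.0])]
ret = paddle.add_n(inputs)   # elementwise sum over the list -> [4.0, 6.0]
loss = paddle.sum(ret)       # reduce to a scalar -> 10.0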
python/paddle/fluid/incubate/fleet/tests/fleet_deep_ctr.py
@@ -144,7 +144,7 @@ def model():
         input=lr_embbding, pool_type="sum"
     )

-    merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+    merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)

     predict = paddle.static.nn.fc(
         x=merge_layer, size=2, activation='softmax'
     )
     acc = paddle.static.accuracy(input=predict, label=label)
python/paddle/fluid/layers/control_flow.py
@@ -15,7 +15,7 @@
 from ..wrapped_decorator import signature_safe_contextmanager
 from .layer_function_generator import templatedoc
-from .tensor import assign, cast, fill_constant
+from .tensor import fill_constant
 from .. import core
 from ..framework import (
     Program,

@@ -1058,7 +1058,7 @@ def assign_skip_lod_tensor_array(input, output):
     if isinstance(output, Variable) and isinstance(
         input, support_ret_buildin_type
     ):
-        assign(input, output)
+        paddle.assign(input, output)
     else:
         output = input
     return

@@ -1069,7 +1069,7 @@ def assign_skip_lod_tensor_array(input, output):
             main_program.current_block().parent_idx
         )
         if parent_block and not parent_block._find_var_recursive(input.name):
-            assign(input, output)
+            paddle.assign(input, output)
     else:
         if (
             isinstance(output, Variable)

@@ -1081,7 +1081,7 @@ def assign_skip_lod_tensor_array(input, output):
                     input.shape, output.shape
                 )
             )
-        assign(input, output)
+        paddle.assign(input, output)

 # (TODO: Mine) There exists dependency (jit.dy2static.convert_operators). It will be removed later.

@@ -1195,7 +1195,7 @@ def while_loop(cond, body, loop_vars, is_test=False, name=None):
             )
         now_cond = cond(*output_vars)
         map_structure(assign_skip_lod_tensor_array, output_vars, loop_vars)
-        assign(now_cond, pre_cond)
+        paddle.assign(now_cond, pre_cond)
     return loop_vars
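In the while_loop hunk, paddle.assign(now_cond, pre_cond) preserves the old semantics: assign writes the value of its first argument into an already-existing variable instead of creating a new one, which is how the loop-carried condition gets refreshed on each iteration. An illustrative eager-mode sketch of that write-back pattern (variable names are made up):

import paddle

pre_cond = paddle.to_tensor(True)
i = paddle.to_tensor(3)
now_cond = i < paddle.to_tensor(10)   # recompute the condition
paddle.assign(now_cond, pre_cond)     # write it back into the existing variable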
python/paddle/fluid/layers/learning_rate_scheduler.py
@@ -55,7 +55,7 @@ def _decay_step_counter(begin=0):
     global_step = nn.autoincreased_step_counter(
         counter_name='@LR_DECAY_COUNTER@', begin=begin, step=1
     )
-    global_step = tensor.cast(global_step, 'float32')
+    global_step = paddle.cast(global_step, 'float32')
     return global_step

@@ -361,7 +361,7 @@ def polynomial_decay(
             with control_flow.Switch() as switch:
                 with switch.case(global_step == zero_var):
-                    tensor.assign(input=one_var, output=div_res)
+                    paddle.assign(one_var, output=div_res)
                 decay_steps = decay_steps * div_res
         else:
             decay_steps_var = tensor.fill_constant(

@@ -595,11 +595,11 @@ def linear_lr_warmup(learning_rate, warmup_steps, start_lr, end_lr):
                 decayed_lr = start_lr + linear_step * (
                     global_step / float(warmup_steps)
                 )
-                tensor.assign(decayed_lr, lr)
+                paddle.assign(decayed_lr, lr)
             with switch.default():
                 if not isinstance(learning_rate, Variable):
                     learning_rate = tensor.fill_constant(
                         shape=[1], dtype=dtype, value=float(learning_rate)
                     )
-                tensor.assign(learning_rate, lr)
+                paddle.assign(learning_rate, lr)
     return lr
python/paddle/fluid/layers/nn.py
@@ -41,7 +41,7 @@ from .layer_function_generator import (
     templatedoc,
     _generate_doc_string_,
 )
-from .tensor import concat, assign, fill_constant, zeros
+from .tensor import fill_constant, zeros
 from . import utils
 from .. import unique_name
 from .. import core
python/paddle/fluid/layers/tensor.py
This diff is collapsed on the source page and not shown (+2 -576 per the file list above).
python/paddle/fluid/layers/utils.py
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import paddle
 import collections
 import copy
 import numpy as np

@@ -387,7 +388,7 @@ def _contain_var(list_or_tuple):

 def get_shape_tensor_inputs(inputs, attrs, shape, op_type):
-    from .tensor import fill_constant, cast
+    from .tensor import fill_constant

     def _get_attr_shape(list_shape):
         attr_shape = []

@@ -411,7 +412,7 @@ def get_shape_tensor_inputs(inputs, attrs, shape, op_type):
                 '(When type of shape in' + op_type + 'is list or tuple.)',
             )
             if convert_dtype(dim.dtype) == 'int64':
-                dim = cast(x=dim, dtype='int32')
+                dim = paddle.cast(x=dim, dtype='int32')
             shape_tensor_list.append(dim)
         else:
             temp_out = fill_constant([1], 'int32', dim, force_cpu=True)

@@ -428,7 +429,7 @@ def get_shape_tensor_inputs(inputs, attrs, shape, op_type):
             '(When type of shape in' + op_type + ' is Variable.)',
         )
         if convert_dtype(shape.dtype) == 'int64':
-            shape = cast(shape, 'int32')
+            shape = paddle.cast(shape, 'int32')
         inputs["ShapeTensor"] = shape
     elif isinstance(shape, (list, tuple)):
         attrs["shape"] = _get_attr_shape(shape)
python/paddle/fluid/optimizer.py
@@ -3920,14 +3920,14 @@ class ModelAverage(Optimizer):
                 self._get_accumulator('num_updates', param)
             )
             # backup param value to grad
-            layers.assign(input=param, output=grad)
+            paddle.assign(param, output=grad)
             # param = (sum_1 + sum_2 + sum_3) / (num_accumulates + old_num_accumulates)
             tmp = paddle.add_n([num_accumulates, old_num_accumulates])
             sum = paddle.add_n([sum_1, sum_2, sum_3])
-            tmp = layers.cast(
+            tmp = paddle.cast(
                 x=tmp, dtype='float32' if self._dtype is None else self._dtype
             )
-            sum = layers.cast(
+            sum = paddle.cast(
                 x=sum, dtype='float32' if self._dtype is None else self._dtype
             )
             paddle.assign(paddle.divide(sum, tmp), output=param)

@@ -3935,7 +3935,7 @@ class ModelAverage(Optimizer):
     def _add_average_restore_op(self, block, param_grad):
         param = block._clone_variable(param_grad[0])
         grad = block._clone_variable(param_grad[1])
-        layers.assign(input=grad, output=param)
+        paddle.assign(grad, output=param)

     def _append_average_accumulate_op(self, param):
         self.helper = LayerHelper("average_accumulate")

@@ -4229,15 +4229,13 @@ class ExponentialMovingAverage:
                 param = block._clone_variable(param)
                 tmp = block._clone_variable(tmp)
                 ema = block._clone_variable(self._ema_vars[param.name])
-                layers.assign(input=param, output=tmp)
+                paddle.assign(param, output=tmp)
                 # bias correction
                 with layers.control_flow.Switch() as switch:
                     with switch.case(global_step > 0):
-                        layers.assign(
-                            output=param, input=ema / (1.0 - decay_pow)
-                        )
+                        paddle.assign(ema / (1.0 - decay_pow), output=param)
                     with switch.default():
-                        layers.assign(output=param, input=ema)
+                        paddle.assign(ema, output=param)

         self.restore_program = Program()
         block = self.restore_program.global_block()

@@ -4245,7 +4243,7 @@ class ExponentialMovingAverage:
             for param, tmp in self._params_tmps:
                 tmp = block._clone_variable(tmp)
                 param = block._clone_variable(param)
-                layers.assign(input=tmp, output=param)
+                paddle.assign(tmp, output=param)

     def _get_ema_decay(self):
         with default_main_program()._lr_schedule_guard():

@@ -4261,9 +4259,9 @@ class ExponentialMovingAverage:
                 decay_t = (self._thres_steps + 1.0) / (self._thres_steps + 10.0)
                 with layers.control_flow.Switch() as switch:
                     with switch.case(decay_t < self._decay):
-                        layers.tensor.assign(decay_t, decay_var)
+                        paddle.assign(decay_t, decay_var)
                     with switch.default():
-                        layers.tensor.assign(
+                        paddle.assign(
                             np.array([self._decay], dtype=np.float32), decay_var
                         )
         return decay_var

@@ -4276,7 +4274,7 @@ class ExponentialMovingAverage:
             dtype='int64',
             persistable=True,
         )
-        global_step = layers.cast(global_step, "float32")
+        global_step = paddle.cast(global_step, "float32")
         decay_var = block._clone_variable(self._decay_var)
         decay_pow_acc = paddle.pow(decay_var, global_step)
         return decay_pow_acc, global_step

@@ -4313,7 +4311,7 @@ class ExponentialMovingAverage:
                 ema_t = param_ema * self._decay_var + param * (
                     1 - self._decay_var
                 )
-                layers.assign(input=ema_t, output=param_ema)
+                paddle.assign(ema_t, output=param_ema)
             # for fp16 params
             for param_ema, master_ema in param_master_emas:

@@ -7272,7 +7270,7 @@ class LookaheadOptimizer:
             for param_name in params:
                 fast_var = main_block.var(param_name)
                 slow_var = param_to_slow[param_name]
-                layers.assign(input=fast_var, output=slow_var)
+                paddle.assign(fast_var, output=slow_var)

             with switch.case(mod == zero_var):
                 for param_name in params:
                     fast_var = main_block.var(param_name)

@@ -7283,8 +7281,8 @@ class LookaheadOptimizer:
                             slow_var, paddle.subtract(one_var, alpha)
                         ),
                     )
-                    layers.assign(input=tmp_var, output=slow_var)
-                    layers.assign(input=tmp_var, output=fast_var)
+                    paddle.assign(tmp_var, output=slow_var)
+                    paddle.assign(tmp_var, output=fast_var)
             with switch.default():
                 pass
         return mini_out
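Note the keyword change running through these hunks: fluid.layers.assign took input=/output=, while paddle.assign takes the value positionally (its first parameter is named x) plus an optional output=, which is why the input= keyword disappears everywhere. A sketch of the bias-corrected EMA restore rewritten above, with illustrative values:

import paddle

param = paddle.ones([2])
ema = paddle.full([2], 0.9)
decay_pow = paddle.to_tensor(0.5)

# param <- ema / (1 - decay_pow), written in place into the existing param
paddle.assign(ema / (1.0 - decay_pow), output=param)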
python/paddle/fluid/tests/book/test_recommender_system.py
@@ -91,8 +91,8 @@ def get_usr_combined_features():
     usr_job_fc = paddle.static.nn.fc(x=usr_job_emb, size=16)

-    concat_embed = layers.concat(
-        input=[usr_fc, usr_gender_fc, usr_age_fc, usr_job_fc], axis=1
+    concat_embed = paddle.concat(
+        [usr_fc, usr_gender_fc, usr_age_fc, usr_job_fc], axis=1
     )

     usr_combined_features = paddle.static.nn.fc(

@@ -150,8 +150,8 @@ def get_mov_combined_features():
         pool_type="sum",
     )

-    concat_embed = layers.concat(
-        input=[mov_fc, mov_categories_hidden, mov_title_conv], axis=1
+    concat_embed = paddle.concat(
+        [mov_fc, mov_categories_hidden, mov_title_conv], axis=1
     )

     # FIXME(dzh) : need tanh operator
python/paddle/fluid/tests/book/test_word2vec_book.py
@@ -87,8 +87,8 @@ def train(
         param_attr='shared_w',
     )

-    concat_embed = fluid.layers.concat(
-        input=[embed_first, embed_second, embed_third, embed_forth], axis=1
+    concat_embed = paddle.concat(
+        [embed_first, embed_second, embed_third, embed_forth], axis=1
     )
     hidden1 = paddle.static.nn.fc(
         x=concat_embed, size=HIDDEN_SIZE, activation='sigmoid'
python/paddle/fluid/tests/unittests/auto_parallel/test_fp16_assign.py
@@ -58,7 +58,7 @@ def make_program():
         )
         where_1 = paddle.where(y > 1, y, out1)
-        paddle.fluid.layers.assign(where_1, where_0)
+        paddle.assign(where_1, where_0)
     return main_program, start_program
python/paddle/fluid/tests/unittests/check_nan_inf_base.py
@@ -53,7 +53,7 @@ def net():
     zero = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
     # test float16 value
-    fp16_zero = fluid.layers.cast(zero, dtype='float16')
+    fp16_zero = paddle.cast(zero, dtype='float16')

     y = y + zero
python/paddle/fluid/tests/unittests/collective/collective_sendrecv_op_array.py
@@ -35,19 +35,11 @@ class TestCollectiveSendRecv(TestCollectiveRunnerBase):
             )
             tindata.desc.set_need_check_feed(False)
             if self.rank == 0:
-                data1 = fluid.layers.assign(
-                    np.array([[0, 1, 2]], dtype='float32')
-                )
-                data2 = fluid.layers.assign(
-                    np.array([[3, 4, 5]], dtype='float32')
-                )
+                data1 = paddle.assign(np.array([[0, 1, 2]], dtype='float32'))
+                data2 = paddle.assign(np.array([[3, 4, 5]], dtype='float32'))
             elif self.rank == 1:
-                data1 = fluid.layers.assign(
-                    np.array([[3, 4, 5]], dtype='float32')
-                )
-                data2 = fluid.layers.assign(
-                    np.array([[0, 1, 2]], dtype='float32')
-                )
+                data1 = paddle.assign(np.array([[3, 4, 5]], dtype='float32'))
+                data2 = paddle.assign(np.array([[0, 1, 2]], dtype='float32'))
             tensor_array = paddle.tensor.create_array(dtype='float32')
             i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
             paddle.tensor.array_write(data1, i, tensor_array)
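As with the old fluid.layers.assign, paddle.assign also accepts a numpy array directly and materializes it as a tensor, which is why each call in the test above collapses onto one line. Illustrative:

import numpy as np
import paddle

data = paddle.assign(np.array([[0, 1, 2]], dtype='float32'))  # tensor of shape [1, 3]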
python/paddle/fluid/tests/unittests/collective/fleet/hybrid_parallel_inference_helper.py
@@ -122,19 +122,19 @@ class TestHybridParallelInferenceHelperClass(unittest.TestCase):
                     # update cond and assign to cond_int, we will sync cond_int
                     paddle.assign(paddle.less_than(x=step_idx, y=max_len), cond)
-                    layers.assign(layers.cast(cond, dtype="int32"), cond_int)
+                    paddle.assign(paddle.cast(cond, dtype="int32"), cond_int)

                 with paddle.fluid.device_guard(f'{device}:all'):
                     # the code below must at end of while block and exists in device:all
-                    layers.assign(layers.cast(cond_int, dtype='bool'), cond)
+                    paddle.assign(paddle.cast(cond_int, dtype='bool'), cond)

             with paddle.fluid.device_guard(f'{device}:all'):
                 out = paddle.tensor.create_array(data.dtype)
-                layers.assign(data, out)
+                paddle.assign(data, out)

             with paddle.fluid.device_guard(f'{device}:all'):
                 # use a empty lod_tensor_array to clear lod_tensor_array
-                layers.assign(paddle.tensor.create_array(data.dtype), data)
+                paddle.assign(paddle.tensor.create_array(data.dtype), data)

         helper = HybridParallelInferenceHelper(
             startup_program,
python/paddle/fluid/tests/unittests/dist_ctr.py
@@ -95,7 +95,7 @@ class TestDistCTR2x2(TestDistRunnerBase):
             input=lr_embbding, pool_type="sum"
         )

-        merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+        merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)

         predict = paddle.static.nn.fc(
             x=merge_layer, size=2, activation='softmax'
python/paddle/fluid/tests/unittests/dist_fleet_ctr.py
@@ -144,7 +144,7 @@ class TestDistCTR2x2(FleetDistRunnerBase):
             input=lr_embbding, pool_type="sum"
         )

-        merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+        merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)

         predict = paddle.static.nn.fc(
             x=merge_layer, size=2, activation='softmax'
python/paddle/fluid/tests/unittests/dist_fleet_heter_pipeline_ctr.py
@@ -116,8 +116,8 @@ class TestHeterPipelinePsCTR2x2(FleetDistHeterRunnerBase):
             dnn_out = fc

         with fluid.device_guard("cpu"):
-            merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
-            label = fluid.layers.cast(label, dtype="int64")
+            merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)
+            label = paddle.cast(label, dtype="int64")
             predict = paddle.static.nn.fc(
                 x=merge_layer, size=2, activation='softmax'
             )
python/paddle/fluid/tests/unittests/dist_fleet_simnet_bow.py
@@ -55,7 +55,7 @@ def fake_simnet_reader():

 def get_acc(cos_q_nt, cos_q_pt, batch_size):
     cond = paddle.less_than(cos_q_nt, cos_q_pt)
-    cond = fluid.layers.cast(cond, dtype='float64')
+    cond = paddle.cast(cond, dtype='float64')
     cond_3 = paddle.sum(cond)
     acc = paddle.divide(
         cond_3,
python/paddle/fluid/tests/unittests/dist_fleet_sparse_embedding_ctr.py
@@ -134,7 +134,8 @@ class TestDistCTR2x2(FleetDistRunnerBase):
         lr_pool = paddle.static.nn.sequence_lod.sequence_pool(
             input=lr_embbding, pool_type="sum"
         )
-        merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+        merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)

         predict = paddle.static.nn.fc(
             x=merge_layer, size=2, activation='softmax'
         )
python/paddle/fluid/tests/unittests/dist_transformer.py
@@ -1193,8 +1193,8 @@ def multi_head_attention(
     q, k, v = __compute_qkv(queries, keys, values, n_head, d_key, d_value)

     if cache is not None:  # use cache and concat time steps
-        k = cache["k"] = layers.concat([cache["k"], k], axis=1)
-        v = cache["v"] = layers.concat([cache["v"], v], axis=1)
+        k = cache["k"] = paddle.concat([cache["k"], k], axis=1)
+        v = cache["v"] = paddle.concat([cache["v"], v], axis=1)

     q = __split_heads(q, n_head)
     k = __split_heads(k, n_head)

@@ -1858,11 +1858,11 @@ def fast_decode(
             # update states
             layers.array_write(selected_ids, i=step_idx, array=ids)
             layers.array_write(selected_scores, i=step_idx, array=scores)
-            layers.assign(pre_src_attn_bias, trg_src_attn_bias)
-            layers.assign(pre_enc_output, enc_output)
+            paddle.assign(pre_src_attn_bias, trg_src_attn_bias)
+            paddle.assign(pre_enc_output, enc_output)
             for i in range(n_layer):
-                layers.assign(pre_caches[i]["k"], caches[i]["k"])
-                layers.assign(pre_caches[i]["v"], caches[i]["v"])
+                paddle.assign(pre_caches[i]["k"], caches[i]["k"])
+                paddle.assign(pre_caches[i]["v"], caches[i]["v"])

             length_cond = paddle.less_than(x=step_idx, y=max_len)
             finish_cond = paddle.logical_not(layers.is_empty(x=selected_ids))
             paddle.logical_and(x=length_cond, y=finish_cond, out=cond)
python/paddle/fluid/tests/unittests/dist_word2vec.py
@@ -75,8 +75,8 @@ class TestDistWord2vec2x2(TestDistRunnerBase):
                 ),
             )

-            concat_embed = fluid.layers.concat(
-                input=[embed_first, embed_second, embed_third, embed_forth],
+            concat_embed = paddle.concat(
+                [embed_first, embed_second, embed_third, embed_forth],
                 axis=1,
             )
             hidden1 = paddle.static.nn.fc(
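The concat rewrites drop the input= keyword for the same reason as assign: paddle.concat takes the list of tensors as its first positional argument (named x) and joins them along axis. Illustrative:

import paddle

a = paddle.ones([2, 3])
b = paddle.zeros([2, 3])
c = paddle.concat([a, b], axis=1)   # shape [2, 6]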
python/paddle/fluid/tests/unittests/dygraph_to_static/bert_dygraph_model.py
@@ -384,7 +384,7 @@ class PretrainModelLayer(Layer):
         mask_pos,
         labels,
     ):
-        mask_pos = fluid.layers.cast(x=mask_pos, dtype='int32')
+        mask_pos = paddle.cast(x=mask_pos, dtype='int32')

         enc_output, next_sent_feat = self.bert_layer(
             src_ids, position_ids, sentence_ids, input_mask
python/paddle/fluid/tests/unittests/dygraph_to_static/seq2seq_dygraph_model.py
@@ -18,7 +18,7 @@ from seq2seq_utils import Seq2SeqModelHyperParams as args

 import paddle
 import paddle.fluid as fluid
-from paddle.fluid import ParamAttr, layers
+from paddle.fluid import ParamAttr
 from paddle.fluid.dygraph import Layer
 from paddle.fluid.dygraph.base import to_variable
 from paddle.jit.api import to_static

@@ -67,7 +67,7 @@ class BasicLSTMUnit(Layer):
         )

     def forward(self, input, pre_hidden, pre_cell):
-        concat_input_hidden = layers.concat([input, pre_hidden], 1)
+        concat_input_hidden = paddle.concat([input, pre_hidden], 1)
         gate_input = paddle.matmul(x=concat_input_hidden, y=self._weight)

         gate_input = paddle.add(gate_input, self._bias)

@@ -488,12 +488,12 @@ class BaseModel(fluid.dygraph.Layer):
                 self._gather(x, beam_indices, batch_pos) for x in new_dec_cell
             ]
             next_finished = self._gather(beam_finished, beam_indices, batch_pos)
-            next_finished = fluid.layers.cast(next_finished, "bool")
+            next_finished = paddle.cast(next_finished, "bool")
             next_finished = paddle.logical_or(
                 next_finished,
                 paddle.equal(token_indices, end_token_tensor),
             )
-            next_finished = fluid.layers.cast(next_finished, "float32")
+            next_finished = paddle.cast(next_finished, "float32")

             dec_hidden, dec_cell = new_dec_hidden, new_dec_cell
             beam_finished = next_finished

@@ -808,7 +808,7 @@ class AttentionModel(fluid.dygraph.Layer):
         for step_idx in range(max_seq_len):
             j = step_idx + 0
             step_input = tar_emb[j]
-            step_input = fluid.layers.concat([step_input, input_feed], 1)
+            step_input = paddle.concat([step_input, input_feed], 1)
             new_dec_hidden, new_dec_cell = [], []
             for i in range(self.num_layers):
                 new_hidden, new_cell = self.dec_units[i](

@@ -826,7 +826,7 @@ class AttentionModel(fluid.dygraph.Layer):
             step_input = new_hidden
             dec_att = self.attention(step_input, enc_outputs, enc_padding_mask)
             dec_att = paddle.squeeze(dec_att, [1])
-            concat_att_out = fluid.layers.concat([dec_att, step_input], 1)
+            concat_att_out = paddle.concat([dec_att, step_input], 1)
             out = self.concat_fc(concat_att_out)
             input_feed = out
             dec_output.append(out)
python/paddle/fluid/tests/unittests/dygraph_to_static/simnet_dygraph_model.py
View file @ b85af464
@@ -97,7 +97,7 @@ class ConcatLayer:
         """
         operation
         """
-        concat = fluid.layers.concat(inputs, axis=self.axis)
+        concat = paddle.concat(inputs, axis=self.axis)
         return concat
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_ast_util.py
View file @ b85af464
@@ -68,7 +68,7 @@ class TestAST2Func(unittest.TestCase):
         x_data = np.random.random([10, 16]).astype('float32')
         main_program = fluid.Program()
         with fluid.program_guard(main_program):
-            x_v = fluid.layers.assign(x_data)
+            x_v = paddle.assign(x_data)
             true_ret = func(x_v)
             test_ret = self._ast2func(func)(x_v)
             exe = fluid.Executor(fluid.CPUPlace())
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_basic_api_transformation.py
View file @ b85af464
@@ -252,7 +252,7 @@ class TestDygraphBasicApi(unittest.TestCase):
         main_program = fluid.Program()
         main_program.random_seed = SEED
         with fluid.program_guard(main_program, startup_program):
-            data = fluid.layers.assign(self.input)
+            data = paddle.assign(self.input)
             static_out = dygraph_to_static_func(self.dygraph_func)(data)
         exe = fluid.Executor(fluid.CPUPlace())
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_bmn.py
View file @ b85af464
@@ -310,7 +310,7 @@ def bmn_loss_func(
     self_bm_mask = paddle.static.create_global_var(
         shape=[dscale, tscale], value=0, dtype=DATATYPE, persistable=True
     )
-    fluid.layers.assign(bm_mask, self_bm_mask)
+    paddle.assign(bm_mask, self_bm_mask)
     self_bm_mask.stop_gradient = True
     return self_bm_mask
@@ -319,9 +319,9 @@ def bmn_loss_func(
     pred_score = paddle.reshape(x=pred_score, shape=[-1])
     gt_label = paddle.reshape(x=gt_label, shape=[-1])
     gt_label.stop_gradient = True
-    pmask = fluid.layers.cast(x=(gt_label > 0.5), dtype=DATATYPE)
-    num_entries = fluid.layers.cast(paddle.shape(pmask), dtype=DATATYPE)
-    num_positive = fluid.layers.cast(paddle.sum(pmask), dtype=DATATYPE)
+    pmask = paddle.cast(x=(gt_label > 0.5), dtype=DATATYPE)
+    num_entries = paddle.cast(paddle.shape(pmask), dtype=DATATYPE)
+    num_positive = paddle.cast(paddle.sum(pmask), dtype=DATATYPE)
     ratio = num_entries / num_positive
     coef_0 = 0.5 * ratio / (ratio - 1)
     coef_1 = 0.5 * ratio
@@ -345,34 +345,34 @@ def bmn_loss_func(
     gt_iou_map = paddle.multiply(gt_iou_map, mask)
-    u_hmask = fluid.layers.cast(x=gt_iou_map > 0.7, dtype=DATATYPE)
+    u_hmask = paddle.cast(x=gt_iou_map > 0.7, dtype=DATATYPE)
     u_mmask = paddle.logical_and(gt_iou_map <= 0.7, gt_iou_map > 0.3)
-    u_mmask = fluid.layers.cast(x=u_mmask, dtype=DATATYPE)
+    u_mmask = paddle.cast(x=u_mmask, dtype=DATATYPE)
     u_lmask = paddle.logical_and(gt_iou_map <= 0.3, gt_iou_map >= 0.0)
-    u_lmask = fluid.layers.cast(x=u_lmask, dtype=DATATYPE)
+    u_lmask = paddle.cast(x=u_lmask, dtype=DATATYPE)
     u_lmask = paddle.multiply(u_lmask, mask)
-    num_h = fluid.layers.cast(paddle.sum(u_hmask), dtype=DATATYPE)
-    num_m = fluid.layers.cast(paddle.sum(u_mmask), dtype=DATATYPE)
-    num_l = fluid.layers.cast(paddle.sum(u_lmask), dtype=DATATYPE)
+    num_h = paddle.cast(paddle.sum(u_hmask), dtype=DATATYPE)
+    num_m = paddle.cast(paddle.sum(u_mmask), dtype=DATATYPE)
+    num_l = paddle.cast(paddle.sum(u_lmask), dtype=DATATYPE)
     r_m = num_h / num_m
-    u_smmask = fluid.layers.assign(
+    u_smmask = paddle.assign(
         local_random.uniform(
             0.0, 1.0, [gt_iou_map.shape[1], gt_iou_map.shape[2]]
         ).astype(DATATYPE)
     )
     u_smmask = paddle.multiply(u_mmask, u_smmask)
-    u_smmask = fluid.layers.cast(x=(u_smmask > (1.0 - r_m)), dtype=DATATYPE)
+    u_smmask = paddle.cast(x=(u_smmask > (1.0 - r_m)), dtype=DATATYPE)
     r_l = num_h / num_l
-    u_slmask = fluid.layers.assign(
+    u_slmask = paddle.assign(
         local_random.uniform(
             0.0, 1.0, [gt_iou_map.shape[1], gt_iou_map.shape[2]]
         ).astype(DATATYPE)
     )
     u_slmask = paddle.multiply(u_lmask, u_slmask)
-    u_slmask = fluid.layers.cast(x=(u_slmask > (1.0 - r_l)), dtype=DATATYPE)
+    u_slmask = paddle.cast(x=(u_slmask > (1.0 - r_l)), dtype=DATATYPE)
     weights = u_hmask + u_smmask + u_slmask
     weights.stop_gradient = True
@@ -385,8 +385,8 @@ def bmn_loss_func(
     def pem_cls_loss_func(pred_score, gt_iou_map, mask):
         gt_iou_map = paddle.multiply(gt_iou_map, mask)
         gt_iou_map.stop_gradient = True
-        pmask = fluid.layers.cast(x=(gt_iou_map > 0.9), dtype=DATATYPE)
-        nmask = fluid.layers.cast(x=(gt_iou_map <= 0.9), dtype=DATATYPE)
+        pmask = paddle.cast(x=(gt_iou_map > 0.9), dtype=DATATYPE)
+        nmask = paddle.cast(x=(gt_iou_map <= 0.9), dtype=DATATYPE)
         nmask = paddle.multiply(nmask, mask)
         num_positive = paddle.sum(pmask)
...
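The BMN loss hunks all use the same mask idiom that survives the migration: compare to get a boolean tensor, `paddle.cast` it to the compute dtype, then multiply and sum. A self-contained sketch with made-up values:

import paddle

gt_label = paddle.to_tensor([0.2, 0.8, 0.6, 0.1])
pmask = paddle.cast(gt_label > 0.5, dtype='float32')    # [0., 1., 1., 0.]
num_positive = paddle.sum(pmask)                        # 2.0
num_entries = paddle.cast(paddle.shape(pmask), 'float32')
ratio = num_entries / num_positive                      # [2.]
print(pmask.numpy(), float(num_positive))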
python/paddle/fluid/tests/unittests/dygraph_to_static/test_convert_call.py
View file @ b85af464
@@ -141,7 +141,7 @@ class MyConvLayer(fluid.dygraph.Layer):
     @paddle.jit.to_static
     def dymethod(self, x_v):
-        x_v = fluid.layers.assign(x_v)
+        x_v = paddle.assign(x_v)
         return x_v
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_lac.py
View file @ b85af464
@@ -86,7 +86,7 @@ class DynamicGRU(fluid.dygraph.Layer):
         if self.is_reverse:
             res = res[::-1]
-        res = fluid.layers.concat(res, axis=1)
+        res = paddle.concat(res, axis=1)
         return res
@@ -154,7 +154,7 @@ class BiGRU(fluid.dygraph.Layer):
         res_pre_gru_r = self.pre_gru_r(input_feature)
         res_gru_r = self.gru_r(res_pre_gru_r)
-        bi_merge = fluid.layers.concat(input=[res_gru, res_gru_r], axis=-1)
+        bi_merge = paddle.concat([res_gru, res_gru_r], axis=-1)
         return bi_merge
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_list.py
View file @ b85af464
@@ -94,7 +94,7 @@ def test_list_append_in_for_loop_with_concat(x, iter_num):
     )  # TODO(liym27): Delete it if the type of parameter iter_num can be resolved
     for i in range(iter_num):
         a.append(x)
-    a = fluid.layers.concat(a, axis=0)
+    a = paddle.concat(a, axis=0)
     return a
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_ptb_lm.py
View file @ b85af464
@@ -89,7 +89,7 @@ class SimpleLSTMRNN(fluid.Layer):
             weight_1 = self.weight_1_arr[k]
             bias = self.bias_arr[k]
-            nn = fluid.layers.concat([step_input, pre_hidden], 1)
+            nn = paddle.concat([step_input, pre_hidden], 1)
             gate_input = paddle.matmul(x=nn, y=weight_1)
             gate_input = paddle.add(gate_input, bias)
@@ -111,16 +111,16 @@ class SimpleLSTMRNN(fluid.Layer):
                     mode='upscale_in_train',
                 )
             res.append(step_input)
-        real_res = fluid.layers.concat(res, 1)
+        real_res = paddle.concat(res, 1)
         real_res = paddle.reshape(
             real_res, [-1, self._num_steps, self._hidden_size]
         )
-        last_hidden = fluid.layers.concat(hidden_array, 1)
+        last_hidden = paddle.concat(hidden_array, 1)
         last_hidden = paddle.reshape(
             last_hidden, shape=[-1, self._num_layers, self._hidden_size]
         )
         last_hidden = paddle.transpose(x=last_hidden, perm=[1, 0, 2])
-        last_cell = fluid.layers.concat(cell_array, 1)
+        last_cell = paddle.concat(cell_array, 1)
         last_cell = paddle.reshape(
             last_cell, shape=[-1, self._num_layers, self._hidden_size]
         )
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_reinforcement_learning.py
View file @ b85af464
@@ -152,7 +152,7 @@ def train(args, place, to_static):
             cur_loss = paddle.multiply(_R, log_prob)
             policy_loss.append(cur_loss)
-        policy_loss = fluid.layers.concat(policy_loss)
+        policy_loss = paddle.concat(policy_loss)
         policy_loss = paddle.sum(policy_loss)
         policy_loss.backward()
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_sentiment.py
View file @ b85af464
@@ -248,8 +248,8 @@ class BiGRU(fluid.dygraph.Layer):
         gru_backward = self._gru_backward(fc_1)
         gru_forward_tanh = paddle.tanh(gru_forward)
         gru_backward_tanh = paddle.tanh(gru_backward)
-        encoded_vector = fluid.layers.concat(
-            input=[gru_forward_tanh, gru_backward_tanh], axis=2
+        encoded_vector = paddle.concat(
+            [gru_forward_tanh, gru_backward_tanh], axis=2
         )
         encoded_vector = paddle.max(encoded_vector, axis=1)
         fc_2 = self._fc2(encoded_vector)
...
python/paddle/fluid/tests/unittests/dygraph_to_static/transformer_dygraph_model.py
View file @ b85af464
@@ -146,8 +146,8 @@ class MultiHeadAttention(Layer):
         if cache is not None:
             cache_k, cache_v = cache["k"], cache["v"]
-            k = layers.concat([cache_k, k], axis=2)
-            v = layers.concat([cache_v, v], axis=2)
+            k = paddle.concat([cache_k, k], axis=2)
+            v = paddle.concat([cache_v, v], axis=2)
             cache["k"], cache["v"] = k, v
         # scale dot product attention
         product = paddle.matmul(x=q, y=k, transpose_y=True)
@@ -774,7 +774,7 @@ class Transformer(Layer):
             return res

         def mask_probs(probs, finished, noend_mask_tensor):
-            finished = layers.cast(finished, dtype=probs.dtype)
+            finished = paddle.cast(finished, dtype=probs.dtype)
             probs = paddle.multiply(
                 paddle.expand(
                     paddle.unsqueeze(finished, [2]),
...
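The MultiHeadAttention hunk preserves the incremental-decoding cache logic: each step's key/value slice is appended to the cached tensors along the time axis (axis 2 in the [batch, heads, time, head_dim] layout this model uses). A toy sketch with assumed shapes:

import paddle

cache_k = paddle.zeros([1, 2, 4, 8])    # [batch, heads, t_prev, head_dim]
k_step = paddle.ones([1, 2, 1, 8])      # keys for the current decode step
k = paddle.concat([cache_k, k_step], axis=2)   # time axis grows by one
print(k.shape)                          # [1, 2, 5, 8]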
python/paddle/fluid/tests/unittests/dygraph_to_static/yolov3.py
View file @ b85af464
@@ -207,7 +207,7 @@ class Upsample(fluid.dygraph.Layer):
         shape_nchw = paddle.shape(inputs)
         shape_hw = paddle.slice(shape_nchw, axes=[0], starts=[2], ends=[4])
         shape_hw.stop_gradient = True
-        in_shape = fluid.layers.cast(shape_hw, dtype='int32')
+        in_shape = paddle.cast(shape_hw, dtype='int32')
         out_shape = in_shape * self.scale
         out_shape.stop_gradient = True
@@ -295,9 +295,7 @@ class YOLOv3(fluid.dygraph.Layer):
         blocks = self.block(inputs)
         for i, block in enumerate(blocks):
             if i > 0:
-                block = fluid.layers.concat(
-                    input=[route, block], axis=1  # noqa: F821
-                )
+                block = paddle.concat([route, block], axis=1)  # noqa: F821
             route, tip = self.yolo_blocks[i](block)
             block_out = self.block_outputs[i](tip)
             self.outputs.append(block_out)
@@ -349,8 +347,8 @@ class YOLOv3(fluid.dygraph.Layer):
         if not self.is_train:
             # get pred
-            yolo_boxes = fluid.layers.concat(self.boxes, axis=1)
-            yolo_scores = fluid.layers.concat(self.scores, axis=2)
+            yolo_boxes = paddle.concat(self.boxes, axis=1)
+            yolo_scores = paddle.concat(self.scores, axis=2)
             pred = _legacy_C_ops.multiclass_nms(
                 bboxes=yolo_boxes,
...
python/paddle/fluid/tests/unittests/fleet_heter_ps_training.py
View file @ b85af464
@@ -107,8 +107,8 @@ def net(batch_size=4, lr=0.01):
         )
         dnn_out = fc
-    merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
-    label = fluid.layers.cast(label, dtype="int64")
+    merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)
+    label = paddle.cast(label, dtype="int64")
     predict = paddle.static.nn.fc(
         x=merge_layer, size=2, activation='softmax'
     )
...
python/paddle/fluid/tests/unittests/fleet_ps_training.py
View file @ b85af464
@@ -24,10 +24,10 @@ from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import (
 input_x = paddle.static.data(name="x", shape=[-1, 32], dtype='float32')
 input_y = paddle.static.data(name="y", shape=[-1, 1], dtype='int64')
-input_y = fluid.layers.cast(input_y, dtype="float32")
+input_y = paddle.cast(input_y, dtype="float32")
 with fluid.device_guard("gpu"):
-    input_y = fluid.layers.cast(input_y, dtype="int64")
+    input_y = paddle.cast(input_y, dtype="int64")
     cost = mlp(input_x, input_y)
     optimizer = fluid.optimizer.Adagrad(learning_rate=0.01)
...
python/paddle/fluid/tests/unittests/ipu/test_arg_max_op_ipu.py
View file @ b85af464
@@ -47,7 +47,7 @@ class TestBase(IPUOpTest):
         x = paddle.static.data(
             name=self.feed_list[0], shape=self.feed_shape[0], dtype='float32'
         )
-        out = paddle.fluid.layers.argmax(x, **self.attrs)
+        out = paddle.argmax(x, **self.attrs)
         self.fetch_list = [out.name]

     def run_model(self, exec_mode):
...
python/paddle/fluid/tests/unittests/ipu/test_arg_min_op_ipu.py
View file @ b85af464
@@ -47,7 +47,7 @@ class TestBase(IPUOpTest):
         x = paddle.static.data(
             name=self.feed_list[0], shape=self.feed_shape[0], dtype='float32'
         )
-        out = paddle.fluid.layers.argmin(x, **self.attrs)
+        out = paddle.argmin(x, **self.attrs)
         self.fetch_list = [out.name]

     def run_model(self, exec_mode):
...
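Both IPU tests exercise the same rename: `paddle.fluid.layers.argmax`/`argmin` become the top-level `paddle.argmax`/`paddle.argmin`, which accept the `axis` attribute the tests forward through `**self.attrs`. A quick sketch:

import paddle

x = paddle.to_tensor([[1.0, 5.0, 3.0],
                      [6.0, 2.0, 4.0]])
print(paddle.argmax(x, axis=-1).numpy())   # [1 0]
print(paddle.argmin(x, axis=-1).numpy())   # [0 1]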
python/paddle/fluid/tests/unittests/ipu/test_concat_op_ipu.py
View file @ b85af464
@@ -56,7 +56,7 @@ class TestBase(IPUOpTest):
         y = paddle.static.data(
             name=self.feed_list[1], shape=self.feed_shape[1], dtype='float32'
         )
-        out = paddle.fluid.layers.concat([x, y], **self.attrs)
+        out = paddle.concat([x, y], **self.attrs)
         self.fetch_list = [out.name]

     def run_model(self, exec_mode):
...
python/paddle/fluid/tests/unittests/ir/inference/test_trt_slice_plugin.py
View file @ b85af464
@@ -115,7 +115,7 @@ class SlicePluginTRTTestInt32(SlicePluginTRTTest):
             starts = self.params_starts
             ends = self.params_ends
             slice_out = paddle.slice(data, axes=axes, starts=starts, ends=ends)
-            cast_out = fluid.layers.cast(slice_out, 'float32')
+            cast_out = paddle.cast(slice_out, 'float32')
             out = nn.batch_norm(cast_out, is_test=True)

         self.feeds = {
@@ -140,7 +140,7 @@ class StaticSlicePluginTRTTestInt32(SlicePluginTRTTest):
             starts = self.params_starts
             ends = self.params_ends
             slice_out = paddle.slice(data, axes=axes, starts=starts, ends=ends)
-            cast_out = fluid.layers.cast(slice_out, 'float32')
+            cast_out = paddle.cast(slice_out, 'float32')
             out = nn.batch_norm(cast_out, is_test=True)

         self.feeds = {
...
python/paddle/fluid/tests/unittests/ir/inference/test_trt_subgraph_pass.py
View file @ b85af464
@@ -61,7 +61,7 @@ class TensorRTSubgraphPassConcatTest(InferencePassTest):
             data2 = fluid.data(
                 name="data2", shape=[-1, 3, 64, 64], dtype="float32"
             )
-            concat_out = fluid.layers.concat([data1, data2], axis=2)
+            concat_out = paddle.concat([data1, data2], axis=2)
             out = nn.batch_norm(concat_out, is_test=True)
         self.feeds = {
             "data1": np.random.random([1, 3, 64, 64]).astype("float32"),
...
python/paddle/fluid/tests/unittests/ir/inference/test_trt_transpose_flatten_concat_fuse_pass.py
View file @ b85af464
@@ -38,7 +38,7 @@ class TransposeFlattenConcatFusePassTRTTest(InferencePassTest):
             flatt1 = paddle.flatten(trans1, 1, -1)
             flatt2 = paddle.flatten(trans2, 1, -1)
-            concat_out = fluid.layers.concat([flatt1, flatt2], axis=1)
+            concat_out = paddle.concat([flatt1, flatt2], axis=1)
             # There is no parameters for above structure.
             # Hence, append a batch_norm to avoid failure caused by load_combined.
             reshape_out = paddle.reshape(concat_out, [-1, 0, 1, 1])
...
python/paddle/fluid/tests/unittests/ir/test_ir_fusion_group_pass.py
View file @ b85af464
@@ -113,7 +113,7 @@ class FusionGroupPassInplaceTest(FusionGroupPassTest):
             # subgraph with 3 op node
             tmp_0 = self.feed_vars[0] - self.feed_vars[1]
             tmp_1 = tmp_0 * self.feed_vars[2]
-            tmp_2 = layers.assign(tmp_1, output=tmp_0)
+            tmp_2 = paddle.assign(tmp_1, output=tmp_0)
             tmp_3 = paddle.matmul(tmp_2, self.feed_vars[3])

         self.num_fused_ops = 1
@@ -138,17 +138,17 @@ class FusionGroupPassTestCastAndFP16(FusionGroupPassTest):
             # subgraph with 2 op nodes
             tmp_0 = self.feed_vars[0] * self.feed_vars[1]
-            tmp_1 = layers.cast(tmp_0, dtype="float16")
+            tmp_1 = paddle.cast(tmp_0, dtype="float16")
             zero = layers.fill_constant(shape=[128], dtype="float16", value=0)
             # TODO(xreki): fix precision problem when using softmax of float16.
             # tmp_2 = layers.softmax(tmp_1)
             tmp_2 = paddle.add(tmp_1, zero)
             tmp_3 = paddle.matmul(tmp_0, self.feed_vars[2])
             # subgraph with 4 op nodes
-            tmp_3 = layers.cast(tmp_2, dtype="float16")
+            tmp_3 = paddle.cast(tmp_2, dtype="float16")
             tmp_4 = paddle.nn.functional.relu(tmp_1 + tmp_3)
-            tmp_5 = layers.cast(tmp_4, dtype=dtype)
-            tmp_3 = layers.cast(tmp_2, dtype=dtype)
+            tmp_5 = paddle.cast(tmp_4, dtype=dtype)
+            tmp_3 = paddle.cast(tmp_2, dtype=dtype)

         self.append_gradients(tmp_5)
@@ -185,8 +185,8 @@ class FusionGroupPassCastTest(FusionGroupPassTest):
             self.feed_vars = self._prepare_feed_vars([2, 2], dtype, 2)

             tmp_0 = paddle.add(self.feed_vars[0], self.feed_vars[1])
-            tmp_1 = layers.cast(tmp_0, dtype="float64")
-            tmp_2 = layers.cast(tmp_1, dtype="float32")
+            tmp_1 = paddle.cast(tmp_0, dtype="float64")
+            tmp_2 = paddle.cast(tmp_1, dtype="float32")

             self.append_gradients(tmp_2)
...
python/paddle/fluid/tests/unittests/mlu/sync_batch_norm_op_mlu.py
View file @ b85af464
@@ -84,7 +84,7 @@ class TestSyncBatchNormOpTraining(TestSyncBatchNormRunnerBase):
                 use_cudnn=use_cudnn,
             )
             if self.bn_dtype == np.float16:
-                conv = fluid.layers.cast(conv, 'float16')
+                conv = paddle.cast(conv, 'float16')
             bn = paddle.static.nn.batch_norm(
                 conv,
                 param_attr=fluid.ParamAttr(name='bn_scale'),
@@ -95,7 +95,7 @@ class TestSyncBatchNormOpTraining(TestSyncBatchNormRunnerBase):
                 is_test=only_forward,
             )
             if self.bn_dtype == np.float16:
-                bn = fluid.layers.cast(bn, 'float32')
+                bn = paddle.cast(bn, 'float32')
             sigmoid = paddle.nn.functional.sigmoid(bn)
             out = paddle.sum(sigmoid)
             # if not sync_bn:
...
python/paddle/fluid/tests/unittests/mlu/test_cast_op_mlu.py
View file @ b85af464
@@ -139,7 +139,7 @@ class TestCastOpError(unittest.TestCase):
             x1 = fluid.create_lod_tensor(
                 np.array([[-1]]), [[1]], fluid.MLUPlace(0)
             )
-            self.assertRaises(TypeError, fluid.layers.cast, x1, 'int32')
+            self.assertRaises(TypeError, paddle.cast, x1, 'int32')

 if __name__ == '__main__':
...
python/paddle/fluid/tests/unittests/mlu/test_one_hot_v2_op_mlu.py
View file @ b85af464
@@ -187,7 +187,7 @@ class TestOneHotOpApi(unittest.TestCase):
         self._run(depth)

     def test_api_with_depthTensor(self):
-        depth = fluid.layers.assign(input=np.array([10], dtype=np.int32))
+        depth = paddle.assign(np.array([10], dtype=np.int32))
         self._run(depth)

     def test_api_with_dygraph(self):
...
python/paddle/fluid/tests/unittests/mlu/test_where_op_mlu.py
View file @ b85af464
@@ -344,7 +344,7 @@ class TestWhereDygraphAPI(unittest.TestCase):
             y = paddle.where(x)
             self.assertEqual(type(y), tuple)
             self.assertEqual(len(y), 2)
-            z = fluid.layers.concat(list(y), axis=1)
+            z = paddle.concat(list(y), axis=1)
             exe = fluid.Executor(paddle.device.MLUPlace(0))
             (res,) = exe.run(
                 feed={'x': data}, fetch_list=[z.name], return_numpy=False
@@ -357,7 +357,7 @@ class TestWhereDygraphAPI(unittest.TestCase):
             y = paddle.where(x)
             self.assertEqual(type(y), tuple)
             self.assertEqual(len(y), 1)
-            z = fluid.layers.concat(list(y), axis=1)
+            z = paddle.concat(list(y), axis=1)
             exe = fluid.Executor(paddle.device.MLUPlace(0))
             (res,) = exe.run(
                 feed={'x': data}, fetch_list=[z.name], return_numpy=False
...
python/paddle/fluid/tests/unittests/npu/test_assign_value_op_npu.py
View file @ b85af464
@@ -94,7 +94,7 @@ class TestAssignApi(unittest.TestCase):
         main_program = fluid.Program()
         with fluid.program_guard(main_program):
             x = paddle.tensor.create_tensor(dtype=self.dtype)
-            layers.assign(input=self.value, output=x)
+            paddle.assign(self.value, output=x)

         exe = fluid.Executor(self.place)
         [fetched_x] = exe.run(main_program, feed={}, fetch_list=[x])
...
python/paddle/fluid/tests/unittests/npu/test_concat_op_npu.py
View file @ b85af464
@@ -169,7 +169,7 @@ class TestConcatAPIWithLoDTensorArray(unittest.TestCase):
         if use_fluid_api:
             self.program = fluid.Program()
             with fluid.program_guard(self.program):
-                input = fluid.layers.assign(self.x)
+                input = paddle.assign(self.x)
                 tensor_array = paddle.tensor.create_array(dtype='float32')
                 zero = fluid.layers.fill_constant(
                     shape=[1], value=0, dtype="int64"
@@ -178,7 +178,7 @@ class TestConcatAPIWithLoDTensorArray(unittest.TestCase):
                 for i in range(self.iter_num):
                     paddle.tensor.array_write(input, zero + i, tensor_array)

-                self.out_var = fluid.layers.concat(tensor_array, axis=self.axis)
+                self.out_var = paddle.concat(tensor_array, axis=self.axis)
         else:
             self.program = paddle.static.Program()
             with paddle.static.program_guard(self.program):
...
python/paddle/fluid/tests/unittests/npu/test_one_hot_v2_op_npu.py
View file @ b85af464
@@ -210,7 +210,7 @@ class TestOneHotOpApi(unittest.TestCase):
         self._run(depth)

     def test_api_with_depthTensor(self):
-        depth = fluid.layers.assign(input=np.array([10], dtype=np.int32))
+        depth = paddle.assign(np.array([10], dtype=np.int32))
         self._run(depth)

     def test_api_with_dygraph(self):
...
python/paddle/fluid/tests/unittests/npu/test_stack_op_npu.py
View file @ b85af464
@@ -137,7 +137,7 @@ class TestStackAPIWithLoDTensorArray(unittest.TestCase):
     def set_program(self):
         self.program = fluid.Program()
         with fluid.program_guard(self.program):
-            input = fluid.layers.assign(self.x)
+            input = paddle.assign(self.x)
             tensor_array = paddle.tensor.create_array(dtype='float32')
             zero = fluid.layers.fill_constant(shape=[1], value=0, dtype="int64")
@@ -175,7 +175,7 @@ class TestTensorStackAPIWithLoDTensorArray(unittest.TestCase):
     def set_program(self):
         self.program = fluid.Program()
         with fluid.program_guard(self.program):
-            input = fluid.layers.assign(self.x)
+            input = paddle.assign(self.x)
             tensor_array = paddle.tensor.create_array(dtype='float32')
             zero = fluid.layers.fill_constant(shape=[1], value=0, dtype="int64")
...
python/paddle/fluid/tests/unittests/npu/test_while_op_npu.py
View file @ b85af464
@@ -38,7 +38,7 @@ class TestWhileOp(unittest.TestCase):
         )
         # fill_constant npu op doesn't support int64
         i = layers.zeros(shape=[1], dtype='int32')
-        i = layers.cast(i, 'int64')
+        i = paddle.cast(i, 'int64')
         i.stop_gradient = True
         init = layers.zeros(shape=[10], dtype='float32')
         mem_array = paddle.tensor.array_write(x=init, i=i)
@@ -48,28 +48,28 @@ class TestWhileOp(unittest.TestCase):
         i = paddle.increment(i)
         paddle.tensor.array_write(d2, i, array=data_array)
         i = layers.zeros(shape=[1], dtype='int32')
-        i = layers.cast(i, 'int64')
+        i = paddle.cast(i, 'int64')
         i.stop_gradient = True
         array_len = layers.fill_constant(shape=[1], dtype='int32', value=5)
-        array_len = layers.cast(array_len, 'int64')
+        array_len = paddle.cast(array_len, 'int64')
         array_len.stop_gradient = True
         cond = paddle.ones(shape=[1], dtype='int32')
-        cond = layers.cast(cond, 'bool')
+        cond = paddle.cast(cond, 'bool')
         j = layers.fill_constant(shape=[1], dtype='int32', value=1)
-        j = layers.cast(j, 'int64')
+        j = paddle.cast(j, 'int64')
         j.stop_gradient = True
         array_len2 = layers.fill_constant(shape=[1], dtype='int32', value=3)
-        array_len2 = layers.cast(array_len2, 'int64')
+        array_len2 = paddle.cast(array_len2, 'int64')
         array_len2.stop_gradient = True
         cond2 = paddle.logical_or(x=j, y=array_len2)
         cond2 = paddle.ones(shape=[1], dtype='int32')
-        cond2 = layers.cast(cond2, 'bool')
+        cond2 = paddle.cast(cond2, 'bool')
         while_op = paddle.static.nn.control_flow.While(cond=cond)
         while_op2 = paddle.static.nn.control_flow.While(cond=cond2)
         with while_op.block():
             d = paddle.tensor.array_read(array=data_array, i=i)
             prev = paddle.tensor.array_read(array=mem_array, i=i)
-            result = layers.sums(input=[d, prev])
+            result = paddle.add_n([d, prev])
             i = paddle.increment(x=i)
             paddle.tensor.array_write(result, i=i, array=mem_array)
@@ -78,7 +78,7 @@ class TestWhileOp(unittest.TestCase):
         with while_op2.block():
             d2 = paddle.tensor.array_read(array=data_array, i=j)
             prev2 = paddle.tensor.array_read(array=mem_array, i=j)
-            result2 = layers.sums(input=[d2, prev2])
+            result2 = paddle.add_n([d2, prev2])
             j = paddle.increment(x=j)
             paddle.tensor.array_write(result2, i=j, array=mem_array)
...
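`layers.sums` is the one API in this commit whose replacement changes name as well as namespace: `paddle.add_n` sums a list of same-shape tensors elementwise. A minimal sketch of the equivalence:

import paddle

d = paddle.to_tensor([1.0, 2.0])
prev = paddle.to_tensor([10.0, 20.0])
# old: layers.sums(input=[d, prev])
result = paddle.add_n([d, prev])
print(result.numpy())                      # [11. 22.]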
python/paddle/fluid/tests/unittests/sequence/test_sequence_pad_op.py
View file @ b85af464
@@ -17,11 +17,10 @@ import unittest
 import numpy as np

-import paddle
 sys.path.append("../")
 from op_test import OpTest

+import paddle
 import paddle.fluid as fluid
 import paddle.fluid.core as core
@@ -156,9 +155,8 @@ class TestSequencePadOpError(unittest.TestCase):
         def test_x_variable():
             # the input x type must be Variable
             x = np.random.random((2, 4)).astype("float32")
-            pad_value = fluid.layers.assign(
-                input=np.array([0.0], dtype=np.float32)
-            )
+            pad_value = paddle.assign(np.array([0.0], dtype=np.float32))
             paddle.static.nn.sequence_lod.sequence_pad(x=x, pad_value=pad_value)

         self.assertRaises(TypeError, test_x_variable)
@@ -178,9 +176,8 @@ class TestSequencePadOpError(unittest.TestCase):
             x2 = paddle.static.data(
                 name='x2', shape=[-1, 10, 5], dtype='int16', lod_level=1
             )
-            pad_value2 = fluid.layers.assign(
-                input=np.array([0.0], dtype=np.int32)
-            )
+            pad_value2 = paddle.assign(np.array([0.0], dtype=np.int32))
             paddle.static.nn.sequence_lod.sequence_pad(
                 x=x2, pad_value=pad_value2
             )
@@ -189,7 +186,8 @@ class TestSequencePadOpError(unittest.TestCase):
     def test_length_dtype(self):
         x = fluid.data(name='x', shape=[10, 5], dtype='float32', lod_level=1)
-        pad_value = fluid.layers.assign(input=np.array([0.0], dtype=np.float32))
+        pad_value = paddle.assign(np.array([0.0], dtype=np.float32))
         out, length = paddle.static.nn.sequence_lod.sequence_pad(
             x=x, pad_value=pad_value
         )
...
python/paddle/fluid/tests/unittests/standalone_executor/test_standalone_multiply_write.py
View file @ b85af464
@@ -35,8 +35,8 @@ class TestMultiplyWrite(TestCompatibility):
             inp1 = paddle.full((1,), 2)
             inp2 = paddle.full((1,), 3)

-            paddle.fluid.layers.assign(inp1, out)
-            paddle.fluid.layers.assign(inp2, out)
+            paddle.assign(inp1, out)
+            paddle.assign(inp2, out)
         return main_program, startup_program, out

     def setUp(self):
...
浏览文件 @
b85af464
...
@@ -47,13 +47,13 @@ def _test_read_write(x):
...
@@ -47,13 +47,13 @@ def _test_read_write(x):
mean_a1
=
paddle
.
mean
(
a1
)
mean_a1
=
paddle
.
mean
(
a1
)
mean_a2
=
paddle
.
mean
(
a2
)
mean_a2
=
paddle
.
mean
(
a2
)
a_sum
=
layers
.
sums
(
input
=
[
mean_a0
,
mean_a1
,
mean_a2
])
a_sum
=
paddle
.
add_n
(
[
mean_a0
,
mean_a1
,
mean_a2
])
mean_x0
=
paddle
.
mean
(
x
[
0
])
mean_x0
=
paddle
.
mean
(
x
[
0
])
mean_x1
=
paddle
.
mean
(
x
[
1
])
mean_x1
=
paddle
.
mean
(
x
[
1
])
mean_x2
=
paddle
.
mean
(
x
[
2
])
mean_x2
=
paddle
.
mean
(
x
[
2
])
x_sum
=
layers
.
sums
(
input
=
[
mean_x0
,
mean_x1
,
mean_x2
])
x_sum
=
paddle
.
add_n
(
[
mean_x0
,
mean_x1
,
mean_x2
])
return
a_sum
,
x_sum
return
a_sum
,
x_sum
...
@@ -81,7 +81,7 @@ class TestArrayReadWrite(unittest.TestCase):
...
@@ -81,7 +81,7 @@ class TestArrayReadWrite(unittest.TestCase):
)
)
self
.
assertEqual
(
outs
[
0
],
outs
[
1
])
self
.
assertEqual
(
outs
[
0
],
outs
[
1
])
total_sum
=
layers
.
sums
(
input
=
[
a_sum
,
x_sum
])
total_sum
=
paddle
.
add_n
(
[
a_sum
,
x_sum
])
total_sum_scaled
=
paddle
.
scale
(
x
=
total_sum
,
scale
=
1
/
6.0
)
total_sum_scaled
=
paddle
.
scale
(
x
=
total_sum
,
scale
=
1
/
6.0
)
append_backward
(
total_sum_scaled
)
append_backward
(
total_sum_scaled
)
...
@@ -116,9 +116,7 @@ class TestArrayReadWrite(unittest.TestCase):
...
@@ -116,9 +116,7 @@ class TestArrayReadWrite(unittest.TestCase):
a_sum_dygraph
,
x_sum_dygraph
=
_test_read_write
(
x_dygraph
)
a_sum_dygraph
,
x_sum_dygraph
=
_test_read_write
(
x_dygraph
)
self
.
assertEqual
(
a_sum_dygraph
,
x_sum_dygraph
)
self
.
assertEqual
(
a_sum_dygraph
,
x_sum_dygraph
)
total_sum_dygraph
=
layers
.
sums
(
total_sum_dygraph
=
paddle
.
add_n
([
a_sum_dygraph
,
x_sum_dygraph
])
input
=
[
a_sum_dygraph
,
x_sum_dygraph
]
)
total_sum_scaled_dygraph
=
paddle
.
scale
(
total_sum_scaled_dygraph
=
paddle
.
scale
(
x
=
total_sum_dygraph
,
scale
=
1
/
6.0
x
=
total_sum_dygraph
,
scale
=
1
/
6.0
)
)
...
...
python/paddle/fluid/tests/unittests/test_assign_op.py
View file @ b85af464
@@ -78,7 +78,7 @@ class TestAssignOpWithLoDTensorArray(unittest.TestCase):
             z = paddle.add(x=x, y=y)
             i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
             init_array = paddle.tensor.array_write(x=z, i=i)
-            array = fluid.layers.assign(init_array)
+            array = paddle.assign(init_array)
             sums = paddle.tensor.array_read(array=init_array, i=i)
             mean = paddle.mean(sums)
             append_backward(mean)
@@ -110,10 +110,10 @@ class TestAssignOpError(unittest.TestCase):
             x1 = fluid.create_lod_tensor(
                 np.array([[-1]]), [[1]], fluid.CPUPlace()
             )
-            self.assertRaises(TypeError, fluid.layers.assign, x1)
+            self.assertRaises(TypeError, paddle.assign, x1)
             # When the type of input is numpy.ndarray, the dtype of input must be float32, int32.
             x2 = np.array([[2.5, 2.5]], dtype='uint8')
-            self.assertRaises(TypeError, fluid.layers.assign, x2)
+            self.assertRaises(TypeError, paddle.assign, x2)
             paddle.disable_static()
@@ -252,7 +252,7 @@ class TestAssignOpErrorApi(unittest.TestCase):
 class TestAssignDoubleGradCheck(unittest.TestCase):
     def assign_wrapper(self, x):
-        return paddle.fluid.layers.assign(x[0])
+        return paddle.assign(x[0])

     @prog_scope()
     def func(self, place):
@@ -262,7 +262,7 @@ class TestAssignDoubleGradCheck(unittest.TestCase):
         data = paddle.static.data('data', [3, 4, 5], dtype)
         data.persistable = True
-        out = paddle.fluid.layers.assign(data)
+        out = paddle.assign(data)
         data_arr = np.random.uniform(-1, 1, data.shape).astype(dtype)
         gradient_checker.double_grad_check(
@@ -283,7 +283,7 @@ class TestAssignDoubleGradCheck(unittest.TestCase):
 class TestAssignTripleGradCheck(unittest.TestCase):
     def assign_wrapper(self, x):
-        return paddle.fluid.layers.assign(x[0])
+        return paddle.assign(x[0])

     @prog_scope()
     def func(self, place):
@@ -293,7 +293,7 @@ class TestAssignTripleGradCheck(unittest.TestCase):
         data = paddle.static.data('data', [3, 4, 5], dtype)
         data.persistable = True
-        out = paddle.fluid.layers.assign(data)
+        out = paddle.assign(data)
         data_arr = np.random.uniform(-1, 1, data.shape).astype(dtype)
         gradient_checker.triple_grad_check(
...
浏览文件 @
b85af464
...
@@ -20,7 +20,6 @@ import op_test
...
@@ -20,7 +20,6 @@ import op_test
import
paddle
import
paddle
import
paddle.fluid
as
fluid
import
paddle.fluid
as
fluid
import
paddle.fluid.framework
as
framework
import
paddle.fluid.framework
as
framework
import
paddle.fluid.layers
as
layers
paddle
.
enable_static
()
paddle
.
enable_static
()
...
@@ -84,7 +83,7 @@ class TestAssignApi(unittest.TestCase):
...
@@ -84,7 +83,7 @@ class TestAssignApi(unittest.TestCase):
main_program
=
fluid
.
Program
()
main_program
=
fluid
.
Program
()
with
fluid
.
program_guard
(
main_program
):
with
fluid
.
program_guard
(
main_program
):
x
=
paddle
.
tensor
.
create_tensor
(
dtype
=
self
.
dtype
)
x
=
paddle
.
tensor
.
create_tensor
(
dtype
=
self
.
dtype
)
layers
.
assign
(
input
=
self
.
value
,
output
=
x
)
paddle
.
assign
(
self
.
value
,
output
=
x
)
exe
=
fluid
.
Executor
(
self
.
place
)
exe
=
fluid
.
Executor
(
self
.
place
)
[
fetched_x
]
=
exe
.
run
(
main_program
,
feed
=
{},
fetch_list
=
[
x
])
[
fetched_x
]
=
exe
.
run
(
main_program
,
feed
=
{},
fetch_list
=
[
x
])
...
...
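As in the NPU variant earlier, `paddle.assign` subsumes the old `layers.assign(input=..., output=...)` form for numpy inputs: the array is passed positionally and `output=` still selects the destination variable. A small sketch:

import numpy as np
import paddle

value = np.array([3, 3], dtype="int32")
t = paddle.assign(value)                   # new tensor holding the array
dst = paddle.zeros([2], dtype="int32")
paddle.assign(value, output=dst)           # or copy into an existing tensor
print(t.numpy(), dst.numpy())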
python/paddle/fluid/tests/unittests/test_cast_op.py
View file @ b85af464
@@ -114,7 +114,7 @@ class TestCastOpError(unittest.TestCase):
             x1 = fluid.create_lod_tensor(
                 np.array([[-1]]), [[1]], fluid.CPUPlace()
             )
-            self.assertRaises(TypeError, fluid.layers.cast, x1, 'int32')
+            self.assertRaises(TypeError, paddle.cast, x1, 'int32')

 class TestCastOpEager(unittest.TestCase):
...
python/paddle/fluid/tests/unittests/test_communicator_geo.py
View file @ b85af464
@@ -49,7 +49,8 @@ class TestCommunicatorGeoEnd2End(unittest.TestCase):
         pool = paddle.static.nn.sequence_lod.sequence_pool(
             input=emb, pool_type="sum"
         )
-        z = fluid.layers.concat(input=[x, pool], axis=1)
+        z = paddle.concat([x, pool], axis=1)
         y_predict = paddle.static.nn.fc(x=z, size=1)
         y = paddle.static.data(name='y', shape=[-1, 1], dtype='float32')
         cost = paddle.nn.functional.square_error_cost(input=y_predict, label=y)
...
python/paddle/fluid/tests/unittests/test_compare_op.py
View file @ b85af464
@@ -450,8 +450,8 @@ class API_TestElementwise_Equal(unittest.TestCase):
     def test_api(self):
         paddle.enable_static()
         with fluid.program_guard(fluid.Program(), fluid.Program()):
-            label = fluid.layers.assign(np.array([3, 3], dtype="int32"))
-            limit = fluid.layers.assign(np.array([3, 2], dtype="int32"))
+            label = paddle.assign(np.array([3, 3], dtype="int32"))
+            limit = paddle.assign(np.array([3, 2], dtype="int32"))
             out = paddle.equal(x=label, y=limit)
             place = fluid.CPUPlace()
             exe = fluid.Executor(place)
@@ -459,8 +459,8 @@ class API_TestElementwise_Equal(unittest.TestCase):
         self.assertEqual((res == np.array([True, False])).all(), True)

         with fluid.program_guard(fluid.Program(), fluid.Program()):
-            label = fluid.layers.assign(np.array([3, 3], dtype="int32"))
-            limit = fluid.layers.assign(np.array([3, 3], dtype="int32"))
+            label = paddle.assign(np.array([3, 3], dtype="int32"))
+            limit = paddle.assign(np.array([3, 3], dtype="int32"))
             out = paddle.equal(x=label, y=limit)
             place = fluid.CPUPlace()
             exe = fluid.Executor(place)
@@ -474,8 +474,8 @@ class TestCompareOpPlace(unittest.TestCase):
         place = paddle.CPUPlace()
         if core.is_compiled_with_cuda():
             place = paddle.CUDAPlace(0)
-        label = fluid.layers.assign(np.array([3, 3], dtype="int32"))
-        limit = fluid.layers.assign(np.array([3, 2], dtype="int32"))
+        label = paddle.assign(np.array([3, 3], dtype="int32"))
+        limit = paddle.assign(np.array([3, 2], dtype="int32"))
         out = paddle.less_than(label, limit)
         exe = fluid.Executor(place)
         (res,) = exe.run(fetch_list=[out])
...
python/paddle/fluid/tests/unittests/test_compare_reduce_op.py
View file @ b85af464
@@ -18,7 +18,6 @@ import numpy as np
 import op_test

 import paddle
-import paddle.fluid as fluid

 def create_test_not_equal_class(op_type, typename, callback):
@@ -107,8 +106,8 @@ for _type_name in {'float32', 'float64', 'int32', 'int64', 'bool'}:
 class TestEqualReduceAPI(unittest.TestCase):
     def test_name(self):
-        x = fluid.layers.assign(np.array([3, 4], dtype="int32"))
-        y = fluid.layers.assign(np.array([3, 4], dtype="int32"))
+        x = paddle.assign(np.array([3, 4], dtype="int32"))
+        y = paddle.assign(np.array([3, 4], dtype="int32"))
         out = paddle.equal_all(x, y, name='equal_res')
         assert 'equal_res' in out.name
...
python/paddle/fluid/tests/unittests/test_concat_op.py
@@ -249,8 +249,10 @@ class TestConcatOpError(unittest.TestCase):
     def test_errors(self):
         with program_guard(Program(), Program()):
             # The input type of concat_op should be list.
             x1 = paddle.static.data(shape=[-1, 4], dtype='int32', name='x1')
-            fluid.layers.concat(x1)
+            paddle.concat(x1)
             # The item in input must be Variable.
             x2 = fluid.create_lod_tensor(
                 np.array([[-1]]), [[1]], fluid.CPUPlace()
@@ -258,24 +260,25 @@ class TestConcatOpError(unittest.TestCase):
             x3 = fluid.create_lod_tensor(
                 np.array([[-1]]), [[1]], fluid.CPUPlace()
             )
-            self.assertRaises(TypeError, fluid.layers.concat, [x2])
+            self.assertRaises(TypeError, paddle.concat, [x2])
             # The input dtype of concat_op must be float16, float32, float64, int32, int64.
             x4 = paddle.static.data(shape=[-1, 4], dtype='uint8', name='x4')
             x5 = paddle.static.data(shape=[-1, 4], dtype='uint8', name='x5')
-            self.assertRaises(TypeError, fluid.layers.concat, [x4, x5])
+            self.assertRaises(TypeError, paddle.concat, [x4, x5])
             x6 = paddle.static.data(shape=[-1, 4], dtype='float16', name='x6')
             x7 = paddle.static.data(shape=[-1, 4], dtype='float16', name='x7')
             x8 = paddle.static.data(shape=[-1, 4], dtype='float32', name='x8')
-            fluid.layers.concat([x6, x7])
+            paddle.concat([x6, x7])

             # The type of axis in concat_op should be int or Variable.
             def test_axis_type():
-                fluid.layers.concat([x6, x7], 3.2)
+                paddle.concat([x6, x7], 3.2)

             self.assertRaises(TypeError, test_axis_type)

             def test_input_same_dtype():
-                fluid.layers.concat([x7, x8])
+                paddle.concat([x7, x8])

             self.assertRaises(TypeError, test_input_same_dtype)
@@ -284,7 +287,7 @@ class TestConcatAPI(unittest.TestCase):
     def test_fluid_api(self):
         paddle.enable_static()
         x_1 = fluid.data(shape=[None, 1, 4, 5], dtype='int32', name='x_1')
-        fluid.layers.concat([x_1, x_1], 0)
+        paddle.concat([x_1, x_1], 0)
         input_2 = np.random.random([2, 1, 4, 5]).astype("int32")
         input_3 = np.random.random([2, 2, 4, 5]).astype("int32")
@@ -292,9 +295,9 @@ class TestConcatAPI(unittest.TestCase):
         x_3 = fluid.data(shape=[2, 2, 4, 5], dtype='int32', name='x_3')
         positive_1_int32 = fluid.layers.fill_constant([1], "int32", 1)
         positive_1_int64 = fluid.layers.fill_constant([1], "int64", 1)
-        out_1 = fluid.layers.concat(input=[x_2, x_3], axis=1)
+        out_1 = paddle.concat([x_2, x_3], axis=1)
-        out_2 = fluid.layers.concat(input=[x_2, x_3], axis=positive_1_int32)
+        out_2 = paddle.concat([x_2, x_3], axis=positive_1_int32)
-        out_3 = fluid.layers.concat(input=[x_2, x_3], axis=positive_1_int64)
+        out_3 = paddle.concat([x_2, x_3], axis=positive_1_int64)
         exe = fluid.Executor(place=fluid.CPUPlace())
         [res_1, res_2, res_3] = exe.run(
@@ -344,7 +347,7 @@ class TestConcatAPI(unittest.TestCase):
         x1 = paddle.to_tensor(in1)
         x2 = paddle.to_tensor(in2)
         x3 = paddle.to_tensor(in3)
-        out1 = fluid.layers.concat(input=[x1, x2, x3], axis=-1)
+        out1 = paddle.concat([x1, x2, x3], axis=-1)
         out2 = paddle.concat(x=[x1, x2], axis=0)
         np_out1 = np.concatenate([in1, in2, in3], axis=-1)
         np_out2 = np.concatenate([in1, in2], axis=0)
@@ -365,7 +368,7 @@ class TestConcatAPI(unittest.TestCase):
         # The input dtype of concat_op must be float16, float32, float64, int32, int64.
         x4 = paddle.fluid.data(shape=[4], dtype='uint8', name='x4')
         x5 = paddle.fluid.data(shape=[4], dtype='uint8', name='x5')
-        self.assertRaises(TypeError, fluid.layers.concat, [x4, x5])
+        self.assertRaises(TypeError, paddle.concat, [x4, x5])
         # The type of axis in concat_op should be int or Variable.
         x6 = paddle.static.data(shape=[-1, 4], dtype='float16', name='x6')
@@ -405,7 +408,7 @@ class TestConcatAPIWithLoDTensorArray(unittest.TestCase):
         if use_fluid_api:
             self.program = fluid.Program()
             with fluid.program_guard(self.program):
-                input = fluid.layers.assign(self.x)
+                input = paddle.assign(self.x)
                 tensor_array = paddle.tensor.create_array(dtype='float32')
                 zero = fluid.layers.fill_constant(
                     shape=[1], value=0, dtype="int64"
@@ -414,7 +417,7 @@ class TestConcatAPIWithLoDTensorArray(unittest.TestCase):
                 for i in range(self.iter_num):
                     paddle.tensor.array_write(input, zero + i, tensor_array)
-                self.out_var = fluid.layers.concat(tensor_array, axis=self.axis)
+                self.out_var = paddle.concat(tensor_array, axis=self.axis)
         else:
             self.program = paddle.static.Program()
             with paddle.static.program_guard(self.program):
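The concat migration also drops the input= keyword: fluid.layers.concat(input=[...], axis=...) becomes paddle.concat([...], axis=...), whose keyword form is x=. A minimal sketch in dynamic mode (tensor values are illustrative):

    import paddle

    a = paddle.to_tensor([[1, 2], [3, 4]])
    b = paddle.to_tensor([[5, 6], [7, 8]])
    # positional list, or equivalently paddle.concat(x=[a, b], axis=...)
    rows = paddle.concat([a, b], axis=0)  # shape [4, 2]
    cols = paddle.concat([a, b], axis=1)  # shape [2, 4]
    print(rows.shape, cols.shape)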
python/paddle/fluid/tests/unittests/test_conditional_block.py
@@ -36,7 +36,7 @@ class ConditionalBlockTest(unittest.TestCase):
         out = paddle.tensor.create_tensor(dtype='float32')
         with cond.block():
             hidden = paddle.static.nn.fc(x=data, size=10)
-            layers.assign(hidden, out)
+            paddle.assign(hidden, out)
         cpu = core.CPUPlace()
         exe = Executor(cpu)
python/paddle/fluid/tests/unittests/test_dataset.py
@@ -949,7 +949,7 @@ class TestDatasetWithFetchHandler(unittest.TestCase):
             data = paddle.static.data(
                 name=slot, shape=[-1, 1], dtype="int64", lod_level=1
             )
-            var = fluid.layers.cast(x=data, dtype='float32')
+            var = paddle.cast(x=data, dtype='float32')
             pool = paddle.static.nn.sequence_lod.sequence_pool(
                 input=var, pool_type='AVERAGE'
             )
@@ -957,7 +957,7 @@ class TestDatasetWithFetchHandler(unittest.TestCase):
             slots_vars.append(data)
             poolings.append(pool)
-        concated = fluid.layers.concat(poolings, axis=1)
+        concated = paddle.concat(poolings, axis=1)
         fc = paddle.static.nn.fc(x=concated, activation='tanh', size=32)
         return slots_vars, fc
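fluid.layers.cast maps one-to-one onto paddle.cast: both take a tensor (keyword x) and a target dtype string and return a new tensor. A minimal sketch (values illustrative):

    import paddle

    ids = paddle.to_tensor([1, 2, 3], dtype="int64")
    as_float = paddle.cast(x=ids, dtype='float32')  # new tensor; ids is unchanged
    print(as_float.dtype)  # paddle.float32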
python/paddle/fluid/tests/unittests/test_dist_fleet_heter_program.py
@@ -95,7 +95,7 @@ class TestDistFleetHeterProgram(unittest.TestCase):
         sparse_embed_seq = list(map(embedding_layer, inputs[1:-1]))
-        concated = fluid.layers.concat(sparse_embed_seq + inputs[0:1], axis=1)
+        concated = paddle.concat(sparse_embed_seq + inputs[0:1], axis=1)
         with fluid.device_guard("gpu"):
             fc1 = paddle.static.nn.fc(
@@ -149,7 +149,7 @@ class TestDistFleetHeterProgram(unittest.TestCase):
         )
         with fluid.device_guard("gpu"):
-            labels = fluid.layers.cast(inputs[-1], dtype="int64")
+            labels = paddle.cast(inputs[-1], dtype="int64")
             cost = paddle.nn.functional.cross_entropy(
                 input=predict, label=labels, reduction='none', use_softmax=False
             )
python/paddle/fluid/tests/unittests/test_dist_fleet_minimize.py
@@ -37,7 +37,7 @@ class TestPSMinimize(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
python/paddle/fluid/tests/unittests/test_dist_fleet_ps.py
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
python/paddle/fluid/tests/unittests/test_dist_fleet_ps11.py
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
python/paddle/fluid/tests/unittests/test_dist_fleet_ps12.py
@@ -40,7 +40,7 @@ class TestPSPassWithBow(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
python/paddle/fluid/tests/unittests/test_dist_fleet_ps13.py
@@ -41,7 +41,7 @@ class TestPSPassWithBow(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
python/paddle/fluid/tests/unittests/test_dist_fleet_ps2.py
@@ -40,7 +40,7 @@ class TestPSPassWithBow(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
python/paddle/fluid/tests/unittests/test_dist_fleet_ps3.py
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
python/paddle/fluid/tests/unittests/test_dist_fleet_ps4.py
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
python/paddle/fluid/tests/unittests/test_dist_fleet_ps5.py
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
python/paddle/fluid/tests/unittests/test_dist_fleet_ps6.py
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
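The get_acc change repeated across these ten tests is the same one-liner: paddle.less_than yields a bool tensor, which must be cast before arithmetic, and that cast now goes through paddle.cast. A minimal sketch of the pattern (names follow the tests, values illustrative):

    import paddle

    cos_q_nt = paddle.to_tensor([0.1, 0.7, 0.3])
    cos_q_pt = paddle.to_tensor([0.5, 0.2, 0.9])
    batch_size = 3

    cond = paddle.less_than(cos_q_nt, cos_q_pt)  # bool tensor
    cond = paddle.cast(cond, dtype='float64')    # 1.0 where pt > nt, else 0.0
    cond_3 = paddle.sum(cond)
    acc = paddle.divide(
        cond_3, paddle.to_tensor(float(batch_size), dtype='float64')
    )
    print(acc.numpy())  # ~0.6667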
python/paddle/fluid/tests/unittests/test_dist_fleet_sparse_embedding_ctr.py
@@ -252,7 +252,7 @@ class TestDistMnistAsync2x2WithGauss(TestFleetBase):
         lr_pool = paddle.static.nn.sequence_lod.sequence_pool(
             input=lr_embbding, pool_type="sum"
         )
-        merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+        merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)
         predict = paddle.static.nn.fc(
             x=merge_layer, size=2, activation='softmax'
         )
python/paddle/fluid/tests/unittests/test_dist_fleet_spmt.py
@@ -35,7 +35,7 @@ class TestSPMT(unittest.TestCase):
     def net(self):
         def get_acc(cos_q_nt, cos_q_pt, batch_size):
             cond = paddle.less_than(cos_q_nt, cos_q_pt)
-            cond = fluid.layers.cast(cond, dtype='float64')
+            cond = paddle.cast(cond, dtype='float64')
             cond_3 = paddle.sum(cond)
             acc = paddle.divide(
                 cond_3,
python/paddle/fluid/tests/unittests/test_dist_transpiler.py
@@ -733,9 +733,7 @@ class TestDistLookupTableBase(TranspilerTest):
         title_emb = emb_pool(title_ids, self.lookup_table_name, is_distributed)
         brand_emb = emb_pool(brand_ids, self.lookup_table_name, is_distributed)
         profile_emb = emb_pool(profile_ids, "profile_emb", False)
-        fc0 = fluid.layers.concat(
-            input=[title_emb, brand_emb, profile_emb], axis=1
-        )
+        fc0 = paddle.concat([title_emb, brand_emb, profile_emb], axis=1)
         predict = paddle.static.nn.fc(
             x=fc0,
             size=2,
python/paddle/fluid/tests/unittests/test_dynamic_rnn_stop_gradient.py
@@ -29,7 +29,7 @@ def build_and_run_program(place, batch_size, beam_size, stop_gradient=False):
     fluid.default_main_program().random_seed = 1
     np.random.seed(2)
-    x = layers.assign(
+    x = paddle.assign(
         np.random.rand(batch_size, beam_size, 32).astype("float32")
     )
     indices = fluid.data(shape=[None, beam_size], dtype="int64", name="indices")
@@ -43,9 +43,9 @@ def build_and_run_program(place, batch_size, beam_size, stop_gradient=False):
     while_op = paddle.static.nn.control_flow.While(cond)
     scores = paddle.tensor.array_write(x, step_idx)
     with while_op.block():
-        bs = layers.cast(paddle.shape(x)[0], "int64")
+        bs = paddle.cast(paddle.shape(x)[0], "int64")
         for _ in range(20):
-            bs = layers.cast(bs, 'int64')
+            bs = paddle.cast(bs, 'int64')
         bs.stop_gradient = stop_gradient
         batch_pos = paddle.expand(
             paddle.unsqueeze(paddle.arange(0, bs, 1, dtype=bs.dtype), [1]),
@@ -57,7 +57,7 @@ def build_and_run_program(place, batch_size, beam_size, stop_gradient=False):
         paddle.increment(x=step_idx, value=1.0)
         paddle.tensor.array_write(score, i=step_idx, array=scores)
         length_cond = paddle.less_than(x=step_idx, y=max_len)
-        layers.assign(length_cond, cond)
+        paddle.assign(length_cond, cond)
     out = tensor_array_to_tensor(scores, axis=0, use_stack=True)[0]
     loss = paddle.mean(out)
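The assign at the end of the While block is the in-place form: paddle.assign(x, output=y) writes the value of x into the existing variable y instead of returning a new one, which is how the loop-condition variable gets refreshed each iteration. A minimal dynamic-mode sketch of that form (assumes output= behaves in-place here, mirroring the static loop above):

    import paddle

    cond = paddle.to_tensor([True])
    length_cond = paddle.to_tensor([False])
    paddle.assign(length_cond, output=cond)  # writes into cond; no new variable
    print(cond.numpy())  # [False]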
python/paddle/fluid/tests/unittests/test_eager_deletion_padding_rnn.py
@@ -164,7 +164,7 @@ def lm_model(
             weight_1 = weight_1_arr[k]
             bias = bias_arr[k]
-            nn = layers.concat([input, pre_hidden], 1)
+            nn = paddle.concat([input, pre_hidden], 1)
             gate_input = paddle.matmul(x=nn, y=weight_1)
             gate_input = paddle.add(gate_input, bias)
@@ -230,8 +230,8 @@ def lm_model(
             )
             last_cell_array.append(last_c)
         real_res = paddle.transpose(x=real_res, perm=[1, 0, 2])
-        last_hidden = layers.concat(last_hidden_array, 0)
+        last_hidden = paddle.concat(last_hidden_array, 0)
-        last_cell = layers.concat(last_cell_array, 0)
+        last_cell = paddle.concat(last_cell_array, 0)
         return real_res, last_hidden, last_cell
@@ -288,7 +288,7 @@ def lm_model(
             weight_1 = weight_1_arr[k]
             bias = bias_arr[k]
-            nn = layers.concat([input, pre_hidden], 1)
+            nn = paddle.concat([input, pre_hidden], 1)
             gate_input = paddle.matmul(x=nn, y=weight_1)
             gate_input = paddle.add(gate_input, bias)
@@ -314,19 +314,19 @@ def lm_model(
             res.append(input)
-        last_hidden = layers.concat(hidden_array, 1)
+        last_hidden = paddle.concat(hidden_array, 1)
         last_hidden = paddle.reshape(
             last_hidden, shape=[-1, num_layers, hidden_size]
         )
         last_hidden = paddle.transpose(x=last_hidden, perm=[1, 0, 2])
-        last_cell = layers.concat(cell_array, 1)
+        last_cell = paddle.concat(cell_array, 1)
         last_cell = paddle.reshape(
             last_cell, shape=[-1, num_layers, hidden_size]
         )
         last_cell = paddle.transpose(x=last_cell, perm=[1, 0, 2])
-        real_res = layers.concat(res, 0)
+        real_res = paddle.concat(res, 0)
         real_res = paddle.reshape(real_res, shape=[len, -1, hidden_size])
         real_res = paddle.transpose(x=real_res, perm=[1, 0, 2])
@@ -439,8 +439,8 @@ def lm_model(
     # can be used directly in next batch. This can avoid the fetching of
     # last_hidden and last_cell and feeding of init_hidden and init_cell in
     # each training step.
-    layers.assign(input=last_cell, output=init_cell)
+    paddle.assign(last_cell, output=init_cell)
-    layers.assign(input=last_hidden, output=init_hidden)
+    paddle.assign(last_hidden, output=init_hidden)
     feeding_list = ['x', 'y', 'init_hidden', 'init_cell']
     return loss, last_hidden, last_cell, feeding_list
python/paddle/fluid/tests/unittests/test_eager_deletion_recurrent_op.py
@@ -427,7 +427,7 @@ class EagerDeletionRecurrentOpMultipleMemoryTest(EagerDeletionRecurrentOpTest1):
             mem1 = paddle.scale(x=h_pre1, scale=1.0)
             mem2 = paddle.scale(x=h_pre2, scale=1.0)
-            out = layers.sums(input=[mem1, x_t, mem2])
+            out = paddle.add_n([mem1, x_t, mem2])
             rnn.update_memory(h_pre1, mem1)
             rnn.update_memory(h_pre2, mem2)
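layers.sums(input=[...]) is replaced by paddle.add_n([...]), an element-wise sum over a list of same-shape tensors. A minimal sketch (values illustrative):

    import paddle

    a = paddle.to_tensor([1.0, 2.0])
    b = paddle.to_tensor([10.0, 20.0])
    c = paddle.to_tensor([100.0, 200.0])
    out = paddle.add_n([a, b, c])  # element-wise: a + b + c
    print(out.numpy())  # [111. 222.]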
python/paddle/fluid/tests/unittests/test_eager_deletion_while_op.py
@@ -104,7 +104,7 @@ class TestEagerDeletionWhileOpBase(unittest.TestCase):
             prev = paddle.tensor.array_read(array=mem_array, i=i)
             d = paddle.reshape(d, shape=[10])
             prev = paddle.reshape(prev, shape=[10])
-            result = layers.sums(input=[d, prev])
+            result = paddle.add_n([d, prev])
             i = paddle.increment(x=i)
             paddle.tensor.array_write(result, i=i, array=mem_array)
@@ -114,7 +114,7 @@ class TestEagerDeletionWhileOpBase(unittest.TestCase):
             prev2 = paddle.tensor.array_read(array=mem_array, i=j)
             d2 = paddle.reshape(d2, shape=[10])
             prev2 = paddle.reshape(prev2, shape=[10])
-            result2 = layers.sums(input=[d2, prev2])
+            result2 = paddle.add_n([d2, prev2])
             j = paddle.increment(x=j)
             paddle.tensor.array_write(result2, i=j, array=mem_array)
python/paddle/fluid/tests/unittests/test_embedding_id_stop_gradient.py
@@ -51,7 +51,7 @@ class TestEmbeddingIdStopGradientBase(unittest.TestCase):
         with fluid.scope_guard(scope):
             x_1 = fluid.data(name='x1', shape=[4, 1], dtype='int64')
             x_2 = fluid.data(name='x2', shape=[4, 1], dtype='int64')
-            x = fluid.layers.concat([x_1, x_2], axis=-1)
+            x = paddle.concat([x_1, x_2], axis=-1)
             for _ in range(self.reshape_times):
                 x = paddle.reshape(x, [-1, 1])
python/paddle/fluid/tests/unittests/test_fetch_var.py
@@ -18,7 +18,6 @@ import numpy as np
 import paddle
 import paddle.fluid as fluid
-import paddle.fluid.layers as layers

 class TestFetchVar(unittest.TestCase):
@@ -30,7 +29,7 @@ class TestFetchVar(unittest.TestCase):
         x = paddle.tensor.create_tensor(
             dtype="int32", persistable=True, name="x"
         )
-        layers.assign(input=self.val, output=x)
+        paddle.assign(self.val, output=x)
         exe = fluid.Executor(fluid.CPUPlace())
         exe.run(fluid.default_main_program(), feed={}, fetch_list=[])
         fetched_x = fluid.executor._fetch_var("x")
python/paddle/fluid/tests/unittests/test_fleet.py
@@ -78,7 +78,7 @@ class TestFleet1(unittest.TestCase):
             dtype="int64",
             lod_level=1,
         )
-        label_cast = fluid.layers.cast(label, dtype='float32')
+        label_cast = paddle.cast(label, dtype='float32')
         cost = paddle.nn.functional.log_loss(fc, label_cast)
         try:
             adam = fluid.optimizer.Adam(learning_rate=0.000005)
python/paddle/fluid/tests/unittests/test_fleet_nocvm_1.py
@@ -72,7 +72,7 @@ class TestFleet1(unittest.TestCase):
             dtype="int64",
             lod_level=1,
         )
-        label_cast = fluid.layers.cast(label, dtype='float32')
+        label_cast = paddle.cast(label, dtype='float32')
         cost = paddle.nn.functional.log_loss(fc, label_cast)
         try:
             adam = fluid.optimizer.Adam(learning_rate=0.000005)
python/paddle/fluid/tests/unittests/test_fleet_rolemaker.py
@@ -89,7 +89,7 @@ class TestCloudRoleMaker(unittest.TestCase):
         label = paddle.static.data(
             name="click", shape=[-1, 1], dtype="int64", lod_level=1
         )
-        label_cast = fluid.layers.cast(label, dtype='float32')
+        label_cast = paddle.cast(label, dtype='float32')
         cost = paddle.nn.functional.log_loss(fc, label_cast)
         try:
             adam = fluid.optimizer.Adam(learning_rate=0.000005)
python/paddle/fluid/tests/unittests/test_fleet_rolemaker_2.py
@@ -70,7 +70,7 @@ class TestCloudRoleMaker2(unittest.TestCase):
         label = paddle.static.data(
             name="click", shape=[-1, 1], dtype="int64", lod_level=1
         )
-        label_cast = fluid.layers.cast(label, dtype='float32')
+        label_cast = paddle.cast(label, dtype='float32')
         cost = paddle.nn.functional.log_loss(fc, label_cast)
         try:
             adam = fluid.optimizer.Adam(learning_rate=0.000005)
python/paddle/fluid/tests/unittests/test_fleet_rolemaker_3.py
@@ -63,7 +63,7 @@ class TestCloudRoleMaker(unittest.TestCase):
         label = paddle.static.data(
             name="click", shape=[-1, 1], dtype="int64", lod_level=1
         )
-        label_cast = fluid.layers.cast(label, dtype='float32')
+        label_cast = paddle.cast(label, dtype='float32')
         cost = paddle.nn.functional.log_loss(fc, label_cast)
         try:
             adam = fluid.optimizer.Adam(learning_rate=0.000005)
python/paddle/fluid/tests/unittests/test_fleet_unitaccessor.py
@@ -66,7 +66,7 @@ class TestFleet1(unittest.TestCase):
         label = paddle.static.data(
             name="click", shape=[-1, 1], dtype="int64", lod_level=1
         )
-        label_cast = fluid.layers.cast(label, dtype='float32')
+        label_cast = paddle.cast(label, dtype='float32')
         cost = paddle.nn.functional.log_loss(fc, label_cast)
         strategy = {}
python/paddle/fluid/tests/unittests/test_imperative_auto_prune.py
@@ -88,8 +88,8 @@ class AutoPruneLayer2(fluid.Layer):
     def forward(self, x, label):
         feature = self.linear(x)
         label = self.linear2(label)
-        label = fluid.layers.cast(label, dtype="float32")
+        label = paddle.cast(label, dtype="float32")
-        label = fluid.layers.cast(label, dtype='int64')
+        label = paddle.cast(label, dtype='int64')
         # Note that the label is not persistable in paddle.nn.functional.cross_entropy.
         loss = paddle.nn.functional.cross_entropy(
             input=feature, label=label, reduction='none', use_softmax=False
@@ -244,7 +244,7 @@ class TestImperativeAutoPrune(unittest.TestCase):
         out1 = linear(a)
         out2 = linear2(b)
         out1.stop_gradient = True
-        out = fluid.layers.concat(input=[out1, out2, c], axis=1)
+        out = paddle.concat([out1, out2, c], axis=1)
         out.backward()
         self.assertIsNone(linear.weight.gradient())
         self.assertIsNone(out1.gradient())
@@ -262,7 +262,7 @@ class TestImperativeAutoPrune(unittest.TestCase):
         out1 = linear(a)
         out2 = linear2(b)
         out1.stop_gradient = True
-        out = fluid.layers.concat(input=[out1, out2, c], axis=1)
+        out = paddle.concat([out1, out2, c], axis=1)
         out.backward()
         self.assertIsNone(linear.weight.gradient())
         self.assertIsNone(out1.gradient())
@@ -338,7 +338,7 @@ class TestImperativeAutoPrune(unittest.TestCase):
         out1 = linear(a)
         out2 = linear2(b)
         out1.stop_gradient = True
-        out = fluid.layers.concat(input=[out1, out2, c], axis=1)
+        out = paddle.concat([out1, out2, c], axis=1)
         # TODO(jiabin): In Eager Mode we don't actually need sort_sum_gradient, this test should be removed when we don't support fluid anymore.
         fluid.set_flags({'FLAGS_sort_sum_gradient': True})
         out.backward()
@@ -413,8 +413,8 @@ class TestImperativeAutoPrune(unittest.TestCase):
         linear = paddle.nn.Linear(1, 1)
         label = fluid.dygraph.to_variable(value1).astype("float32")
         label = linear(label)
-        label = fluid.layers.cast(label, dtype="float32")
+        label = paddle.cast(label, dtype="float32")
-        label = fluid.layers.cast(label, dtype='int64')
+        label = paddle.cast(label, dtype='int64')
         out = paddle.nn.functional.one_hot(label, 100)
         loss = paddle.mean(out)
         loss.backward()
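In the dygraph tests the pattern is identical, just on eager tensors: the chained casts become paddle.cast calls feeding one_hot. A minimal sketch of the updated idiom (shapes and values illustrative):

    import paddle

    label = paddle.rand([4, 1])                      # float32 in [0, 1)
    label = paddle.cast(label * 10, dtype='int64')   # valid class ids < 100
    one_hot = paddle.nn.functional.one_hot(label.flatten(), num_classes=100)
    print(one_hot.shape)  # [4, 100]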
The diffs for the remaining changed files are collapsed in this view:

python/paddle/fluid/tests/unittests/test_imperative_deepcf.py
python/paddle/fluid/tests/unittests/test_imperative_ocr_attention_model.py
python/paddle/fluid/tests/unittests/test_imperative_ptb_rnn.py
python/paddle/fluid/tests/unittests/test_imperative_save_load_v2.py
python/paddle/fluid/tests/unittests/test_imperative_star_gan_with_gradient_penalty.py
python/paddle/fluid/tests/unittests/test_layers.py
python/paddle/fluid/tests/unittests/test_math_op_patch.py
python/paddle/fluid/tests/unittests/test_mix_precision_all_reduce_fuse.py
python/paddle/fluid/tests/unittests/test_nn_functional_hot_op.py
python/paddle/fluid/tests/unittests/test_nonzero_api.py
python/paddle/fluid/tests/unittests/test_one_hot_v2_op.py
python/paddle/fluid/tests/unittests/test_optimizer_grad.py
python/paddle/fluid/tests/unittests/test_recurrent_op.py
python/paddle/fluid/tests/unittests/test_reduce_op.py
python/paddle/fluid/tests/unittests/test_regularizer.py
python/paddle/fluid/tests/unittests/test_regularizer_api.py
python/paddle/fluid/tests/unittests/test_stack_op.py
python/paddle/fluid/tests/unittests/test_static_save_load.py
python/paddle/fluid/tests/unittests/test_sum_op.py
python/paddle/fluid/tests/unittests/test_switch.py
python/paddle/fluid/tests/unittests/test_sync_batch_norm_op.py
python/paddle/fluid/tests/unittests/test_tensor_array_to_tensor.py
python/paddle/fluid/tests/unittests/test_variable.py
python/paddle/fluid/tests/unittests/test_weight_decay.py
python/paddle/fluid/tests/unittests/test_where_op.py
python/paddle/fluid/tests/unittests/test_while_op.py
python/paddle/fluid/tests/unittests/xpu/test_assign_value_op_xpu.py
python/paddle/fluid/tests/unittests/xpu/test_cast_op_xpu.py
python/paddle/fluid/tests/unittests/xpu/test_one_hot_v2_op_xpu.py
python/paddle/fluid/tests/unittests/xpu/test_sum_op_xpu.py
python/paddle/fluid/tests/unittests/xpu/test_while_op_xpu.py
python/paddle/fluid/variable_index.py
python/paddle/geometric/message_passing/utils.py
python/paddle/incubate/autograd/composite_rules.py
python/paddle/incubate/autograd/primitives.py
python/paddle/incubate/operators/graph_send_recv.py
python/paddle/incubate/optimizer/modelaverage.py
python/paddle/jit/dy2static/convert_operators.py
python/paddle/jit/dy2static/utils.py
python/paddle/nn/clip.py
python/paddle/nn/functional/loss.py
python/paddle/nn/layer/rnn.py
python/paddle/static/amp/decorator.py
python/paddle/static/nn/control_flow.py