Unverified commit b85af464
Authored by mhy-666 on Feb 14, 2023; committed via GitHub on Feb 14, 2023.
remove layers.tensor.argmin/argmax/assign/cast/concat/sums (#49944)
Parent: 61a933ac

Showing 144 changed files with 381 additions and 976 deletions (+381, -976).
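The rewrite pattern is uniform across all 144 files: every call into a removed `fluid.layers` op is replaced by its `paddle` 2.x equivalent. The sketch below summarizes the mapping applied throughout this commit; it is illustrative only (the tensors are made up) and assumes a Paddle 2.x install.

    import paddle

    x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]])
    y = paddle.to_tensor([[5.0, 6.0], [7.0, 8.0]])

    x_int = paddle.cast(x, 'int32')      # was fluid.layers.cast(x, 'int32')
    xy = paddle.concat([x, y], axis=1)   # was fluid.layers.concat(input=[x, y], axis=1)
    s = paddle.add_n([x, y])             # was fluid.layers.sums(input=[x, y])
    out = paddle.zeros_like(x)
    paddle.assign(x, output=out)         # was fluid.layers.assign(input=x, output=out)
    imin = paddle.argmin(x, axis=0)      # was fluid.layers.argmin(x, axis=0)
    imax = paddle.argmax(x, axis=0)      # was fluid.layers.argmax(x, axis=0)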
python/paddle/distribution/categorical.py  +1 -1
python/paddle/distribution/distribution.py  +2 -2
python/paddle/distribution/normal.py  +2 -2
python/paddle/distribution/uniform.py  +6 -6
python/paddle/fluid/contrib/extend_optimizer/extend_optimizer_with_weight_decay.py  +1 -1
python/paddle/fluid/contrib/tests/test_multi_precision_fp16_train.py  +2 -2
python/paddle/fluid/contrib/tests/test_weight_decay_extend.py  +1 -1
python/paddle/fluid/dygraph/parallel.py  +1 -1
python/paddle/fluid/framework.py  +2 -2
python/paddle/fluid/incubate/fleet/tests/fleet_deep_ctr.py  +1 -1
python/paddle/fluid/layers/control_flow.py  +5 -5
python/paddle/fluid/layers/learning_rate_scheduler.py  +4 -4
python/paddle/fluid/layers/nn.py  +1 -1
python/paddle/fluid/layers/tensor.py  +2 -576
python/paddle/fluid/layers/utils.py  +4 -3
python/paddle/fluid/optimizer.py  +15 -17
python/paddle/fluid/tests/book/test_recommender_system.py  +4 -4
python/paddle/fluid/tests/book/test_word2vec_book.py  +2 -2
python/paddle/fluid/tests/unittests/auto_parallel/test_fp16_assign.py  +1 -1
python/paddle/fluid/tests/unittests/check_nan_inf_base.py  +1 -1
python/paddle/fluid/tests/unittests/collective/collective_sendrecv_op_array.py  +4 -12
python/paddle/fluid/tests/unittests/collective/fleet/hybrid_parallel_inference_helper.py  +4 -4
python/paddle/fluid/tests/unittests/dist_ctr.py  +1 -1
python/paddle/fluid/tests/unittests/dist_fleet_ctr.py  +1 -1
python/paddle/fluid/tests/unittests/dist_fleet_heter_pipeline_ctr.py  +2 -2
python/paddle/fluid/tests/unittests/dist_fleet_simnet_bow.py  +1 -1
python/paddle/fluid/tests/unittests/dist_fleet_sparse_embedding_ctr.py  +2 -1
python/paddle/fluid/tests/unittests/dist_transformer.py  +6 -6
python/paddle/fluid/tests/unittests/dist_word2vec.py  +2 -2
python/paddle/fluid/tests/unittests/dygraph_to_static/bert_dygraph_model.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/seq2seq_dygraph_model.py  +6 -6
python/paddle/fluid/tests/unittests/dygraph_to_static/simnet_dygraph_model.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_ast_util.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_basic_api_transformation.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_bmn.py  +16 -16
python/paddle/fluid/tests/unittests/dygraph_to_static/test_convert_call.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_lac.py  +2 -2
python/paddle/fluid/tests/unittests/dygraph_to_static/test_list.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_ptb_lm.py  +4 -4
python/paddle/fluid/tests/unittests/dygraph_to_static/test_reinforcement_learning.py  +1 -1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_sentiment.py  +2 -2
python/paddle/fluid/tests/unittests/dygraph_to_static/transformer_dygraph_model.py  +3 -3
python/paddle/fluid/tests/unittests/dygraph_to_static/yolov3.py  +4 -6
python/paddle/fluid/tests/unittests/fleet_heter_ps_training.py  +2 -2
python/paddle/fluid/tests/unittests/fleet_ps_training.py  +2 -2
python/paddle/fluid/tests/unittests/ipu/test_arg_max_op_ipu.py  +1 -1
python/paddle/fluid/tests/unittests/ipu/test_arg_min_op_ipu.py  +1 -1
python/paddle/fluid/tests/unittests/ipu/test_concat_op_ipu.py  +1 -1
python/paddle/fluid/tests/unittests/ir/inference/test_trt_slice_plugin.py  +2 -2
python/paddle/fluid/tests/unittests/ir/inference/test_trt_subgraph_pass.py  +1 -1
python/paddle/fluid/tests/unittests/ir/inference/test_trt_transpose_flatten_concat_fuse_pass.py  +1 -1
python/paddle/fluid/tests/unittests/ir/test_ir_fusion_group_pass.py  +7 -7
python/paddle/fluid/tests/unittests/mlu/sync_batch_norm_op_mlu.py  +2 -2
python/paddle/fluid/tests/unittests/mlu/test_cast_op_mlu.py  +1 -1
python/paddle/fluid/tests/unittests/mlu/test_one_hot_v2_op_mlu.py  +1 -1
python/paddle/fluid/tests/unittests/mlu/test_where_op_mlu.py  +2 -2
python/paddle/fluid/tests/unittests/npu/test_assign_value_op_npu.py  +1 -1
python/paddle/fluid/tests/unittests/npu/test_concat_op_npu.py  +2 -2
python/paddle/fluid/tests/unittests/npu/test_one_hot_v2_op_npu.py  +1 -1
python/paddle/fluid/tests/unittests/npu/test_stack_op_npu.py  +2 -2
python/paddle/fluid/tests/unittests/npu/test_while_op_npu.py  +9 -9
python/paddle/fluid/tests/unittests/sequence/test_sequence_pad_op.py  +7 -9
python/paddle/fluid/tests/unittests/standalone_executor/test_standalone_multiply_write.py  +2 -2
python/paddle/fluid/tests/unittests/test_array_read_write_op.py  +4 -6
python/paddle/fluid/tests/unittests/test_assign_op.py  +7 -7
python/paddle/fluid/tests/unittests/test_assign_value_op.py  +1 -2
python/paddle/fluid/tests/unittests/test_cast_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_communicator_geo.py  +2 -1
python/paddle/fluid/tests/unittests/test_compare_op.py  +6 -6
python/paddle/fluid/tests/unittests/test_compare_reduce_op.py  +2 -3
python/paddle/fluid/tests/unittests/test_concat_op.py  +17 -14
python/paddle/fluid/tests/unittests/test_conditional_block.py  +1 -1
python/paddle/fluid/tests/unittests/test_dataset.py  +2 -2
python/paddle/fluid/tests/unittests/test_dist_fleet_heter_program.py  +2 -2
python/paddle/fluid/tests/unittests/test_dist_fleet_minimize.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps11.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps12.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps13.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps2.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps3.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps4.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps5.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_ps6.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_sparse_embedding_ctr.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_fleet_spmt.py  +1 -1
python/paddle/fluid/tests/unittests/test_dist_transpiler.py  +1 -3
python/paddle/fluid/tests/unittests/test_dynamic_rnn_stop_gradient.py  +4 -4
python/paddle/fluid/tests/unittests/test_eager_deletion_padding_rnn.py  +9 -9
python/paddle/fluid/tests/unittests/test_eager_deletion_recurrent_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_eager_deletion_while_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_embedding_id_stop_gradient.py  +1 -1
python/paddle/fluid/tests/unittests/test_fetch_var.py  +1 -2
python/paddle/fluid/tests/unittests/test_fleet.py  +1 -1
python/paddle/fluid/tests/unittests/test_fleet_nocvm_1.py  +1 -1
python/paddle/fluid/tests/unittests/test_fleet_rolemaker.py  +1 -1
python/paddle/fluid/tests/unittests/test_fleet_rolemaker_2.py  +1 -1
python/paddle/fluid/tests/unittests/test_fleet_rolemaker_3.py  +1 -1
python/paddle/fluid/tests/unittests/test_fleet_unitaccessor.py  +1 -1
python/paddle/fluid/tests/unittests/test_imperative_auto_prune.py  +7 -7
python/paddle/fluid/tests/unittests/test_imperative_deepcf.py  +2 -4
python/paddle/fluid/tests/unittests/test_imperative_ocr_attention_model.py  +3 -5
python/paddle/fluid/tests/unittests/test_imperative_ptb_rnn.py  +4 -4
python/paddle/fluid/tests/unittests/test_imperative_save_load_v2.py  +4 -4
python/paddle/fluid/tests/unittests/test_imperative_star_gan_with_gradient_penalty.py  +1 -1
python/paddle/fluid/tests/unittests/test_layers.py  +3 -3
python/paddle/fluid/tests/unittests/test_math_op_patch.py  +1 -1
python/paddle/fluid/tests/unittests/test_mix_precision_all_reduce_fuse.py  +2 -2
python/paddle/fluid/tests/unittests/test_nn_functional_hot_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_nonzero_api.py  +2 -2
python/paddle/fluid/tests/unittests/test_one_hot_v2_op.py  +1 -1
python/paddle/fluid/tests/unittests/test_optimizer_grad.py  +1 -1
python/paddle/fluid/tests/unittests/test_recurrent_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_reduce_op.py  +4 -4
python/paddle/fluid/tests/unittests/test_regularizer.py  +1 -1
python/paddle/fluid/tests/unittests/test_regularizer_api.py  +1 -1
python/paddle/fluid/tests/unittests/test_stack_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_static_save_load.py  +5 -5
python/paddle/fluid/tests/unittests/test_sum_op.py  +6 -5
python/paddle/fluid/tests/unittests/test_switch.py  +6 -6
python/paddle/fluid/tests/unittests/test_sync_batch_norm_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_tensor_array_to_tensor.py  +4 -4
python/paddle/fluid/tests/unittests/test_variable.py  +1 -1
python/paddle/fluid/tests/unittests/test_weight_decay.py  +1 -1
python/paddle/fluid/tests/unittests/test_where_op.py  +2 -2
python/paddle/fluid/tests/unittests/test_while_op.py  +5 -4
python/paddle/fluid/tests/unittests/xpu/test_assign_value_op_xpu.py  +1 -2
python/paddle/fluid/tests/unittests/xpu/test_cast_op_xpu.py  +1 -1
python/paddle/fluid/tests/unittests/xpu/test_one_hot_v2_op_xpu.py  +1 -1
python/paddle/fluid/tests/unittests/xpu/test_sum_op_xpu.py  +6 -5
python/paddle/fluid/tests/unittests/xpu/test_while_op_xpu.py  +3 -3
python/paddle/fluid/variable_index.py  +3 -7
python/paddle/geometric/message_passing/utils.py  +1 -2
python/paddle/incubate/autograd/composite_rules.py  +8 -4
python/paddle/incubate/autograd/primitives.py  +4 -4
python/paddle/incubate/operators/graph_send_recv.py  +2 -2
python/paddle/incubate/optimizer/modelaverage.py  +5 -5
python/paddle/jit/dy2static/convert_operators.py  +5 -5
python/paddle/jit/dy2static/utils.py  +1 -2
python/paddle/nn/clip.py  +6 -6
python/paddle/nn/functional/loss.py  +1 -1
python/paddle/nn/layer/rnn.py  +1 -1
python/paddle/static/amp/decorator.py  +1 -2
python/paddle/static/nn/control_flow.py  +2 -3
python/paddle/distribution/categorical.py
@@ -112,7 +112,7 @@ class Categorical(distribution.Distribution):
         self.dtype = logits.dtype
         self.logits = self._to_tensor(logits)[0]
         if self.dtype != convert_dtype(self.logits.dtype):
-            self.logits = tensor.cast(self.logits, dtype=self.dtype)
+            self.logits = paddle.cast(self.logits, dtype=self.dtype)
         dist_sum = paddle.sum(self.logits, axis=-1, keepdim=True)
         self._prob = self.logits / dist_sum
python/paddle/distribution/distribution.py
@@ -200,7 +200,7 @@ class Distribution:
         for arg in numpy_args:
             arg_broadcasted, _ = np.broadcast_arrays(arg, tmp)
             arg_variable = paddle.tensor.create_tensor(dtype=dtype)
-            tensor.assign(arg_broadcasted, arg_variable)
+            paddle.assign(arg_broadcasted, arg_variable)
             variable_args.append(arg_variable)

         return tuple(variable_args)
@@ -235,7 +235,7 @@ class Distribution:
             warnings.warn(
                 "dtype of input 'value' needs to be the same as parameters of distribution class. dtype of 'value' will be converted."
             )
-            return tensor.cast(value, dtype=param.dtype)
+            return paddle.cast(value, dtype=param.dtype)
         return value

     def _probs_to_logits(self, probs, is_binary=False):
python/paddle/distribution/normal.py
@@ -132,8 +132,8 @@ class Normal(distribution.Distribution):
             # pylint: disable=unbalanced-tuple-unpacking
             self.loc, self.scale = self._to_tensor(loc, scale)
             if self.dtype != convert_dtype(self.loc.dtype):
-                self.loc = tensor.cast(self.loc, dtype=self.dtype)
-                self.scale = tensor.cast(self.scale, dtype=self.dtype)
+                self.loc = paddle.cast(self.loc, dtype=self.dtype)
+                self.scale = paddle.cast(self.scale, dtype=self.dtype)
         super().__init__(self.loc.shape)

     @property
python/paddle/distribution/uniform.py
@@ -137,8 +137,8 @@ class Uniform(distribution.Distribution):
             # pylint: disable=unbalanced-tuple-unpacking
             self.low, self.high = self._to_tensor(low, high)
             if self.dtype != convert_dtype(self.low.dtype):
-                self.low = tensor.cast(self.low, dtype=self.dtype)
-                self.high = tensor.cast(self.high, dtype=self.dtype)
+                self.low = paddle.cast(self.low, dtype=self.dtype)
+                self.high = paddle.cast(self.high, dtype=self.dtype)
         super().__init__(self.low.shape)
@@ -218,8 +218,8 @@ class Uniform(distribution.Distribution):
             name = self.name + '_log_prob'
             lb_bool = self.low < value
             ub_bool = value < self.high
-            lb = tensor.cast(lb_bool, dtype=value.dtype)
-            ub = tensor.cast(ub_bool, dtype=value.dtype)
+            lb = paddle.cast(lb_bool, dtype=value.dtype)
+            ub = paddle.cast(ub_bool, dtype=value.dtype)
             return paddle.subtract(
                 paddle.log(lb * ub), paddle.log(self.high - self.low), name=name
             )
@@ -245,8 +245,8 @@ class Uniform(distribution.Distribution):
             name = self.name + '_probs'
             lb_bool = self.low < value
             ub_bool = value < self.high
-            lb = tensor.cast(lb_bool, dtype=value.dtype)
-            ub = tensor.cast(ub_bool, dtype=value.dtype)
+            lb = paddle.cast(lb_bool, dtype=value.dtype)
+            ub = paddle.cast(ub_bool, dtype=value.dtype)
             return paddle.divide((lb * ub), (self.high - self.low), name=name)

     def entropy(self):
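All four distribution classes above make the same substitution: parameters are coerced to the declared dtype with paddle.cast in place of the legacy tensor.cast. A standalone sketch of that guard-then-cast pattern, assuming convert_dtype is importable from paddle.fluid.data_feeder as it is in this codebase (the values are illustrative):

    import paddle
    from paddle.fluid.data_feeder import convert_dtype  # assumed import path

    low = paddle.to_tensor([0.0], dtype='float64')
    dtype = 'float32'
    if dtype != convert_dtype(low.dtype):    # cast only when the dtypes actually differ
        low = paddle.cast(low, dtype=dtype)
    print(low.dtype)                         # paddle.float32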
python/paddle/fluid/contrib/extend_optimizer/extend_optimizer_with_weight_decay.py
@@ -96,7 +96,7 @@ class DecoupledWeightDecay:
                 [param, grad]
             ), framework.name_scope('weight decay'):
                 updated_param = paddle.subtract(x=param, y=scaled_param)
-                paddle.fluid.layers.assign(input=updated_param, output=param)
+                paddle.assign(updated_param, output=param)

         optimize_ops = self.apply_optimize(
             loss=loss,
python/paddle/fluid/contrib/tests/test_multi_precision_fp16_train.py
@@ -295,9 +295,9 @@ class TestAmpWithNonIterableDataLoader(unittest.TestCase):
             )
             with fluid.layers.control_flow.Switch() as switch:
                 with switch.case(label != zero_var):
-                    fluid.layers.assign(input=zero_var, output=label)
+                    paddle.assign(zero_var, output=label)
                 with switch.default():
-                    fluid.layers.assign(input=one_var, output=label)
+                    paddle.assign(one_var, output=label)
             net = resnet_cifar10(image)
             logits = paddle.static.nn.fc(
python/paddle/fluid/contrib/tests/test_weight_decay_extend.py
@@ -182,7 +182,7 @@ class TestWeightDecay(unittest.TestCase):
             for params in param_list:
                 updated_p = paddle.subtract(x=params[0], y=params[1])
-                fluid.layers.assign(input=updated_p, output=params[0])
+                paddle.assign(updated_p, output=params[0])

             optimizer.apply_optimize(avg_cost, startup_prog, params_grads)
python/paddle/fluid/dygraph/parallel.py
@@ -65,7 +65,7 @@ def _coalesce_tensors(var_groups):
             flattened_vars.append(
                 paddle.reshape(x=g_var, shape=[np.prod(g_var.shape)])
             )
-        coalesced_grad = nn.concat(flattened_vars)
+        coalesced_grad = paddle.concat(flattened_vars)
         coalesced_grads_and_grad_vars.append(
             [coalesced_grad, grad_vars, g_var_shapes]
         )
python/paddle/fluid/framework.py
@@ -1703,7 +1703,7 @@ class Variable(metaclass=VariableMetaClass):
                     tmp = fluid.dygraph.base.to_variable(x)
                     tmp.stop_gradient=False
                     inputs2.append(tmp)
-                ret2 = fluid.layers.sums(inputs2)
+                ret2 = paddle.add_n(inputs2)
                 loss2 = paddle.sum(ret2)
                 loss2.backward()
                 print(loss2.gradient())
@@ -1751,7 +1751,7 @@ class Variable(metaclass=VariableMetaClass):
                     tmp = fluid.dygraph.base.to_variable(x)
                     tmp.stop_gradient=False
                     inputs2.append(tmp)
-                ret2 = fluid.layers.sums(inputs2)
+                ret2 = paddle.add_n(inputs2)
                 loss2 = paddle.sum(ret2)
                 loss2.backward()
                 print(loss2.gradient())
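The docstring change above swaps fluid.layers.sums for paddle.add_n, which computes the same element-wise sum over a list of tensors. A minimal dygraph sketch of the updated example, using paddle.to_tensor in place of the legacy to_variable (otherwise as in the docstring):

    import numpy as np
    import paddle

    x = np.ones([2, 2], np.float32)
    inputs2 = []
    for _ in range(10):
        tmp = paddle.to_tensor(x)      # legacy example used fluid.dygraph.base.to_variable(x)
        tmp.stop_gradient = False
        inputs2.append(tmp)
    ret2 = paddle.add_n(inputs2)       # element-wise sum of the list
    loss2 = paddle.sum(ret2)
    loss2.backward()
    print(loss2.gradient())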
python/paddle/fluid/incubate/fleet/tests/fleet_deep_ctr.py
@@ -144,7 +144,7 @@ def model():
         input=lr_embbding, pool_type="sum"
     )

-    merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+    merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)
     predict = paddle.static.nn.fc(x=merge_layer, size=2, activation='softmax')
     acc = paddle.static.accuracy(input=predict, label=label)
python/paddle/fluid/layers/control_flow.py
@@ -15,7 +15,7 @@
 from ..wrapped_decorator import signature_safe_contextmanager
 from .layer_function_generator import templatedoc
-from .tensor import assign, cast, fill_constant
+from .tensor import fill_constant
 from .. import core
 from ..framework import (
     Program,
@@ -1058,7 +1058,7 @@ def assign_skip_lod_tensor_array(input, output):
     if isinstance(output, Variable) and isinstance(
         input, support_ret_buildin_type
     ):
-        assign(input, output)
+        paddle.assign(input, output)
     else:
         output = input
     return
@@ -1069,7 +1069,7 @@ def assign_skip_lod_tensor_array(input, output):
             main_program.current_block().parent_idx
         )
         if parent_block and not parent_block._find_var_recursive(input.name):
-            assign(input, output)
+            paddle.assign(input, output)
     else:
         if (
             isinstance(output, Variable)
@@ -1081,7 +1081,7 @@ def assign_skip_lod_tensor_array(input, output):
                     input.shape, output.shape
                 )
             )
-        assign(input, output)
+        paddle.assign(input, output)


 # (TODO: Mine) There exists dependency (jit.dy2static.convert_operators). It will be removed later.
@@ -1195,7 +1195,7 @@ def while_loop(cond, body, loop_vars, is_test=False, name=None):
             )
             now_cond = cond(*output_vars)
             map_structure(assign_skip_lod_tensor_array, output_vars, loop_vars)
-            assign(now_cond, pre_cond)
+            paddle.assign(now_cond, pre_cond)
     return loop_vars
python/paddle/fluid/layers/learning_rate_scheduler.py
@@ -55,7 +55,7 @@ def _decay_step_counter(begin=0):
     global_step = nn.autoincreased_step_counter(
         counter_name='@LR_DECAY_COUNTER@', begin=begin, step=1
     )
-    global_step = tensor.cast(global_step, 'float32')
+    global_step = paddle.cast(global_step, 'float32')
     return global_step
@@ -361,7 +361,7 @@ def polynomial_decay(
             with control_flow.Switch() as switch:
                 with switch.case(global_step == zero_var):
-                    tensor.assign(input=one_var, output=div_res)
+                    paddle.assign(one_var, output=div_res)
             decay_steps = decay_steps * div_res
         else:
             decay_steps_var = tensor.fill_constant(
@@ -595,11 +595,11 @@ def linear_lr_warmup(learning_rate, warmup_steps, start_lr, end_lr):
                 decayed_lr = start_lr + linear_step * (
                     global_step / float(warmup_steps)
                 )
-                tensor.assign(decayed_lr, lr)
+                paddle.assign(decayed_lr, lr)
             with switch.default():
                 if not isinstance(learning_rate, Variable):
                     learning_rate = tensor.fill_constant(
                         shape=[1], dtype=dtype, value=float(learning_rate)
                     )
-                tensor.assign(learning_rate, lr)
+                paddle.assign(learning_rate, lr)
     return lr
python/paddle/fluid/layers/nn.py
@@ -41,7 +41,7 @@ from .layer_function_generator import (
     templatedoc,
     _generate_doc_string_,
 )
-from .tensor import concat, assign, fill_constant, zeros
+from .tensor import fill_constant, zeros
 from . import utils
 from .. import unique_name
 from .. import core
python/paddle/fluid/layers/tensor.py
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import paddle
 import numpy
 import warnings
@@ -39,449 +40,12 @@ from .utils import check_shape
 from paddle import _C_ops, _legacy_C_ops

 __all__ = [
-    'cast',
-    'concat',
-    'sums',
-    'assign',
     'fill_constant_batch_size_like',
     'fill_constant',
-    'argmin',
-    'argmax',
     'zeros',
 ]
-
-
-def cast(x, dtype):
-    """
-    This OP takes in the Tensor :attr:`x` with :attr:`x.dtype` and casts it
-    to the output with :attr:`dtype`. It's meaningless if the output dtype
-    equals the input dtype, but it's fine if you do so.
-
-    Args:
-        x(Tensor): An input N-D Tensor with data type bool, float16,
-            float32, float64, int32, int64, uint8.
-        dtype(np.dtype|str): Data type of the output:
-            bool, float16, float32, float64, int8, int32, int64, uint8.
-
-    Returns:
-        Tensor: A Tensor with the same shape as input's.
-
-    Examples:
-        .. code-block:: python
-
-            import paddle
-
-            x = paddle.to_tensor([2, 3, 4], 'float64')
-            y = paddle.cast(x, 'uint8')
-    """
-    if in_dygraph_mode():
-        if not isinstance(dtype, core.VarDesc.VarType):
-            dtype = convert_np_dtype_to_dtype_(dtype)
-        return _C_ops.cast(x, dtype)
-    else:
-        check_variable_and_dtype(
-            x,
-            'x',
-            ['bool', 'float16', 'float32', 'float64', 'int16', 'int32',
-             'int64', 'uint8', 'uint16'],
-            'cast',
-        )
-        check_dtype(
-            dtype,
-            'dtype',
-            ['bool', 'float16', 'float32', 'float64', 'int8', 'int16',
-             'int32', 'int64', 'uint8', 'uint16'],
-            'cast',
-        )
-        helper = LayerHelper('cast', **locals())
-        out = helper.create_variable_for_type_inference(
-            dtype=dtype, stop_gradient=x.stop_gradient
-        )
-        helper.append_op(
-            type='cast',
-            inputs={'X': [x]},
-            outputs={'Out': [out]},
-            attrs={'in_dtype': x.dtype, 'out_dtype': out.dtype},
-        )
-        return out
-
-
-def concat(input, axis=0, name=None):
-    """
-    This OP concatenates the input along the axis.
-
-    Args:
-        input(list|tuple|Tensor): ``input`` can be Tensor, Tensor list or Tensor tuple which is with data type
-            bool, float16, float32, float64, int32, int64. All the Tensors in ``input`` must have the same data type.
-        axis(int|Tensor, optional): Specify the axis to operate on the input Tensors.
-            It's a scalar with data type int or a Tensor with shape [1] and data type int32 or int64.
-            The effective range is [-R, R), where R is Rank(x). When ``axis < 0``, it works the same way
-            as ``axis+R``. Default is 0.
-        name (str, optional): The default value is None. Normally there is no
-            need for user to set this property. For more information, please
-            refer to :ref:`api_guide_Name`.
-
-    Returns:
-        Tensor: A Tensor with the same data type as ``input``.
-
-    Examples:
-        .. code-block:: python
-
-            import paddle.fluid as fluid
-            import numpy as np
-
-            in1 = np.array([[1, 2, 3],
-                            [4, 5, 6]])
-            in2 = np.array([[11, 12, 13],
-                            [14, 15, 16]])
-            in3 = np.array([[21, 22],
-                            [23, 24]])
-            with fluid.dygraph.guard():
-                x1 = fluid.dygraph.to_variable(in1)
-                x2 = fluid.dygraph.to_variable(in2)
-                x3 = fluid.dygraph.to_variable(in3)
-                # When the axis is negative, the real axis is (axis + Rank(x)).
-                # As follows, axis is -1, Rank(x) is 2, the real axis is 1
-                out1 = fluid.layers.concat(input=[x1, x2, x3], axis=-1)
-                out2 = fluid.layers.concat(input=[x1, x2], axis=0)
-                print(out1.numpy())
-                # [[ 1  2  3 11 12 13 21 22]
-                #  [ 4  5  6 14 15 16 23 24]]
-                print(out2.numpy())
-                # [[ 1  2  3]
-                #  [ 4  5  6]
-                #  [11 12 13]
-                #  [14 15 16]]
-    """
-    if in_dygraph_mode():
-        if isinstance(axis, Variable):
-            axis = axis.numpy()
-            axis = axis.item(0)
-        if not isinstance(input, Variable):
-            input = [t for t in input if t.shape.count(0) == 0]
-        out = _C_ops.concat(input, axis)
-        return out
-    else:
-        check_type(input, 'input', (list, tuple, Variable), 'concat')
-        if not isinstance(input, Variable):
-            for id, x in enumerate(input):
-                check_variable_and_dtype(
-                    x,
-                    'input[' + str(id) + ']',
-                    ['bool', 'float16', 'float32', 'float64', 'int32', 'int64'],
-                    'concat',
-                )
-                if x.dtype != input[0].dtype:
-                    raise TypeError(
-                        "All the Tensors in the input must have the same data type."
-                    )
-        else:
-            input = [input]
-        check_type(axis, 'axis', (int, Variable), 'concat')
-
-        if isinstance(axis, Variable):
-            check_dtype(
-                axis.dtype,
-                'axis',
-                ['int32', 'int64'],
-                'concat',
-                "The data type of axis must be int32 or int64 when axis is a Tensor",
-            )
-
-        helper = LayerHelper('concat', **locals())
-        out = helper.create_variable_for_type_inference(
-            dtype=helper.input_dtype()
-        )
-
-        if input[0].desc.type() == core.VarDesc.VarType.LOD_TENSOR_ARRAY:
-            # NOTE(liym27): Don't remove this if branch!
-            # This feature is supported for Dynamic-to-Static, because after transformed, the type of inputs[0]
-            # is LOD_TENSOR_ARRAY in some scenarios. And this feature can be used in static mode.
-            assert len(input) == 1, (
-                "If the elements of 'input' in concat are Variable(LoDTensorArray), "
-                "number of the elements must be 1, but received %s." % len(input)
-            )
-            out_index = helper.create_variable_for_type_inference(dtype="int32")
-            helper.append_op(
-                type='tensor_array_to_tensor',
-                inputs={'X': input[0]},
-                outputs={'Out': [out], 'OutIndex': [out_index]},
-                attrs={'axis': axis, 'use_stack': False},
-            )
-        else:
-            inputs = {'X': input}
-            attrs = {}
-            if isinstance(axis, Variable):
-                axis.stop_gradient = True
-            attrs['axis'] = axis
-
-            helper.append_op(
-                type='concat',
-                inputs=inputs,
-                outputs={'Out': [out]},
-                attrs=attrs,
-            )
-        return out
-
-
-def sums(input, out=None):
-    r"""
-    This function computes the sum of multiple input Tensors elementwisely.
-
-    - Case 1, sum of 3 Tensors
-
-    .. code-block:: text
-
-        # Input Tensors
-        x0.shape = [2, 3]
-        x0.data = [[1., 2., 3.],
-                   [4., 5., 6.]]
-        x1.shape = [2, 3]
-        x1.data = [[10., 20., 30.],
-                   [40., 50., 60.]]
-        x2.shape = [2, 3]
-        x2.data = [[100., 200., 300.],
-                   [400., 500., 600.]]
-
-        # Output Tensor
-        out.shape = [2, 3]
-        out.data = [[111., 222., 333.],
-                    [444., 555., 666.]]
-
-    Args:
-        input (list): A list of Variables which hold input Tensors with the same
-            data type and shape. Optional data types are: float32, float64, int32, int64.
-        out (Variable, optional): Output Tensor. It can be any existing Variable.
-            The default value is None, then a new Variable will be created and returned.
-
-    Returns:
-        Variable: The sum of inputs. The shape and data type is the same with input. \
-            If :code:`out` is not None, the returned value is :code:`out` .
-
-    Examples:
-        .. code-block:: python
-
-            import paddle.fluid as fluid
-
-            x0 = fluid.layers.fill_constant(shape=[16, 32], dtype='int64', value=1)
-            x1 = fluid.layers.fill_constant(shape=[16, 32], dtype='int64', value=2)
-            x2 = fluid.layers.fill_constant(shape=[16, 32], dtype='int64', value=3)
-            x3 = fluid.layers.fill_constant(shape=[16, 32], dtype='int64', value=0)
-
-            # Sum of multiple Tensors, the result is stored to a new Variable sum0 (sum0=x0+x1+x2, the value is [[6, ..., 6], ..., [6, ..., 6]])
-            sum0 = fluid.layers.sums(input=[x0, x1, x2])
-
-            # Sum of multiple Tensors, sum1 and x3 represents the same Variable (x3=x0+x1+x2, the value is [[6, ..., 6], ..., [6, ..., 6]])
-            sum1 = fluid.layers.sums(input=[x0, x1, x2], out=x3)
-    """
-    check_type(input, 'input', (Variable, tuple, list), 'sums')
-    if isinstance(input, list) or isinstance(input, tuple):
-        for input_section in input:
-            check_variable_and_dtype(
-                input_section,
-                "input",
-                ['float16', 'float32', 'float64', 'int32', 'int64'],
-                'sums',
-            )
-    else:
-        check_variable_and_dtype(
-            input,
-            "input",
-            ['float16', 'float32', 'float64', 'int32', 'int64'],
-            'sums',
-        )
-
-    helper = LayerHelper('sum', **locals())
-    if out is None:
-        out = helper.create_variable_for_type_inference(
-            dtype=helper.input_dtype()
-        )
-    else:
-        check_variable_and_dtype(
-            out, "out", ['float32', 'float64', 'int32', 'int64'], 'sums'
-        )
-
-    helper.append_op(
-        type='sum',
-        inputs={'X': input},
-        outputs={'Out': out},
-        attrs={'use_mkldnn': False},
-    )
-    return out
-
-
-def assign(input, output=None):
-    """
-    The OP copies the :attr:`input` to the :attr:`output`.
-
-    Parameters:
-        input (Tensor|numpy.ndarray|list|tuple|scalar): A tensor, numpy ndarray, tuple/list of scalar,
-            or scalar. Its data type supports float16, float32, float64, int32, int64, and bool.
-            Note: the float64 data will be converted to float32 because of current platform protobuf
-            data limitation.
-        output (Tensor, optional): A tensor. If :attr:`output` is None, a new tensor will
-            be created as :attr:`output`. Default: None.
-
-    Returns:
-        Tensor: A tensor with the same shape, data type and value as :attr:`input`.
-
-    Examples:
-        .. code-block:: python
-
-            import paddle
-            import numpy as np
-            data = paddle.full(shape=[3, 2], fill_value=2.5, dtype='float64') # [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
-            array = np.array([[1, 1],
-                              [3, 4],
-                              [1, 3]]).astype(np.int64)
-            result1 = paddle.zeros(shape=[3, 3], dtype='float32')
-            paddle.assign(array, result1) # result1 = [[1, 1], [3 4], [1, 3]]
-            result2 = paddle.assign(data) # result2 = [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
-            result3 = paddle.assign(np.array([[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]], dtype='float32')) # result3 = [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
-    """
-    helper = LayerHelper('assign', **locals())
-    check_type(
-        input,
-        'input',
-        (Variable, numpy.ndarray, list, tuple, float, int, bool),
-        'assign',
-    )
-    is_inplace = True if output is not None else False
-
-    if numpy.isscalar(input) and not isinstance(input, str):
-        input = numpy.array([input])
-    elif isinstance(input, (list, tuple)):
-        input = numpy.array(input)
-    # NOTE(Aurelius84): Why we judge core.VarBase?
-    # In case of @to_static, a VarBase can be as input of `assign`,
-    # but in_dygraph_mode()==False under @to_static, which means
-    # isinstance(VarBase, Variable) == False. It will cause return None
-    # after this api.
-    if isinstance(input, (Variable, core.VarBase)):
-        if in_dygraph_mode():
-            if output is None:
-                output = _C_ops.assign(input)
-            else:
-                _C_ops.assign_out_(input, output)
-        else:
-            check_dtype(
-                input.dtype,
-                'input',
-                ['float16', 'uint16', 'float32', 'float64', 'int32',
-                 'int64', 'uint8', 'bool'],
-                'assign',
-                '(When the type of input in assign is Variable.)',
-            )
-            if output is None:
-                output = helper.create_variable_for_type_inference(
-                    dtype=input.dtype
-                )
-            helper.append_op(
-                type='assign', inputs={'X': [input]}, outputs={'Out': [output]}
-            )
-    elif isinstance(input, numpy.ndarray):
-        # Not support [var, var, ...] currently.
-        if len(input.shape) > 0 and any(isinstance(x, Variable) for x in input):
-            raise TypeError(
-                "Required type(input) numpy.ndarray, but found `list(Variable)` in input."
-            )
-        dtype = convert_np_dtype_to_dtype_(input.dtype)
-        if dtype == VarDesc.VarType.FP64:
-            # Setting FP64 numpy data is not supported in Paddle, so we
-            # use FP32 here
-            warnings.warn(
-                "paddle.assign doesn't support float64 input now due "
-                "to current platform protobuf data limitation, we convert "
-                "it to float32"
-            )
-            dtype = VarDesc.VarType.FP32
-        if dtype == VarDesc.VarType.BOOL:
-            value_name = "bool_values"
-            values = [int(v) for v in input.flat]
-        elif dtype == VarDesc.VarType.FP32:
-            value_name = "fp32_values"
-            values = [float(v) for v in input.flat]
-        elif dtype == VarDesc.VarType.INT32:
-            value_name = "int32_values"
-            values = [int(v) for v in input.flat]
-        elif dtype == VarDesc.VarType.INT64:
-            value_name = "int64_values"
-            values = [int(v) for v in input.flat]
-        else:
-            raise TypeError(
-                "When the type of 'input' in assign is numpy.ndarray, "
-                "the data type of 'input' must be bool, float32, int32 or int64, but "
-                "received %s." % convert_dtype(dtype)
-            )
-        if input.size > 1024 * 1024:
-            raise ValueError(
-                "The size of input is too big. Please consider "
-                "saving it to file and 'load_op' to load it"
-            )
-        if in_dygraph_mode():
-            if output is None:
-                output = zeros(list(input.shape), dtype)
-            _C_ops.assign_value_(
-                output,
-                list(input.shape),
-                dtype,
-                values,
-                _current_expected_place(),
-            )
-        else:
-            if output is None:
-                output = helper.create_variable_for_type_inference(
-                    dtype=input.dtype
-                )
-            helper.append_op(
-                type='assign_value',
-                outputs={'Out': [output]},
-                attrs={
-                    'dtype': dtype,
-                    'shape': list(input.shape),
-                    value_name: values,
-                },
-            )
-
-    if is_inplace and in_dygraph_mode():
-        output._bump_inplace_version()
-
-    return output
-
-
 def fill_constant(shape, dtype, value, force_cpu=False, out=None, name=None):
     """
@@ -565,7 +129,7 @@ def fill_constant(shape, dtype, value, force_cpu=False, out=None, name=None):
     inputs = {}
     if isinstance(value, Variable):
         if convert_dtype(value.dtype) != dtype:
-            value = cast(value, dtype)
+            value = paddle.cast(value, dtype)
         inputs['ValueTensor'] = value

     check_shape(shape)
@@ -694,144 +258,6 @@ def fill_constant_batch_size_like(
     return out
-
-
-def argmin(x, axis=0):
-    """
-    :alias_main: paddle.argmin
-    :alias: paddle.argmin,paddle.tensor.argmin,paddle.tensor.search.argmin
-    :old_api: paddle.fluid.layers.argmin
-
-    **argmin**
-
-    This OP computes the indices of the min elements of the input tensor's
-    element along the provided axis.
-
-    Args:
-        x(Variable): An input N-D Tensor with type float32, float64, int16,
-            int32, int64, uint8.
-        axis(int, optional): Axis to compute indices along. The effective range
-            is [-R, R), where R is Rank(x). when axis<0, it works the same way
-            as axis+R. Default is 0.
-
-    Returns:
-        Variable: A Tensor with data type int64.
-
-    Examples:
-        .. code-block:: python
-
-            import paddle.fluid as fluid
-            import numpy as np
-
-            in1 = np.array([[[5,8,9,5],
-                             [0,0,1,7],
-                             [6,9,2,4]],
-                            [[5,2,4,2],
-                             [4,7,7,9],
-                             [1,7,0,6]]])
-            with fluid.dygraph.guard():
-                x = fluid.dygraph.to_variable(in1)
-                out1 = fluid.layers.argmin(x=x, axis=-1)
-                out2 = fluid.layers.argmin(x=x, axis=0)
-                out3 = fluid.layers.argmin(x=x, axis=1)
-                out4 = fluid.layers.argmin(x=x, axis=2)
-                print(out1.numpy())
-                # [[0 0 2]
-                #  [1 0 2]]
-                print(out2.numpy())
-                # [[0 1 1 1]
-                #  [0 0 0 0]
-                #  [1 1 1 0]]
-                print(out3.numpy())
-                # [[1 1 1 2]
-                #  [2 0 2 0]]
-                print(out4.numpy())
-                # [[0 0 2]
-                #  [1 0 2]]
-    """
-    check_variable_and_dtype(
-        x,
-        'x',
-        ['float32', 'float64', 'uint8', 'int16', 'int32', 'int64'],
-        'argmin',
-    )
-    helper = LayerHelper("arg_min", **locals())
-    out = helper.create_variable_for_type_inference(VarDesc.VarType.INT64)
-    helper.append_op(
-        type='arg_min',
-        inputs={'X': x},
-        outputs={'Out': [out]},
-        attrs={'axis': axis},
-    )
-    out.stop_gradient = True
-    return out
-
-
-def argmax(x, axis=0):
-    """
-    **argmax**
-
-    This OP computes the indices of the max elements of the input tensor's
-    element along the provided axis.
-
-    Args:
-        x(Variable): An input N-D Tensor with type float32, float64, int16,
-            int32, int64, uint8.
-        axis(int, optional): Axis to compute indices along. The effective range
-            is [-R, R), where R is Rank(x). when axis<0, it works the same way
-            as axis+R. Default is 0.
-
-    Returns:
-        Variable: A Tensor with data type int64.
-
-    Examples:
-        .. code-block:: python
-
-            import paddle.fluid as fluid
-            import numpy as np
-
-            in1 = np.array([[[5,8,9,5],
-                             [0,0,1,7],
-                             [6,9,2,4]],
-                            [[5,2,4,2],
-                             [4,7,7,9],
-                             [1,7,0,6]]])
-            with fluid.dygraph.guard():
-                x = fluid.dygraph.to_variable(in1)
-                out1 = fluid.layers.argmax(x=x, axis=-1)
-                out2 = fluid.layers.argmax(x=x, axis=0)
-                out3 = fluid.layers.argmax(x=x, axis=1)
-                out4 = fluid.layers.argmax(x=x, axis=2)
-                print(out1.numpy())
-                # [[2 3 1]
-                #  [0 3 1]]
-                print(out2.numpy())
-                # [[0 0 0 0]
-                #  [1 1 1 1]
-                #  [0 0 0 1]]
-                print(out3.numpy())
-                # [[2 2 0 1]
-                #  [0 1 1 1]]
-                print(out4.numpy())
-                # [[2 3 1]
-                #  [0 3 1]]
    """
-    check_variable_and_dtype(
-        x,
-        'x',
-        ['float32', 'float64', 'uint8', 'int16', 'int32', 'int64'],
-        'argmax',
-    )
-    helper = LayerHelper("arg_max", **locals())
-    out = helper.create_variable_for_type_inference(VarDesc.VarType.INT64)
-    helper.append_op(
-        type='arg_max',
-        inputs={'X': x},
-        outputs={'Out': [out]},
-        attrs={'axis': axis},
-    )
-    out.stop_gradient = True
-    return out
-
-
 def zeros(shape, dtype, force_cpu=False, name=None):
     """
     The OP creates a tensor of specified :attr:`shape` and :attr:`dtype`, and fills it with 0.
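Of the ops deleted above, sums is the only one whose replacement is not keyword-for-keyword: paddle.add_n has no out= parameter, so the write-into-an-existing-tensor form is recovered by pairing it with paddle.assign. A sketch with illustrative tensors:

    import paddle

    x0 = paddle.full(shape=[16, 32], fill_value=1, dtype='int64')
    x1 = paddle.full(shape=[16, 32], fill_value=2, dtype='int64')
    x2 = paddle.full(shape=[16, 32], fill_value=3, dtype='int64')
    x3 = paddle.zeros(shape=[16, 32], dtype='int64')

    sum0 = paddle.add_n([x0, x1, x2])   # was: fluid.layers.sums(input=[x0, x1, x2])
    paddle.assign(sum0, output=x3)      # was: fluid.layers.sums(input=[x0, x1, x2], out=x3)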
python/paddle/fluid/layers/utils.py
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import paddle
 import collections
 import copy
 import numpy as np
@@ -387,7 +388,7 @@ def _contain_var(list_or_tuple):
 def get_shape_tensor_inputs(inputs, attrs, shape, op_type):
-    from .tensor import fill_constant, cast
+    from .tensor import fill_constant

     def _get_attr_shape(list_shape):
         attr_shape = []
@@ -411,7 +412,7 @@ def get_shape_tensor_inputs(inputs, attrs, shape, op_type):
                 '(When type of shape in' + op_type + 'is list or tuple.)',
             )
             if convert_dtype(dim.dtype) == 'int64':
-                dim = cast(x=dim, dtype='int32')
+                dim = paddle.cast(x=dim, dtype='int32')
                 shape_tensor_list.append(dim)
             else:
                 temp_out = fill_constant([1], 'int32', dim, force_cpu=True)
@@ -428,7 +429,7 @@ def get_shape_tensor_inputs(inputs, attrs, shape, op_type):
             '(When type of shape in' + op_type + ' is Variable.)',
         )
         if convert_dtype(shape.dtype) == 'int64':
-            shape = cast(shape, 'int32')
+            shape = paddle.cast(shape, 'int32')
         inputs["ShapeTensor"] = shape
     elif isinstance(shape, (list, tuple)):
         attrs["shape"] = _get_attr_shape(shape)
python/paddle/fluid/optimizer.py
浏览文件 @
b85af464
...
@@ -3920,14 +3920,14 @@ class ModelAverage(Optimizer):
...
@@ -3920,14 +3920,14 @@ class ModelAverage(Optimizer):
self
.
_get_accumulator
(
'num_updates'
,
param
)
self
.
_get_accumulator
(
'num_updates'
,
param
)
)
)
# backup param value to grad
# backup param value to grad
layers
.
assign
(
input
=
param
,
output
=
grad
)
paddle
.
assign
(
param
,
output
=
grad
)
# param = (sum_1 + sum_2 + sum_3) / (num_accumulates + old_num_accumulates)
# param = (sum_1 + sum_2 + sum_3) / (num_accumulates + old_num_accumulates)
tmp
=
paddle
.
add_n
([
num_accumulates
,
old_num_accumulates
])
tmp
=
paddle
.
add_n
([
num_accumulates
,
old_num_accumulates
])
sum
=
paddle
.
add_n
([
sum_1
,
sum_2
,
sum_3
])
sum
=
paddle
.
add_n
([
sum_1
,
sum_2
,
sum_3
])
tmp
=
layers
.
cast
(
tmp
=
paddle
.
cast
(
x
=
tmp
,
dtype
=
'float32'
if
self
.
_dtype
is
None
else
self
.
_dtype
x
=
tmp
,
dtype
=
'float32'
if
self
.
_dtype
is
None
else
self
.
_dtype
)
)
sum
=
layers
.
cast
(
sum
=
paddle
.
cast
(
x
=
sum
,
dtype
=
'float32'
if
self
.
_dtype
is
None
else
self
.
_dtype
x
=
sum
,
dtype
=
'float32'
if
self
.
_dtype
is
None
else
self
.
_dtype
)
)
paddle
.
assign
(
paddle
.
divide
(
sum
,
tmp
),
output
=
param
)
paddle
.
assign
(
paddle
.
divide
(
sum
,
tmp
),
output
=
param
)
...
@@ -3935,7 +3935,7 @@ class ModelAverage(Optimizer):
...
@@ -3935,7 +3935,7 @@ class ModelAverage(Optimizer):
def
_add_average_restore_op
(
self
,
block
,
param_grad
):
def
_add_average_restore_op
(
self
,
block
,
param_grad
):
param
=
block
.
_clone_variable
(
param_grad
[
0
])
param
=
block
.
_clone_variable
(
param_grad
[
0
])
grad
=
block
.
_clone_variable
(
param_grad
[
1
])
grad
=
block
.
_clone_variable
(
param_grad
[
1
])
layers
.
assign
(
input
=
grad
,
output
=
param
)
paddle
.
assign
(
grad
,
output
=
param
)
def
_append_average_accumulate_op
(
self
,
param
):
def
_append_average_accumulate_op
(
self
,
param
):
self
.
helper
=
LayerHelper
(
"average_accumulate"
)
self
.
helper
=
LayerHelper
(
"average_accumulate"
)
...
@@ -4229,15 +4229,13 @@ class ExponentialMovingAverage:
...
@@ -4229,15 +4229,13 @@ class ExponentialMovingAverage:
param
=
block
.
_clone_variable
(
param
)
param
=
block
.
_clone_variable
(
param
)
tmp
=
block
.
_clone_variable
(
tmp
)
tmp
=
block
.
_clone_variable
(
tmp
)
ema
=
block
.
_clone_variable
(
self
.
_ema_vars
[
param
.
name
])
ema
=
block
.
_clone_variable
(
self
.
_ema_vars
[
param
.
name
])
layers
.
assign
(
input
=
param
,
output
=
tmp
)
paddle
.
assign
(
param
,
output
=
tmp
)
# bias correction
# bias correction
with
layers
.
control_flow
.
Switch
()
as
switch
:
with
layers
.
control_flow
.
Switch
()
as
switch
:
with
switch
.
case
(
global_step
>
0
):
with
switch
.
case
(
global_step
>
0
):
layers
.
assign
(
paddle
.
assign
(
ema
/
(
1.0
-
decay_pow
),
output
=
param
)
output
=
param
,
input
=
ema
/
(
1.0
-
decay_pow
)
)
with
switch
.
default
():
with
switch
.
default
():
layers
.
assign
(
output
=
param
,
input
=
ema
)
paddle
.
assign
(
ema
,
output
=
param
)
self
.
restore_program
=
Program
()
self
.
restore_program
=
Program
()
block
=
self
.
restore_program
.
global_block
()
block
=
self
.
restore_program
.
global_block
()
...
@@ -4245,7 +4243,7 @@ class ExponentialMovingAverage:
...
@@ -4245,7 +4243,7 @@ class ExponentialMovingAverage:
for
param
,
tmp
in
self
.
_params_tmps
:
for
param
,
tmp
in
self
.
_params_tmps
:
tmp
=
block
.
_clone_variable
(
tmp
)
tmp
=
block
.
_clone_variable
(
tmp
)
param
=
block
.
_clone_variable
(
param
)
param
=
block
.
_clone_variable
(
param
)
layers
.
assign
(
input
=
tmp
,
output
=
param
)
paddle
.
assign
(
tmp
,
output
=
param
)
def
_get_ema_decay
(
self
):
def
_get_ema_decay
(
self
):
with
default_main_program
().
_lr_schedule_guard
():
with
default_main_program
().
_lr_schedule_guard
():
...
@@ -4261,9 +4259,9 @@ class ExponentialMovingAverage:
                 decay_t = (self._thres_steps + 1.0) / (self._thres_steps + 10.0)
                 with layers.control_flow.Switch() as switch:
                     with switch.case(decay_t < self._decay):
-                        layers.tensor.assign(decay_t, decay_var)
+                        paddle.assign(decay_t, decay_var)
                     with switch.default():
-                        layers.tensor.assign(
+                        paddle.assign(
                             np.array([self._decay], dtype=np.float32), decay_var
                         )
         return decay_var
...
@@ -4276,7 +4274,7 @@ class ExponentialMovingAverage:
                 dtype='int64',
                 persistable=True,
             )
-            global_step = layers.cast(global_step, "float32")
+            global_step = paddle.cast(global_step, "float32")
             decay_var = block._clone_variable(self._decay_var)
             decay_pow_acc = paddle.pow(decay_var, global_step)
         return decay_pow_acc, global_step
...
@@ -4313,7 +4311,7 @@ class ExponentialMovingAverage:
                 ema_t = param_ema * self._decay_var + param * (
                     1 - self._decay_var
                 )
-                layers.assign(input=ema_t, output=param_ema)
+                paddle.assign(ema_t, output=param_ema)
             # for fp16 params
             for param_ema, master_ema in param_master_emas:
...
@@ -7272,7 +7270,7 @@ class LookaheadOptimizer:
             for param_name in params:
                 fast_var = main_block.var(param_name)
                 slow_var = param_to_slow[param_name]
-                layers.assign(input=fast_var, output=slow_var)
+                paddle.assign(fast_var, output=slow_var)

             with switch.case(mod == zero_var):
                 for param_name in params:
                     fast_var = main_block.var(param_name)
...
@@ -7283,8 +7281,8 @@ class LookaheadOptimizer:
                             slow_var, paddle.subtract(one_var, alpha)
                         ),
                     )
-                    layers.assign(input=tmp_var, output=slow_var)
+                    paddle.assign(tmp_var, output=slow_var)
-                    layers.assign(input=tmp_var, output=fast_var)
+                    paddle.assign(tmp_var, output=fast_var)
                 with switch.default():
                     pass
         return mini_out
...
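The recurring pattern in the optimizer changes above is the `assign` migration: the legacy `layers.assign(input=..., output=...)` form becomes `paddle.assign(x, output=...)`, with the source tensor passed positionally. A minimal runnable sketch of the new call shape, assuming Paddle 2.x with this commit applied (the variable names are illustrative only, not the optimizer's real ones):

import numpy as np
import paddle

paddle.enable_static()
# hypothetical stand-in for a parameter variable
param = paddle.static.create_global_var(shape=[2], value=0.0, dtype='float32')
ema = paddle.assign(np.array([0.5, 1.5], dtype=np.float32))
# legacy form removed by this commit: layers.assign(input=ema, output=param)
paddle.assign(ema, output=param)  # new form: positional input, keyword output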
python/paddle/fluid/tests/book/test_recommender_system.py
...
@@ -91,8 +91,8 @@ def get_usr_combined_features():
     usr_job_fc = paddle.static.nn.fc(x=usr_job_emb, size=16)
-    concat_embed = layers.concat(
-        input=[usr_fc, usr_gender_fc, usr_age_fc, usr_job_fc], axis=1
+    concat_embed = paddle.concat(
+        [usr_fc, usr_gender_fc, usr_age_fc, usr_job_fc], axis=1
     )
     usr_combined_features = paddle.static.nn.fc(
...
@@ -150,8 +150,8 @@ def get_mov_combined_features():
         pool_type="sum",
     )
-    concat_embed = layers.concat(
-        input=[mov_fc, mov_categories_hidden, mov_title_conv], axis=1
+    concat_embed = paddle.concat(
+        [mov_fc, mov_categories_hidden, mov_title_conv], axis=1
     )
     # FIXME(dzh) : need tanh operator
...
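As the two hunks above show, `paddle.concat` takes the tensor list as its first positional argument, where the removed `layers.concat` expected an `input=` keyword. A small dygraph sketch of the replacement call (the tensors here are made-up stand-ins for the fc outputs above):

import paddle

usr_fc = paddle.ones([4, 8])
usr_gender_fc = paddle.ones([4, 8])
concat_embed = paddle.concat([usr_fc, usr_gender_fc], axis=1)
print(concat_embed.shape)  # [4, 16]: the tensors are joined along axis 1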
python/paddle/fluid/tests/book/test_word2vec_book.py
...
@@ -87,8 +87,8 @@ def train(
         param_attr='shared_w',
     )
-    concat_embed = fluid.layers.concat(
-        input=[embed_first, embed_second, embed_third, embed_forth], axis=1
+    concat_embed = paddle.concat(
+        [embed_first, embed_second, embed_third, embed_forth], axis=1
     )
     hidden1 = paddle.static.nn.fc(
         x=concat_embed, size=HIDDEN_SIZE, activation='sigmoid'
...
python/paddle/fluid/tests/unittests/auto_parallel/test_fp16_assign.py
...
@@ -58,7 +58,7 @@ def make_program():
         )
         where_1 = paddle.where(y > 1, y, out1)
-        paddle.fluid.layers.assign(where_1, where_0)
+        paddle.assign(where_1, where_0)
     return main_program, start_program
...
python/paddle/fluid/tests/unittests/check_nan_inf_base.py
...
@@ -53,7 +53,7 @@ def net():
     zero = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
     # test float16 value
-    fp16_zero = fluid.layers.cast(zero, dtype='float16')
+    fp16_zero = paddle.cast(zero, dtype='float16')
     y = y + zero
...
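`paddle.cast(x, dtype)` is the drop-in replacement for the removed `fluid.layers.cast`; it returns a new tensor of the requested dtype, given as a string. A minimal sketch, assuming a Paddle build with float16 support:

import paddle

zero = paddle.full(shape=[1], fill_value=0, dtype='int64')
fp16_zero = paddle.cast(zero, dtype='float16')  # new API, same semantics
print(fp16_zero.dtype)  # paddle.float16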
python/paddle/fluid/tests/unittests/collective/collective_sendrecv_op_array.py
...
@@ -35,19 +35,11 @@ class TestCollectiveSendRecv(TestCollectiveRunnerBase):
             )
             tindata.desc.set_need_check_feed(False)
             if self.rank == 0:
-                data1 = fluid.layers.assign(
-                    np.array([[0, 1, 2]], dtype='float32')
-                )
-                data2 = fluid.layers.assign(
-                    np.array([[3, 4, 5]], dtype='float32')
-                )
+                data1 = paddle.assign(np.array([[0, 1, 2]], dtype='float32'))
+                data2 = paddle.assign(np.array([[3, 4, 5]], dtype='float32'))
             elif self.rank == 1:
-                data1 = fluid.layers.assign(
-                    np.array([[3, 4, 5]], dtype='float32')
-                )
-                data2 = fluid.layers.assign(
-                    np.array([[0, 1, 2]], dtype='float32')
-                )
+                data1 = paddle.assign(np.array([[3, 4, 5]], dtype='float32'))
+                data2 = paddle.assign(np.array([[0, 1, 2]], dtype='float32'))
             tensor_array = paddle.tensor.create_array(dtype='float32')
             i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
             paddle.tensor.array_write(data1, i, tensor_array)
...
python/paddle/fluid/tests/unittests/collective/fleet/hybrid_parallel_inference_helper.py
...
@@ -122,19 +122,19 @@ class TestHybridParallelInferenceHelperClass(unittest.TestCase):
                 # update cond and assign to cond_int, we will sync cond_int
                 paddle.assign(paddle.less_than(x=step_idx, y=max_len), cond)
-                layers.assign(layers.cast(cond, dtype="int32"), cond_int)
+                paddle.assign(paddle.cast(cond, dtype="int32"), cond_int)

             with paddle.fluid.device_guard(f'{device}:all'):
                 # the code below must at end of while block and exists in device:all
-                layers.assign(layers.cast(cond_int, dtype='bool'), cond)
+                paddle.assign(paddle.cast(cond_int, dtype='bool'), cond)

         with paddle.fluid.device_guard(f'{device}:all'):
             out = paddle.tensor.create_array(data.dtype)
-            layers.assign(data, out)
+            paddle.assign(data, out)

         with paddle.fluid.device_guard(f'{device}:all'):
             # use a empty lod_tensor_array to clear lod_tensor_array
-            layers.assign(paddle.tensor.create_array(data.dtype), data)
+            paddle.assign(paddle.tensor.create_array(data.dtype), data)

         helper = HybridParallelInferenceHelper(
             startup_program,
...
python/paddle/fluid/tests/unittests/dist_ctr.py
...
@@ -95,7 +95,7 @@ class TestDistCTR2x2(TestDistRunnerBase):
             input=lr_embbding, pool_type="sum"
         )
-        merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+        merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)
         predict = paddle.static.nn.fc(
             x=merge_layer, size=2, activation='softmax'
...
python/paddle/fluid/tests/unittests/dist_fleet_ctr.py
...
@@ -144,7 +144,7 @@ class TestDistCTR2x2(FleetDistRunnerBase):
             input=lr_embbding, pool_type="sum"
         )
-        merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+        merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)
         predict = paddle.static.nn.fc(
             x=merge_layer, size=2, activation='softmax'
...
python/paddle/fluid/tests/unittests/dist_fleet_heter_pipeline_ctr.py
...
@@ -116,8 +116,8 @@ class TestHeterPipelinePsCTR2x2(FleetDistHeterRunnerBase):
             dnn_out = fc
         with fluid.device_guard("cpu"):
-            merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+            merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)
-            label = fluid.layers.cast(label, dtype="int64")
+            label = paddle.cast(label, dtype="int64")
             predict = paddle.static.nn.fc(
                 x=merge_layer, size=2, activation='softmax'
             )
...
python/paddle/fluid/tests/unittests/dist_fleet_simnet_bow.py
...
@@ -55,7 +55,7 @@ def fake_simnet_reader():
 def get_acc(cos_q_nt, cos_q_pt, batch_size):
     cond = paddle.less_than(cos_q_nt, cos_q_pt)
-    cond = fluid.layers.cast(cond, dtype='float64')
+    cond = paddle.cast(cond, dtype='float64')
     cond_3 = paddle.sum(cond)
     acc = paddle.divide(
         cond_3,
...
python/paddle/fluid/tests/unittests/dist_fleet_sparse_embedding_ctr.py
...
@@ -134,7 +134,8 @@ class TestDistCTR2x2(FleetDistRunnerBase):
         lr_pool = paddle.static.nn.sequence_lod.sequence_pool(
             input=lr_embbding, pool_type="sum"
         )
-        merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+        merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)
         predict = paddle.static.nn.fc(
             x=merge_layer, size=2, activation='softmax'
         )
...
python/paddle/fluid/tests/unittests/dist_transformer.py
...
@@ -1193,8 +1193,8 @@ def multi_head_attention(
     q, k, v = __compute_qkv(queries, keys, values, n_head, d_key, d_value)
     if cache is not None:  # use cache and concat time steps
-        k = cache["k"] = layers.concat([cache["k"], k], axis=1)
+        k = cache["k"] = paddle.concat([cache["k"], k], axis=1)
-        v = cache["v"] = layers.concat([cache["v"], v], axis=1)
+        v = cache["v"] = paddle.concat([cache["v"], v], axis=1)
     q = __split_heads(q, n_head)
     k = __split_heads(k, n_head)
...
@@ -1858,11 +1858,11 @@ def fast_decode(
             # update states
             layers.array_write(selected_ids, i=step_idx, array=ids)
             layers.array_write(selected_scores, i=step_idx, array=scores)
-            layers.assign(pre_src_attn_bias, trg_src_attn_bias)
+            paddle.assign(pre_src_attn_bias, trg_src_attn_bias)
-            layers.assign(pre_enc_output, enc_output)
+            paddle.assign(pre_enc_output, enc_output)
             for i in range(n_layer):
-                layers.assign(pre_caches[i]["k"], caches[i]["k"])
+                paddle.assign(pre_caches[i]["k"], caches[i]["k"])
-                layers.assign(pre_caches[i]["v"], caches[i]["v"])
+                paddle.assign(pre_caches[i]["v"], caches[i]["v"])
             length_cond = paddle.less_than(x=step_idx, y=max_len)
             finish_cond = paddle.logical_not(layers.is_empty(x=selected_ids))
             paddle.logical_and(x=length_cond, y=finish_cond, out=cond)
...
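The `multi_head_attention` hunk above is the incremental-decoding cache update: at each decode step the new key/value projections are concatenated onto the cached ones along the time axis, so attention always sees the full prefix. A toy sketch of that update with `paddle.concat` (the shapes are illustrative, not the model's real dimensions):

import paddle

cache = {"k": paddle.zeros([2, 4, 64])}  # [batch, past_len, d_key], toy sizes
k_step = paddle.randn([2, 1, 64])        # projection for the current step
cache["k"] = paddle.concat([cache["k"], k_step], axis=1)  # grow the time axis
print(cache["k"].shape)  # [2, 5, 64]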
python/paddle/fluid/tests/unittests/dist_word2vec.py
...
@@ -75,8 +75,8 @@ class TestDistWord2vec2x2(TestDistRunnerBase):
                 ),
             )
-            concat_embed = fluid.layers.concat(
-                input=[embed_first, embed_second, embed_third, embed_forth],
+            concat_embed = paddle.concat(
+                [embed_first, embed_second, embed_third, embed_forth],
                 axis=1,
             )
             hidden1 = paddle.static.nn.fc(
...
python/paddle/fluid/tests/unittests/dygraph_to_static/bert_dygraph_model.py
...
@@ -384,7 +384,7 @@ class PretrainModelLayer(Layer):
         mask_pos,
         labels,
     ):
-        mask_pos = fluid.layers.cast(x=mask_pos, dtype='int32')
+        mask_pos = paddle.cast(x=mask_pos, dtype='int32')
         enc_output, next_sent_feat = self.bert_layer(
             src_ids, position_ids, sentence_ids, input_mask
...
python/paddle/fluid/tests/unittests/dygraph_to_static/seq2seq_dygraph_model.py
...
@@ -18,7 +18,7 @@ from seq2seq_utils import Seq2SeqModelHyperParams as args
 import paddle
 import paddle.fluid as fluid
-from paddle.fluid import ParamAttr, layers
+from paddle.fluid import ParamAttr
 from paddle.fluid.dygraph import Layer
 from paddle.fluid.dygraph.base import to_variable
 from paddle.jit.api import to_static
...
@@ -67,7 +67,7 @@ class BasicLSTMUnit(Layer):
         )

     def forward(self, input, pre_hidden, pre_cell):
-        concat_input_hidden = layers.concat([input, pre_hidden], 1)
+        concat_input_hidden = paddle.concat([input, pre_hidden], 1)
         gate_input = paddle.matmul(x=concat_input_hidden, y=self._weight)
         gate_input = paddle.add(gate_input, self._bias)
...
@@ -488,12 +488,12 @@ class BaseModel(fluid.dygraph.Layer):
                 self._gather(x, beam_indices, batch_pos) for x in new_dec_cell
             ]
             next_finished = self._gather(beam_finished, beam_indices, batch_pos)
-            next_finished = fluid.layers.cast(next_finished, "bool")
+            next_finished = paddle.cast(next_finished, "bool")
             next_finished = paddle.logical_or(
                 next_finished,
                 paddle.equal(token_indices, end_token_tensor),
             )
-            next_finished = fluid.layers.cast(next_finished, "float32")
+            next_finished = paddle.cast(next_finished, "float32")
             dec_hidden, dec_cell = new_dec_hidden, new_dec_cell
             beam_finished = next_finished
...
@@ -808,7 +808,7 @@ class AttentionModel(fluid.dygraph.Layer):
         for step_idx in range(max_seq_len):
             j = step_idx + 0
             step_input = tar_emb[j]
-            step_input = fluid.layers.concat([step_input, input_feed], 1)
+            step_input = paddle.concat([step_input, input_feed], 1)
             new_dec_hidden, new_dec_cell = [], []
             for i in range(self.num_layers):
                 new_hidden, new_cell = self.dec_units[i](
...
@@ -826,7 +826,7 @@ class AttentionModel(fluid.dygraph.Layer):
             step_input = new_hidden
             dec_att = self.attention(step_input, enc_outputs, enc_padding_mask)
             dec_att = paddle.squeeze(dec_att, [1])
-            concat_att_out = fluid.layers.concat([dec_att, step_input], 1)
+            concat_att_out = paddle.concat([dec_att, step_input], 1)
             out = self.concat_fc(concat_att_out)
             input_feed = out
             dec_output.append(out)
...
python/paddle/fluid/tests/unittests/dygraph_to_static/simnet_dygraph_model.py
...
@@ -97,7 +97,7 @@ class ConcatLayer:
         """
         operation
         """
-        concat = fluid.layers.concat(inputs, axis=self.axis)
+        concat = paddle.concat(inputs, axis=self.axis)
         return concat
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_ast_util.py
...
@@ -68,7 +68,7 @@ class TestAST2Func(unittest.TestCase):
         x_data = np.random.random([10, 16]).astype('float32')
         main_program = fluid.Program()
         with fluid.program_guard(main_program):
-            x_v = fluid.layers.assign(x_data)
+            x_v = paddle.assign(x_data)
             true_ret = func(x_v)
             test_ret = self._ast2func(func)(x_v)
             exe = fluid.Executor(fluid.CPUPlace())
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_basic_api_transformation.py
...
@@ -252,7 +252,7 @@ class TestDygraphBasicApi(unittest.TestCase):
         main_program = fluid.Program()
         main_program.random_seed = SEED
         with fluid.program_guard(main_program, startup_program):
-            data = fluid.layers.assign(self.input)
+            data = paddle.assign(self.input)
             static_out = dygraph_to_static_func(self.dygraph_func)(data)

         exe = fluid.Executor(fluid.CPUPlace())
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_bmn.py
...
@@ -310,7 +310,7 @@ def bmn_loss_func(
         self_bm_mask = paddle.static.create_global_var(
             shape=[dscale, tscale], value=0, dtype=DATATYPE, persistable=True
         )
-        fluid.layers.assign(bm_mask, self_bm_mask)
+        paddle.assign(bm_mask, self_bm_mask)
         self_bm_mask.stop_gradient = True
         return self_bm_mask
...
@@ -319,9 +319,9 @@ def bmn_loss_func(
         pred_score = paddle.reshape(x=pred_score, shape=[-1])
         gt_label = paddle.reshape(x=gt_label, shape=[-1])
         gt_label.stop_gradient = True
-        pmask = fluid.layers.cast(x=(gt_label > 0.5), dtype=DATATYPE)
+        pmask = paddle.cast(x=(gt_label > 0.5), dtype=DATATYPE)
-        num_entries = fluid.layers.cast(paddle.shape(pmask), dtype=DATATYPE)
+        num_entries = paddle.cast(paddle.shape(pmask), dtype=DATATYPE)
-        num_positive = fluid.layers.cast(paddle.sum(pmask), dtype=DATATYPE)
+        num_positive = paddle.cast(paddle.sum(pmask), dtype=DATATYPE)
         ratio = num_entries / num_positive
         coef_0 = 0.5 * ratio / (ratio - 1)
         coef_1 = 0.5 * ratio
...
@@ -345,34 +345,34 @@ def bmn_loss_func(
         gt_iou_map = paddle.multiply(gt_iou_map, mask)
-        u_hmask = fluid.layers.cast(x=gt_iou_map > 0.7, dtype=DATATYPE)
+        u_hmask = paddle.cast(x=gt_iou_map > 0.7, dtype=DATATYPE)
         u_mmask = paddle.logical_and(gt_iou_map <= 0.7, gt_iou_map > 0.3)
-        u_mmask = fluid.layers.cast(x=u_mmask, dtype=DATATYPE)
+        u_mmask = paddle.cast(x=u_mmask, dtype=DATATYPE)
         u_lmask = paddle.logical_and(gt_iou_map <= 0.3, gt_iou_map >= 0.0)
-        u_lmask = fluid.layers.cast(x=u_lmask, dtype=DATATYPE)
+        u_lmask = paddle.cast(x=u_lmask, dtype=DATATYPE)
         u_lmask = paddle.multiply(u_lmask, mask)
-        num_h = fluid.layers.cast(paddle.sum(u_hmask), dtype=DATATYPE)
+        num_h = paddle.cast(paddle.sum(u_hmask), dtype=DATATYPE)
-        num_m = fluid.layers.cast(paddle.sum(u_mmask), dtype=DATATYPE)
+        num_m = paddle.cast(paddle.sum(u_mmask), dtype=DATATYPE)
-        num_l = fluid.layers.cast(paddle.sum(u_lmask), dtype=DATATYPE)
+        num_l = paddle.cast(paddle.sum(u_lmask), dtype=DATATYPE)
         r_m = num_h / num_m
-        u_smmask = fluid.layers.assign(
+        u_smmask = paddle.assign(
             local_random.uniform(
                 0.0, 1.0, [gt_iou_map.shape[1], gt_iou_map.shape[2]]
             ).astype(DATATYPE)
         )
         u_smmask = paddle.multiply(u_mmask, u_smmask)
-        u_smmask = fluid.layers.cast(x=(u_smmask > (1.0 - r_m)), dtype=DATATYPE)
+        u_smmask = paddle.cast(x=(u_smmask > (1.0 - r_m)), dtype=DATATYPE)
         r_l = num_h / num_l
-        u_slmask = fluid.layers.assign(
+        u_slmask = paddle.assign(
             local_random.uniform(
                 0.0, 1.0, [gt_iou_map.shape[1], gt_iou_map.shape[2]]
             ).astype(DATATYPE)
         )
         u_slmask = paddle.multiply(u_lmask, u_slmask)
-        u_slmask = fluid.layers.cast(x=(u_slmask > (1.0 - r_l)), dtype=DATATYPE)
+        u_slmask = paddle.cast(x=(u_slmask > (1.0 - r_l)), dtype=DATATYPE)
         weights = u_hmask + u_smmask + u_slmask
         weights.stop_gradient = True
...
@@ -385,8 +385,8 @@ def bmn_loss_func(
     def pem_cls_loss_func(pred_score, gt_iou_map, mask):
         gt_iou_map = paddle.multiply(gt_iou_map, mask)
         gt_iou_map.stop_gradient = True
-        pmask = fluid.layers.cast(x=(gt_iou_map > 0.9), dtype=DATATYPE)
+        pmask = paddle.cast(x=(gt_iou_map > 0.9), dtype=DATATYPE)
-        nmask = fluid.layers.cast(x=(gt_iou_map <= 0.9), dtype=DATATYPE)
+        nmask = paddle.cast(x=(gt_iou_map <= 0.9), dtype=DATATYPE)
         nmask = paddle.multiply(nmask, mask)
         num_positive = paddle.sum(pmask)
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_convert_call.py
...
@@ -141,7 +141,7 @@ class MyConvLayer(fluid.dygraph.Layer):
     @paddle.jit.to_static
     def dymethod(self, x_v):
-        x_v = fluid.layers.assign(x_v)
+        x_v = paddle.assign(x_v)
         return x_v
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_lac.py
...
@@ -86,7 +86,7 @@ class DynamicGRU(fluid.dygraph.Layer):
         if self.is_reverse:
             res = res[::-1]
-        res = fluid.layers.concat(res, axis=1)
+        res = paddle.concat(res, axis=1)
         return res
...
@@ -154,7 +154,7 @@ class BiGRU(fluid.dygraph.Layer):
         res_pre_gru_r = self.pre_gru_r(input_feature)
         res_gru_r = self.gru_r(res_pre_gru_r)
-        bi_merge = fluid.layers.concat(input=[res_gru, res_gru_r], axis=-1)
+        bi_merge = paddle.concat([res_gru, res_gru_r], axis=-1)
         return bi_merge
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_list.py
...
@@ -94,7 +94,7 @@ def test_list_append_in_for_loop_with_concat(x, iter_num):
     )  # TODO(liym27): Delete it if the type of parameter iter_num can be resolved
     for i in range(iter_num):
         a.append(x)
-    a = fluid.layers.concat(a, axis=0)
+    a = paddle.concat(a, axis=0)
     return a
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_ptb_lm.py
...
@@ -89,7 +89,7 @@ class SimpleLSTMRNN(fluid.Layer):
             weight_1 = self.weight_1_arr[k]
             bias = self.bias_arr[k]
-            nn = fluid.layers.concat([step_input, pre_hidden], 1)
+            nn = paddle.concat([step_input, pre_hidden], 1)
             gate_input = paddle.matmul(x=nn, y=weight_1)
             gate_input = paddle.add(gate_input, bias)
...
@@ -111,16 +111,16 @@ class SimpleLSTMRNN(fluid.Layer):
                     mode='upscale_in_train',
                 )
             res.append(step_input)
-        real_res = fluid.layers.concat(res, 1)
+        real_res = paddle.concat(res, 1)
         real_res = paddle.reshape(
             real_res, [-1, self._num_steps, self._hidden_size]
         )
-        last_hidden = fluid.layers.concat(hidden_array, 1)
+        last_hidden = paddle.concat(hidden_array, 1)
         last_hidden = paddle.reshape(
             last_hidden, shape=[-1, self._num_layers, self._hidden_size]
         )
         last_hidden = paddle.transpose(x=last_hidden, perm=[1, 0, 2])
-        last_cell = fluid.layers.concat(cell_array, 1)
+        last_cell = paddle.concat(cell_array, 1)
         last_cell = paddle.reshape(
             last_cell, shape=[-1, self._num_layers, self._hidden_size]
         )
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_reinforcement_learning.py
...
@@ -152,7 +152,7 @@ def train(args, place, to_static):
             cur_loss = paddle.multiply(_R, log_prob)
             policy_loss.append(cur_loss)
-        policy_loss = fluid.layers.concat(policy_loss)
+        policy_loss = paddle.concat(policy_loss)
         policy_loss = paddle.sum(policy_loss)
         policy_loss.backward()
...
python/paddle/fluid/tests/unittests/dygraph_to_static/test_sentiment.py
...
@@ -248,8 +248,8 @@ class BiGRU(fluid.dygraph.Layer):
         gru_backward = self._gru_backward(fc_1)
         gru_forward_tanh = paddle.tanh(gru_forward)
         gru_backward_tanh = paddle.tanh(gru_backward)
-        encoded_vector = fluid.layers.concat(
-            input=[gru_forward_tanh, gru_backward_tanh], axis=2
+        encoded_vector = paddle.concat(
+            [gru_forward_tanh, gru_backward_tanh], axis=2
         )
         encoded_vector = paddle.max(encoded_vector, axis=1)
         fc_2 = self._fc2(encoded_vector)
...
python/paddle/fluid/tests/unittests/dygraph_to_static/transformer_dygraph_model.py
...
@@ -146,8 +146,8 @@ class MultiHeadAttention(Layer):
         if cache is not None:
             cache_k, cache_v = cache["k"], cache["v"]
-            k = layers.concat([cache_k, k], axis=2)
+            k = paddle.concat([cache_k, k], axis=2)
-            v = layers.concat([cache_v, v], axis=2)
+            v = paddle.concat([cache_v, v], axis=2)
             cache["k"], cache["v"] = k, v
         # scale dot product attention
         product = paddle.matmul(x=q, y=k, transpose_y=True)
...
@@ -774,7 +774,7 @@ class Transformer(Layer):
             return res

         def mask_probs(probs, finished, noend_mask_tensor):
-            finished = layers.cast(finished, dtype=probs.dtype)
+            finished = paddle.cast(finished, dtype=probs.dtype)
             probs = paddle.multiply(
                 paddle.expand(
                     paddle.unsqueeze(finished, [2]),
...
python/paddle/fluid/tests/unittests/dygraph_to_static/yolov3.py
...
@@ -207,7 +207,7 @@ class Upsample(fluid.dygraph.Layer):
         shape_nchw = paddle.shape(inputs)
         shape_hw = paddle.slice(shape_nchw, axes=[0], starts=[2], ends=[4])
         shape_hw.stop_gradient = True
-        in_shape = fluid.layers.cast(shape_hw, dtype='int32')
+        in_shape = paddle.cast(shape_hw, dtype='int32')
         out_shape = in_shape * self.scale
         out_shape.stop_gradient = True
...
@@ -295,9 +295,7 @@ class YOLOv3(fluid.dygraph.Layer):
         blocks = self.block(inputs)
         for i, block in enumerate(blocks):
             if i > 0:
-                block = fluid.layers.concat(
-                    input=[route, block], axis=1  # noqa: F821
-                )
+                block = paddle.concat([route, block], axis=1)  # noqa: F821
             route, tip = self.yolo_blocks[i](block)
             block_out = self.block_outputs[i](tip)
             self.outputs.append(block_out)
...
@@ -349,8 +347,8 @@ class YOLOv3(fluid.dygraph.Layer):
         if not self.is_train:
             # get pred
-            yolo_boxes = fluid.layers.concat(self.boxes, axis=1)
+            yolo_boxes = paddle.concat(self.boxes, axis=1)
-            yolo_scores = fluid.layers.concat(self.scores, axis=2)
+            yolo_scores = paddle.concat(self.scores, axis=2)
             pred = _legacy_C_ops.multiclass_nms(
                 bboxes=yolo_boxes,
...
python/paddle/fluid/tests/unittests/fleet_heter_ps_training.py
...
@@ -107,8 +107,8 @@ def net(batch_size=4, lr=0.01):
         )
         dnn_out = fc
-    merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+    merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)
-    label = fluid.layers.cast(label, dtype="int64")
+    label = paddle.cast(label, dtype="int64")
     predict = paddle.static.nn.fc(
         x=merge_layer, size=2, activation='softmax'
     )
...
python/paddle/fluid/tests/unittests/fleet_ps_training.py
...
@@ -24,10 +24,10 @@ from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import (
 input_x = paddle.static.data(name="x", shape=[-1, 32], dtype='float32')
 input_y = paddle.static.data(name="y", shape=[-1, 1], dtype='int64')
-input_y = fluid.layers.cast(input_y, dtype="float32")
+input_y = paddle.cast(input_y, dtype="float32")
 with fluid.device_guard("gpu"):
-    input_y = fluid.layers.cast(input_y, dtype="int64")
+    input_y = paddle.cast(input_y, dtype="int64")
     cost = mlp(input_x, input_y)

 optimizer = fluid.optimizer.Adagrad(learning_rate=0.01)
...
python/paddle/fluid/tests/unittests/ipu/test_arg_max_op_ipu.py
...
@@ -47,7 +47,7 @@ class TestBase(IPUOpTest):
         x = paddle.static.data(
             name=self.feed_list[0], shape=self.feed_shape[0], dtype='float32'
         )
-        out = paddle.fluid.layers.argmax(x, **self.attrs)
+        out = paddle.argmax(x, **self.attrs)
         self.fetch_list = [out.name]

     def run_model(self, exec_mode):
...
python/paddle/fluid/tests/unittests/ipu/test_arg_min_op_ipu.py
...
@@ -47,7 +47,7 @@ class TestBase(IPUOpTest):
         x = paddle.static.data(
             name=self.feed_list[0], shape=self.feed_shape[0], dtype='float32'
         )
-        out = paddle.fluid.layers.argmin(x, **self.attrs)
+        out = paddle.argmin(x, **self.attrs)
         self.fetch_list = [out.name]

     def run_model(self, exec_mode):
...
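Both IPU tests now call the public reduction APIs directly: `paddle.argmax` and `paddle.argmin` return int64 index tensors and take an optional `axis` (reducing over the flattened tensor when it is omitted). A quick sketch:

import paddle

x = paddle.to_tensor([[0.2, 0.9], [0.7, 0.1]])
print(paddle.argmax(x, axis=1))  # [1, 0] -- per-row index of the maximum
print(paddle.argmin(x))          # 3 -- flattened index of the minimum (0.1)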
python/paddle/fluid/tests/unittests/ipu/test_concat_op_ipu.py
...
@@ -56,7 +56,7 @@ class TestBase(IPUOpTest):
         y = paddle.static.data(
             name=self.feed_list[1], shape=self.feed_shape[1], dtype='float32'
         )
-        out = paddle.fluid.layers.concat([x, y], **self.attrs)
+        out = paddle.concat([x, y], **self.attrs)
         self.fetch_list = [out.name]

     def run_model(self, exec_mode):
...
python/paddle/fluid/tests/unittests/ir/inference/test_trt_slice_plugin.py
...
@@ -115,7 +115,7 @@ class SlicePluginTRTTestInt32(SlicePluginTRTTest):
             starts = self.params_starts
             ends = self.params_ends
             slice_out = paddle.slice(data, axes=axes, starts=starts, ends=ends)
-            cast_out = fluid.layers.cast(slice_out, 'float32')
+            cast_out = paddle.cast(slice_out, 'float32')
             out = nn.batch_norm(cast_out, is_test=True)

         self.feeds = {
...
@@ -140,7 +140,7 @@ class StaticSlicePluginTRTTestInt32(SlicePluginTRTTest):
             starts = self.params_starts
             ends = self.params_ends
             slice_out = paddle.slice(data, axes=axes, starts=starts, ends=ends)
-            cast_out = fluid.layers.cast(slice_out, 'float32')
+            cast_out = paddle.cast(slice_out, 'float32')
             out = nn.batch_norm(cast_out, is_test=True)

         self.feeds = {
...
python/paddle/fluid/tests/unittests/ir/inference/test_trt_subgraph_pass.py
...
@@ -61,7 +61,7 @@ class TensorRTSubgraphPassConcatTest(InferencePassTest):
             data2 = fluid.data(
                 name="data2", shape=[-1, 3, 64, 64], dtype="float32"
             )
-            concat_out = fluid.layers.concat([data1, data2], axis=2)
+            concat_out = paddle.concat([data1, data2], axis=2)
             out = nn.batch_norm(concat_out, is_test=True)
         self.feeds = {
             "data1": np.random.random([1, 3, 64, 64]).astype("float32"),
...
python/paddle/fluid/tests/unittests/ir/inference/test_trt_transpose_flatten_concat_fuse_pass.py
...
@@ -38,7 +38,7 @@ class TransposeFlattenConcatFusePassTRTTest(InferencePassTest):
             flatt1 = paddle.flatten(trans1, 1, -1)
             flatt2 = paddle.flatten(trans2, 1, -1)
-            concat_out = fluid.layers.concat([flatt1, flatt2], axis=1)
+            concat_out = paddle.concat([flatt1, flatt2], axis=1)
             # There is no parameters for above structure.
             # Hence, append a batch_norm to avoid failure caused by load_combined.
             reshape_out = paddle.reshape(concat_out, [-1, 0, 1, 1])
...
python/paddle/fluid/tests/unittests/ir/test_ir_fusion_group_pass.py
...
@@ -113,7 +113,7 @@ class FusionGroupPassInplaceTest(FusionGroupPassTest):
             # subgraph with 3 op node
             tmp_0 = self.feed_vars[0] - self.feed_vars[1]
             tmp_1 = tmp_0 * self.feed_vars[2]
-            tmp_2 = layers.assign(tmp_1, output=tmp_0)
+            tmp_2 = paddle.assign(tmp_1, output=tmp_0)
             tmp_3 = paddle.matmul(tmp_2, self.feed_vars[3])

         self.num_fused_ops = 1
...
@@ -138,17 +138,17 @@ class FusionGroupPassTestCastAndFP16(FusionGroupPassTest):
             # subgraph with 2 op nodes
             tmp_0 = self.feed_vars[0] * self.feed_vars[1]
-            tmp_1 = layers.cast(tmp_0, dtype="float16")
+            tmp_1 = paddle.cast(tmp_0, dtype="float16")
             zero = layers.fill_constant(shape=[128], dtype="float16", value=0)
             # TODO(xreki): fix precision problem when using softmax of float16.
             # tmp_2 = layers.softmax(tmp_1)
             tmp_2 = paddle.add(tmp_1, zero)
             tmp_3 = paddle.matmul(tmp_0, self.feed_vars[2])
             # subgraph with 4 op nodes
-            tmp_3 = layers.cast(tmp_2, dtype="float16")
+            tmp_3 = paddle.cast(tmp_2, dtype="float16")
             tmp_4 = paddle.nn.functional.relu(tmp_1 + tmp_3)
-            tmp_5 = layers.cast(tmp_4, dtype=dtype)
+            tmp_5 = paddle.cast(tmp_4, dtype=dtype)
-            tmp_3 = layers.cast(tmp_2, dtype=dtype)
+            tmp_3 = paddle.cast(tmp_2, dtype=dtype)

         self.append_gradients(tmp_5)
...
@@ -185,8 +185,8 @@ class FusionGroupPassCastTest(FusionGroupPassTest):
             self.feed_vars = self._prepare_feed_vars([2, 2], dtype, 2)

             tmp_0 = paddle.add(self.feed_vars[0], self.feed_vars[1])
-            tmp_1 = layers.cast(tmp_0, dtype="float64")
+            tmp_1 = paddle.cast(tmp_0, dtype="float64")
-            tmp_2 = layers.cast(tmp_1, dtype="float32")
+            tmp_2 = paddle.cast(tmp_1, dtype="float32")

         self.append_gradients(tmp_2)
...
python/paddle/fluid/tests/unittests/mlu/sync_batch_norm_op_mlu.py
...
@@ -84,7 +84,7 @@ class TestSyncBatchNormOpTraining(TestSyncBatchNormRunnerBase):
                 use_cudnn=use_cudnn,
             )
             if self.bn_dtype == np.float16:
-                conv = fluid.layers.cast(conv, 'float16')
+                conv = paddle.cast(conv, 'float16')
             bn = paddle.static.nn.batch_norm(
                 conv,
                 param_attr=fluid.ParamAttr(name='bn_scale'),
...
@@ -95,7 +95,7 @@ class TestSyncBatchNormOpTraining(TestSyncBatchNormRunnerBase):
                 is_test=only_forward,
             )
             if self.bn_dtype == np.float16:
-                bn = fluid.layers.cast(bn, 'float32')
+                bn = paddle.cast(bn, 'float32')
             sigmoid = paddle.nn.functional.sigmoid(bn)
             out = paddle.sum(sigmoid)
             # if not sync_bn:
...
python/paddle/fluid/tests/unittests/mlu/test_cast_op_mlu.py
...
@@ -139,7 +139,7 @@ class TestCastOpError(unittest.TestCase):
             x1 = fluid.create_lod_tensor(
                 np.array([[-1]]), [[1]], fluid.MLUPlace(0)
             )
-            self.assertRaises(TypeError, fluid.layers.cast, x1, 'int32')
+            self.assertRaises(TypeError, paddle.cast, x1, 'int32')

 if __name__ == '__main__':
...
python/paddle/fluid/tests/unittests/mlu/test_one_hot_v2_op_mlu.py
...
@@ -187,7 +187,7 @@ class TestOneHotOpApi(unittest.TestCase):
         self._run(depth)

     def test_api_with_depthTensor(self):
-        depth = fluid.layers.assign(input=np.array([10], dtype=np.int32))
+        depth = paddle.assign(np.array([10], dtype=np.int32))
         self._run(depth)

     def test_api_with_dygraph(self):
...
python/paddle/fluid/tests/unittests/mlu/test_where_op_mlu.py
...
@@ -344,7 +344,7 @@ class TestWhereDygraphAPI(unittest.TestCase):
             y = paddle.where(x)
             self.assertEqual(type(y), tuple)
             self.assertEqual(len(y), 2)
-            z = fluid.layers.concat(list(y), axis=1)
+            z = paddle.concat(list(y), axis=1)
             exe = fluid.Executor(paddle.device.MLUPlace(0))
             (res,) = exe.run(
                 feed={'x': data}, fetch_list=[z.name], return_numpy=False
...
@@ -357,7 +357,7 @@ class TestWhereDygraphAPI(unittest.TestCase):
             y = paddle.where(x)
             self.assertEqual(type(y), tuple)
             self.assertEqual(len(y), 1)
-            z = fluid.layers.concat(list(y), axis=1)
+            z = paddle.concat(list(y), axis=1)
             exe = fluid.Executor(paddle.device.MLUPlace(0))
             (res,) = exe.run(
                 feed={'x': data}, fetch_list=[z.name], return_numpy=False
...
python/paddle/fluid/tests/unittests/npu/test_assign_value_op_npu.py
...
@@ -94,7 +94,7 @@ class TestAssignApi(unittest.TestCase):
         main_program = fluid.Program()
         with fluid.program_guard(main_program):
             x = paddle.tensor.create_tensor(dtype=self.dtype)
-            layers.assign(input=self.value, output=x)
+            paddle.assign(self.value, output=x)

         exe = fluid.Executor(self.place)
         [fetched_x] = exe.run(main_program, feed={}, fetch_list=[x])
...
python/paddle/fluid/tests/unittests/npu/test_concat_op_npu.py
...
@@ -169,7 +169,7 @@ class TestConcatAPIWithLoDTensorArray(unittest.TestCase):
         if use_fluid_api:
             self.program = fluid.Program()
             with fluid.program_guard(self.program):
-                input = fluid.layers.assign(self.x)
+                input = paddle.assign(self.x)
                 tensor_array = paddle.tensor.create_array(dtype='float32')
                 zero = fluid.layers.fill_constant(
                     shape=[1], value=0, dtype="int64"
...
@@ -178,7 +178,7 @@ class TestConcatAPIWithLoDTensorArray(unittest.TestCase):
                 for i in range(self.iter_num):
                     paddle.tensor.array_write(input, zero + i, tensor_array)
-                self.out_var = fluid.layers.concat(tensor_array, axis=self.axis)
+                self.out_var = paddle.concat(tensor_array, axis=self.axis)
         else:
             self.program = paddle.static.Program()
             with paddle.static.program_guard(self.program):
...
python/paddle/fluid/tests/unittests/npu/test_one_hot_v2_op_npu.py
...
@@ -210,7 +210,7 @@ class TestOneHotOpApi(unittest.TestCase):
         self._run(depth)

     def test_api_with_depthTensor(self):
-        depth = fluid.layers.assign(input=np.array([10], dtype=np.int32))
+        depth = paddle.assign(np.array([10], dtype=np.int32))
         self._run(depth)

     def test_api_with_dygraph(self):
...
python/paddle/fluid/tests/unittests/npu/test_stack_op_npu.py
...
@@ -137,7 +137,7 @@ class TestStackAPIWithLoDTensorArray(unittest.TestCase):
     def set_program(self):
         self.program = fluid.Program()
         with fluid.program_guard(self.program):
-            input = fluid.layers.assign(self.x)
+            input = paddle.assign(self.x)
             tensor_array = paddle.tensor.create_array(dtype='float32')
             zero = fluid.layers.fill_constant(shape=[1], value=0, dtype="int64")
...
@@ -175,7 +175,7 @@ class TestTensorStackAPIWithLoDTensorArray(unittest.TestCase):
     def set_program(self):
         self.program = fluid.Program()
         with fluid.program_guard(self.program):
-            input = fluid.layers.assign(self.x)
+            input = paddle.assign(self.x)
             tensor_array = paddle.tensor.create_array(dtype='float32')
             zero = fluid.layers.fill_constant(shape=[1], value=0, dtype="int64")
...
python/paddle/fluid/tests/unittests/npu/test_while_op_npu.py
浏览文件 @
b85af464
...
@@ -38,7 +38,7 @@ class TestWhileOp(unittest.TestCase):
        )
        # fill_constant npu op doesn't support int64
        i = layers.zeros(shape=[1], dtype='int32')
-       i = layers.cast(i, 'int64')
+       i = paddle.cast(i, 'int64')
        i.stop_gradient = True
        init = layers.zeros(shape=[10], dtype='float32')
        mem_array = paddle.tensor.array_write(x=init, i=i)
...
@@ -48,28 +48,28 @@ class TestWhileOp(unittest.TestCase):
            i = paddle.increment(i)
            paddle.tensor.array_write(d2, i, array=data_array)
        i = layers.zeros(shape=[1], dtype='int32')
-       i = layers.cast(i, 'int64')
+       i = paddle.cast(i, 'int64')
        i.stop_gradient = True
        array_len = layers.fill_constant(shape=[1], dtype='int32', value=5)
-       array_len = layers.cast(array_len, 'int64')
+       array_len = paddle.cast(array_len, 'int64')
        array_len.stop_gradient = True
        cond = paddle.ones(shape=[1], dtype='int32')
-       cond = layers.cast(cond, 'bool')
+       cond = paddle.cast(cond, 'bool')
        j = layers.fill_constant(shape=[1], dtype='int32', value=1)
-       j = layers.cast(j, 'int64')
+       j = paddle.cast(j, 'int64')
        j.stop_gradient = True
        array_len2 = layers.fill_constant(shape=[1], dtype='int32', value=3)
-       array_len2 = layers.cast(array_len2, 'int64')
+       array_len2 = paddle.cast(array_len2, 'int64')
        array_len2.stop_gradient = True
        cond2 = paddle.logical_or(x=j, y=array_len2)
        cond2 = paddle.ones(shape=[1], dtype='int32')
-       cond2 = layers.cast(cond2, 'bool')
+       cond2 = paddle.cast(cond2, 'bool')
        while_op = paddle.static.nn.control_flow.While(cond=cond)
        while_op2 = paddle.static.nn.control_flow.While(cond=cond2)
        with while_op.block():
            d = paddle.tensor.array_read(array=data_array, i=i)
            prev = paddle.tensor.array_read(array=mem_array, i=i)
-           result = layers.sums(input=[d, prev])
+           result = paddle.add_n([d, prev])
            i = paddle.increment(x=i)
            paddle.tensor.array_write(result, i=i, array=mem_array)
...
@@ -78,7 +78,7 @@ class TestWhileOp(unittest.TestCase):
        with while_op2.block():
            d2 = paddle.tensor.array_read(array=data_array, i=j)
            prev2 = paddle.tensor.array_read(array=mem_array, i=j)
-           result2 = layers.sums(input=[d2, prev2])
+           result2 = paddle.add_n([d2, prev2])
            j = paddle.increment(x=j)
            paddle.tensor.array_write(result2, i=j, array=mem_array)
...
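The two substitutions in this file recur throughout the commit: layers.cast(x, dtype) becomes paddle.cast(x, dtype), and layers.sums(input=[...]) becomes paddle.add_n([...]), which drops the input= keyword and takes the list positionally. A minimal eager-mode sketch of the replacement calls (tensor values are illustrative, not taken from the test):

    import paddle

    i = paddle.zeros(shape=[1], dtype='int32')
    i = paddle.cast(i, 'int64')        # replaces layers.cast(i, 'int64')

    d = paddle.ones(shape=[10], dtype='float32')
    prev = paddle.ones(shape=[10], dtype='float32')
    result = paddle.add_n([d, prev])   # replaces layers.sums(input=[d, prev])
    print(result.numpy())              # elementwise sum: ten 2.0 values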
python/paddle/fluid/tests/unittests/sequence/test_sequence_pad_op.py
View file @ b85af464
...
@@ -17,11 +17,10 @@ import unittest
 import numpy as np
+import paddle
 sys.path.append("../")
 from op_test import OpTest
-import paddle
 import paddle.fluid as fluid
 import paddle.fluid.core as core
...
@@ -156,9 +155,8 @@ class TestSequencePadOpError(unittest.TestCase):
        def test_x_variable():
            # the input x type must be Variable
            x = np.random.random((2, 4)).astype("float32")
-           pad_value = fluid.layers.assign(
-               input=np.array([0.0], dtype=np.float32)
-           )
+           pad_value = paddle.assign(np.array([0.0], dtype=np.float32))
            paddle.static.nn.sequence_lod.sequence_pad(x=x, pad_value=pad_value)

        self.assertRaises(TypeError, test_x_variable)
...
@@ -178,9 +176,8 @@ class TestSequencePadOpError(unittest.TestCase):
            x2 = paddle.static.data(
                name='x2', shape=[-1, 10, 5], dtype='int16', lod_level=1
            )
-           pad_value2 = fluid.layers.assign(
-               input=np.array([0.0], dtype=np.int32)
-           )
+           pad_value2 = paddle.assign(np.array([0.0], dtype=np.int32))
            paddle.static.nn.sequence_lod.sequence_pad(
                x=x2, pad_value=pad_value2
            )
...
@@ -189,7 +186,8 @@ class TestSequencePadOpError(unittest.TestCase):
    def test_length_dtype(self):
        x = fluid.data(name='x', shape=[10, 5], dtype='float32', lod_level=1)
-       pad_value = fluid.layers.assign(np.array([0.0], dtype=np.float32))
+       pad_value = paddle.assign(np.array([0.0], dtype=np.float32))
        out, length = paddle.static.nn.sequence_lod.sequence_pad(
            x=x, pad_value=pad_value
        )
...
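paddle.assign accepts a numpy array directly and returns a new tensor, so the old fluid.layers.assign(input=...) keyword and the multi-line wrapping both disappear. A small sketch, assuming eager mode:

    import numpy as np
    import paddle

    # replaces fluid.layers.assign(input=np.array([0.0], dtype=np.float32))
    pad_value = paddle.assign(np.array([0.0], dtype=np.float32))
    print(pad_value.numpy())  # [0.]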
python/paddle/fluid/tests/unittests/standalone_executor/test_standalone_multiply_write.py
View file @ b85af464
...
@@ -35,8 +35,8 @@ class TestMultiplyWrite(TestCompatibility):
            inp1 = paddle.full((1,), 2)
            inp2 = paddle.full((1,), 3)
-           paddle.fluid.layers.assign(inp1, out)
-           paddle.fluid.layers.assign(inp2, out)
+           paddle.assign(inp1, out)
+           paddle.assign(inp2, out)
        return main_program, startup_program, out

    def setUp(self):
...
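paddle.assign keeps the two-argument form used in this test: the first tensor is copied into the second, and a later assign overwrites an earlier one. Sketch:

    import paddle

    out = paddle.zeros([1])
    inp1 = paddle.full((1,), 2)
    inp2 = paddle.full((1,), 3)
    paddle.assign(inp1, out)   # out now holds 2.0
    paddle.assign(inp2, out)   # the second write wins
    print(out.numpy())         # [3.]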
python/paddle/fluid/tests/unittests/test_array_read_write_op.py
View file @ b85af464
...
@@ -47,13 +47,13 @@ def _test_read_write(x):
    mean_a1 = paddle.mean(a1)
    mean_a2 = paddle.mean(a2)
-   a_sum = layers.sums(input=[mean_a0, mean_a1, mean_a2])
+   a_sum = paddle.add_n([mean_a0, mean_a1, mean_a2])
    mean_x0 = paddle.mean(x[0])
    mean_x1 = paddle.mean(x[1])
    mean_x2 = paddle.mean(x[2])
-   x_sum = layers.sums(input=[mean_x0, mean_x1, mean_x2])
+   x_sum = paddle.add_n([mean_x0, mean_x1, mean_x2])
    return a_sum, x_sum
...
@@ -81,7 +81,7 @@ class TestArrayReadWrite(unittest.TestCase):
            )
            self.assertEqual(outs[0], outs[1])
-           total_sum = layers.sums(input=[a_sum, x_sum])
+           total_sum = paddle.add_n([a_sum, x_sum])
            total_sum_scaled = paddle.scale(x=total_sum, scale=1 / 6.0)
            append_backward(total_sum_scaled)
...
@@ -116,9 +116,7 @@ class TestArrayReadWrite(unittest.TestCase):
            a_sum_dygraph, x_sum_dygraph = _test_read_write(x_dygraph)
            self.assertEqual(a_sum_dygraph, x_sum_dygraph)
-           total_sum_dygraph = layers.sums(
-               input=[a_sum_dygraph, x_sum_dygraph]
-           )
+           total_sum_dygraph = paddle.add_n([a_sum_dygraph, x_sum_dygraph])
            total_sum_scaled_dygraph = paddle.scale(
                x=total_sum_dygraph, scale=1 / 6.0
            )
...
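As above, layers.sums over a Python list of tensors maps one-to-one onto paddle.add_n; paired with paddle.scale it produces the same mean-of-means this test checks. A reduced sketch with three inputs instead of six:

    import paddle

    means = [paddle.mean(paddle.to_tensor([v])) for v in (1.0, 2.0, 3.0)]
    total = paddle.add_n(means)                    # replaces layers.sums(input=means)
    scaled = paddle.scale(x=total, scale=1 / 3.0)  # average of the three means
    print(float(scaled))                           # 2.0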
python/paddle/fluid/tests/unittests/test_assign_op.py
View file @ b85af464
...
@@ -78,7 +78,7 @@ class TestAssignOpWithLoDTensorArray(unittest.TestCase):
            z = paddle.add(x=x, y=y)
            i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
            init_array = paddle.tensor.array_write(x=z, i=i)
-           array = fluid.layers.assign(init_array)
+           array = paddle.assign(init_array)
            sums = paddle.tensor.array_read(array=init_array, i=i)
            mean = paddle.mean(sums)
            append_backward(mean)
...
@@ -110,10 +110,10 @@ class TestAssignOpError(unittest.TestCase):
            x1 = fluid.create_lod_tensor(
                np.array([[-1]]), [[1]], fluid.CPUPlace()
            )
-           self.assertRaises(TypeError, fluid.layers.assign, x1)
+           self.assertRaises(TypeError, paddle.assign, x1)
            # When the type of input is numpy.ndarray, the dtype of input must be float32, int32.
            x2 = np.array([[2.5, 2.5]], dtype='uint8')
-           self.assertRaises(TypeError, fluid.layers.assign, x2)
+           self.assertRaises(TypeError, paddle.assign, x2)
            paddle.disable_static()
...
@@ -252,7 +252,7 @@ class TestAssignOpErrorApi(unittest.TestCase):
class TestAssignDoubleGradCheck(unittest.TestCase):
    def assign_wrapper(self, x):
-       return paddle.fluid.layers.assign(x[0])
+       return paddle.assign(x[0])

    @prog_scope()
    def func(self, place):
...
@@ -262,7 +262,7 @@ class TestAssignDoubleGradCheck(unittest.TestCase):
        data = paddle.static.data('data', [3, 4, 5], dtype)
        data.persistable = True
-       out = paddle.fluid.layers.assign(data)
+       out = paddle.assign(data)
        data_arr = np.random.uniform(-1, 1, data.shape).astype(dtype)
        gradient_checker.double_grad_check(
...
@@ -283,7 +283,7 @@ class TestAssignDoubleGradCheck(unittest.TestCase):
class TestAssignTripleGradCheck(unittest.TestCase):
    def assign_wrapper(self, x):
-       return paddle.fluid.layers.assign(x[0])
+       return paddle.assign(x[0])

    @prog_scope()
    def func(self, place):
...
@@ -293,7 +293,7 @@ class TestAssignTripleGradCheck(unittest.TestCase):
        data = paddle.static.data('data', [3, 4, 5], dtype)
        data.persistable = True
-       out = paddle.fluid.layers.assign(data)
+       out = paddle.assign(data)
        data_arr = np.random.uniform(-1, 1, data.shape).astype(dtype)
        gradient_checker.triple_grad_check(
...
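The error-path assertions keep their meaning after the rename: paddle.assign still raises TypeError for unsupported inputs such as a uint8 ndarray, which is exactly what the test asserts. A sketch mirroring that assertion:

    import numpy as np
    import paddle

    x2 = np.array([[2.5, 2.5]], dtype='uint8')
    try:
        # per the test comment, ndarray inputs must be float32, int32, etc.
        paddle.assign(x2)
    except TypeError as err:
        print("rejected as expected:", err)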
python/paddle/fluid/tests/unittests/test_assign_value_op.py
View file @ b85af464
...
@@ -20,7 +20,6 @@ import op_test
 import paddle
 import paddle.fluid as fluid
 import paddle.fluid.framework as framework
-import paddle.fluid.layers as layers

 paddle.enable_static()
...
@@ -84,7 +83,7 @@ class TestAssignApi(unittest.TestCase):
        main_program = fluid.Program()
        with fluid.program_guard(main_program):
            x = paddle.tensor.create_tensor(dtype=self.dtype)
-           layers.assign(input=self.value, output=x)
+           paddle.assign(self.value, output=x)
        exe = fluid.Executor(self.place)
        [fetched_x] = exe.run(main_program, feed={}, fetch_list=[x])
...
python/paddle/fluid/tests/unittests/test_cast_op.py
View file @ b85af464
...
@@ -114,7 +114,7 @@ class TestCastOpError(unittest.TestCase):
            x1 = fluid.create_lod_tensor(
                np.array([[-1]]), [[1]], fluid.CPUPlace()
            )
-           self.assertRaises(TypeError, fluid.layers.cast, x1, 'int32')
+           self.assertRaises(TypeError, paddle.cast, x1, 'int32')

class TestCastOpEager(unittest.TestCase):
...
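paddle.cast has the same (x, dtype) signature as the removed fluid.layers.cast, so only the prefix changes at every call site. Sketch:

    import paddle

    x = paddle.to_tensor([1.2, 3.7])
    y = paddle.cast(x, 'int32')  # replaces fluid.layers.cast(x, 'int32')
    print(y.numpy())             # [1 3], float-to-int cast truncates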
python/paddle/fluid/tests/unittests/test_communicator_geo.py
View file @ b85af464
...
@@ -49,7 +49,8 @@ class TestCommunicatorGeoEnd2End(unittest.TestCase):
        pool = paddle.static.nn.sequence_lod.sequence_pool(
            input=emb, pool_type="sum"
        )
-       z = fluid.layers.concat(input=[x, pool], axis=1)
+       z = paddle.concat([x, pool], axis=1)
        y_predict = paddle.static.nn.fc(x=z, size=1)
        y = paddle.static.data(name='y', shape=[-1, 1], dtype='float32')
        cost = paddle.nn.functional.square_error_cost(input=y_predict, label=y)
...
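fluid.layers.concat took the tensor list through input=; paddle.concat takes it as the first positional argument (keyword name x). Sketch:

    import paddle

    x = paddle.ones([2, 3])
    pool = paddle.zeros([2, 4])
    # replaces fluid.layers.concat(input=[x, pool], axis=1)
    z = paddle.concat([x, pool], axis=1)
    print(z.shape)  # [2, 7]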
python/paddle/fluid/tests/unittests/test_compare_op.py
View file @ b85af464
...
@@ -450,8 +450,8 @@ class API_TestElementwise_Equal(unittest.TestCase):
    def test_api(self):
        paddle.enable_static()
        with fluid.program_guard(fluid.Program(), fluid.Program()):
-           label = fluid.layers.assign(np.array([3, 3], dtype="int32"))
-           limit = fluid.layers.assign(np.array([3, 2], dtype="int32"))
+           label = paddle.assign(np.array([3, 3], dtype="int32"))
+           limit = paddle.assign(np.array([3, 2], dtype="int32"))
            out = paddle.equal(x=label, y=limit)
            place = fluid.CPUPlace()
            exe = fluid.Executor(place)
...
@@ -459,8 +459,8 @@ class API_TestElementwise_Equal(unittest.TestCase):
        self.assertEqual((res == np.array([True, False])).all(), True)

        with fluid.program_guard(fluid.Program(), fluid.Program()):
-           label = fluid.layers.assign(np.array([3, 3], dtype="int32"))
-           limit = fluid.layers.assign(np.array([3, 3], dtype="int32"))
+           label = paddle.assign(np.array([3, 3], dtype="int32"))
+           limit = paddle.assign(np.array([3, 3], dtype="int32"))
            out = paddle.equal(x=label, y=limit)
            place = fluid.CPUPlace()
            exe = fluid.Executor(place)
...
@@ -474,8 +474,8 @@ class TestCompareOpPlace(unittest.TestCase):
        place = paddle.CPUPlace()
        if core.is_compiled_with_cuda():
            place = paddle.CUDAPlace(0)
-       label = fluid.layers.assign(np.array([3, 3], dtype="int32"))
-       limit = fluid.layers.assign(np.array([3, 2], dtype="int32"))
+       label = paddle.assign(np.array([3, 3], dtype="int32"))
+       limit = paddle.assign(np.array([3, 2], dtype="int32"))
        out = paddle.less_than(label, limit)
        exe = fluid.Executor(place)
        (res,) = exe.run(fetch_list=[out])
...
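The comparison ops are untouched; only the operand construction moves to paddle.assign. An eager-mode sketch of the first case above:

    import numpy as np
    import paddle

    label = paddle.assign(np.array([3, 3], dtype="int32"))
    limit = paddle.assign(np.array([3, 2], dtype="int32"))
    print(paddle.equal(x=label, y=limit).numpy())  # [ True False]
    print(paddle.less_than(label, limit).numpy())  # [False False]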
python/paddle/fluid/tests/unittests/test_compare_reduce_op.py
View file @ b85af464
...
@@ -18,7 +18,6 @@ import numpy as np
 import op_test
 import paddle
-import paddle.fluid as fluid

 def create_test_not_equal_class(op_type, typename, callback):
...
@@ -107,8 +106,8 @@ for _type_name in {'float32', 'float64', 'int32', 'int64', 'bool'}:
class TestEqualReduceAPI(unittest.TestCase):
    def test_name(self):
-       x = fluid.layers.assign(np.array([3, 4], dtype="int32"))
-       y = fluid.layers.assign(np.array([3, 4], dtype="int32"))
+       x = paddle.assign(np.array([3, 4], dtype="int32"))
+       y = paddle.assign(np.array([3, 4], dtype="int32"))
        out = paddle.equal_all(x, y, name='equal_res')
        assert 'equal_res' in out.name
...
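paddle.equal_all collapses the elementwise comparison into a single boolean and, as the assertion checks, still threads name= through to the output variable in static graphs. Eager sketch:

    import numpy as np
    import paddle

    x = paddle.assign(np.array([3, 4], dtype="int32"))
    y = paddle.assign(np.array([3, 4], dtype="int32"))
    out = paddle.equal_all(x, y)
    print(bool(out))  # True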
python/paddle/fluid/tests/unittests/test_concat_op.py
View file @ b85af464
...
@@ -249,8 +249,10 @@ class TestConcatOpError(unittest.TestCase):
    def test_errors(self):
        with program_guard(Program(), Program()):
            # The input type of concat_op should be list.
            x1 = paddle.static.data(shape=[-1, 4], dtype='int32', name='x1')
-           fluid.layers.concat(x1)
+           paddle.concat(x1)
            # The item in input must be Variable.
            x2 = fluid.create_lod_tensor(
                np.array([[-1]]), [[1]], fluid.CPUPlace()
...
@@ -258,24 +260,25 @@ class TestConcatOpError(unittest.TestCase):
            x3 = fluid.create_lod_tensor(
                np.array([[-1]]), [[1]], fluid.CPUPlace()
            )
-           self.assertRaises(TypeError, fluid.layers.concat, [x2])
+           self.assertRaises(TypeError, paddle.concat, [x2])
            # The input dtype of concat_op must be float16, float32, float64, int32, int64.
            x4 = paddle.static.data(shape=[-1, 4], dtype='uint8', name='x4')
            x5 = paddle.static.data(shape=[-1, 4], dtype='uint8', name='x5')
-           self.assertRaises(TypeError, fluid.layers.concat, [x4, x5])
+           self.assertRaises(TypeError, paddle.concat, [x4, x5])
            x6 = paddle.static.data(shape=[-1, 4], dtype='float16', name='x6')
            x7 = paddle.static.data(shape=[-1, 4], dtype='float16', name='x7')
            x8 = paddle.static.data(shape=[-1, 4], dtype='float32', name='x8')
-           fluid.layers.concat([x6, x7])
+           paddle.concat([x6, x7])

            # The type of axis in concat_op should be int or Variable.
            def test_axis_type():
-               fluid.layers.concat([x6, x7], 3.2)
+               paddle.concat([x6, x7], 3.2)

            self.assertRaises(TypeError, test_axis_type)

            def test_input_same_dtype():
-               fluid.layers.concat([x7, x8])
+               paddle.concat([x7, x8])

            self.assertRaises(TypeError, test_input_same_dtype)
...
@@ -284,7 +287,7 @@ class TestConcatAPI(unittest.TestCase):
    def test_fluid_api(self):
        paddle.enable_static()
        x_1 = fluid.data(shape=[None, 1, 4, 5], dtype='int32', name='x_1')
-       fluid.layers.concat([x_1, x_1], 0)
+       paddle.concat([x_1, x_1], 0)
        input_2 = np.random.random([2, 1, 4, 5]).astype("int32")
        input_3 = np.random.random([2, 2, 4, 5]).astype("int32")
...
@@ -292,9 +295,9 @@ class TestConcatAPI(unittest.TestCase):
        x_3 = fluid.data(shape=[2, 2, 4, 5], dtype='int32', name='x_3')
        positive_1_int32 = fluid.layers.fill_constant([1], "int32", 1)
        positive_1_int64 = fluid.layers.fill_constant([1], "int64", 1)
-       out_1 = fluid.layers.concat(input=[x_2, x_3], axis=1)
-       out_2 = fluid.layers.concat(input=[x_2, x_3], axis=positive_1_int32)
-       out_3 = fluid.layers.concat(input=[x_2, x_3], axis=positive_1_int64)
+       out_1 = paddle.concat([x_2, x_3], axis=1)
+       out_2 = paddle.concat([x_2, x_3], axis=positive_1_int32)
+       out_3 = paddle.concat([x_2, x_3], axis=positive_1_int64)
        exe = fluid.Executor(place=fluid.CPUPlace())
        [res_1, res_2, res_3] = exe.run(
...
@@ -344,7 +347,7 @@ class TestConcatAPI(unittest.TestCase):
        x1 = paddle.to_tensor(in1)
        x2 = paddle.to_tensor(in2)
        x3 = paddle.to_tensor(in3)
-       out1 = fluid.layers.concat(input=[x1, x2, x3], axis=-1)
+       out1 = paddle.concat([x1, x2, x3], axis=-1)
        out2 = paddle.concat(x=[x1, x2], axis=0)
        np_out1 = np.concatenate([in1, in2, in3], axis=-1)
        np_out2 = np.concatenate([in1, in2], axis=0)
...
@@ -365,7 +368,7 @@ class TestConcatAPI(unittest.TestCase):
        # The input dtype of concat_op must be float16, float32, float64, int32, int64.
        x4 = paddle.fluid.data(shape=[4], dtype='uint8', name='x4')
        x5 = paddle.fluid.data(shape=[4], dtype='uint8', name='x5')
-       self.assertRaises(TypeError, fluid.layers.concat, [x4, x5])
+       self.assertRaises(TypeError, paddle.concat, [x4, x5])
        # The type of axis in concat_op should be int or Variable.
        x6 = paddle.static.data(shape=[-1, 4], dtype='float16', name='x6')
...
@@ -405,7 +408,7 @@ class TestConcatAPIWithLoDTensorArray(unittest.TestCase):
        if use_fluid_api:
            self.program = fluid.Program()
            with fluid.program_guard(self.program):
-               input = fluid.layers.assign(self.x)
+               input = paddle.assign(self.x)
                tensor_array = paddle.tensor.create_array(dtype='float32')
                zero = fluid.layers.fill_constant(
                    shape=[1], value=0, dtype="int64"
...
@@ -414,7 +417,7 @@ class TestConcatAPIWithLoDTensorArray(unittest.TestCase):
                for i in range(self.iter_num):
                    paddle.tensor.array_write(input, zero + i, tensor_array)
-               self.out_var = fluid.layers.concat(tensor_array, axis=self.axis)
+               self.out_var = paddle.concat(tensor_array, axis=self.axis)
        else:
            self.program = paddle.static.Program()
            with paddle.static.program_guard(self.program):
...
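As the hunk at @@ -292 shows, axis may be a Python int or a one-element integer tensor, and both forms carry over to paddle.concat unchanged. Sketch:

    import paddle

    x_2 = paddle.ones([2, 1, 4, 5], dtype='int32')
    x_3 = paddle.ones([2, 2, 4, 5], dtype='int32')
    axis_t = paddle.full([1], 1, dtype='int32')  # tensor-valued axis
    out_1 = paddle.concat([x_2, x_3], axis=1)
    out_2 = paddle.concat([x_2, x_3], axis=axis_t)
    print(out_1.shape, out_2.shape)  # [2, 3, 4, 5] [2, 3, 4, 5]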
python/paddle/fluid/tests/unittests/test_conditional_block.py
View file @ b85af464
...
@@ -36,7 +36,7 @@ class ConditionalBlockTest(unittest.TestCase):
        out = paddle.tensor.create_tensor(dtype='float32')
        with cond.block():
            hidden = paddle.static.nn.fc(x=data, size=10)
-           layers.assign(hidden, out)
+           paddle.assign(hidden, out)

        cpu = core.CPUPlace()
        exe = Executor(cpu)
...
python/paddle/fluid/tests/unittests/test_dataset.py
View file @ b85af464
...
@@ -949,7 +949,7 @@ class TestDatasetWithFetchHandler(unittest.TestCase):
            data = paddle.static.data(
                name=slot, shape=[-1, 1], dtype="int64", lod_level=1
            )
-           var = fluid.layers.cast(x=data, dtype='float32')
+           var = paddle.cast(x=data, dtype='float32')
            pool = paddle.static.nn.sequence_lod.sequence_pool(
                input=var, pool_type='AVERAGE'
            )
...
@@ -957,7 +957,7 @@ class TestDatasetWithFetchHandler(unittest.TestCase):
            slots_vars.append(data)
            poolings.append(pool)
-       concated = fluid.layers.concat(poolings, axis=1)
+       concated = paddle.concat(poolings, axis=1)
        fc = paddle.static.nn.fc(x=concated, activation='tanh', size=32)
        return slots_vars, fc
...
python/paddle/fluid/tests/unittests/test_dist_fleet_heter_program.py
View file @ b85af464
...
@@ -95,7 +95,7 @@ class TestDistFleetHeterProgram(unittest.TestCase):
        sparse_embed_seq = list(map(embedding_layer, inputs[1:-1]))
-       concated = fluid.layers.concat(sparse_embed_seq + inputs[0:1], axis=1)
+       concated = paddle.concat(sparse_embed_seq + inputs[0:1], axis=1)
        with fluid.device_guard("gpu"):
            fc1 = paddle.static.nn.fc(
...
@@ -149,7 +149,7 @@ class TestDistFleetHeterProgram(unittest.TestCase):
        )
        with fluid.device_guard("gpu"):
-           labels = fluid.layers.cast(inputs[-1], dtype="int64")
+           labels = paddle.cast(inputs[-1], dtype="int64")
            cost = paddle.nn.functional.cross_entropy(
                input=predict, label=labels, reduction='none', use_softmax=False
            )
...
python/paddle/fluid/tests/unittests/test_dist_fleet_minimize.py
View file @ b85af464
...
@@ -37,7 +37,7 @@ class TestPSMinimize(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
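The get_acc helper is truncated in the hunk above (the second operand of paddle.divide is not visible), but its visible part counts how often the positive cosine similarity beats the negative one. A hypothetical self-contained version, with the divisor assumed to normalize by batch size:

    import paddle

    def get_acc(cos_q_nt, cos_q_pt, batch_size):
        cond = paddle.less_than(cos_q_nt, cos_q_pt)  # positive similarity wins
        cond = paddle.cast(cond, dtype='float64')    # replaces fluid.layers.cast
        cond_3 = paddle.sum(cond)
        # assumption: divide by batch size to get a fraction; the real divisor is elided above
        return paddle.divide(cond_3, paddle.full([1], batch_size, dtype='float64'))

    nt = paddle.to_tensor([0.1, 0.9], dtype='float64')
    pt = paddle.to_tensor([0.5, 0.4], dtype='float64')
    print(get_acc(nt, pt, 2.0).numpy())  # [0.5]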
python/paddle/fluid/tests/unittests/test_dist_fleet_ps.py
View file @ b85af464
...
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
python/paddle/fluid/tests/unittests/test_dist_fleet_ps11.py
View file @ b85af464
...
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
python/paddle/fluid/tests/unittests/test_dist_fleet_ps12.py
View file @ b85af464
...
@@ -40,7 +40,7 @@ class TestPSPassWithBow(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
python/paddle/fluid/tests/unittests/test_dist_fleet_ps13.py
View file @ b85af464
...
@@ -41,7 +41,7 @@ class TestPSPassWithBow(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
python/paddle/fluid/tests/unittests/test_dist_fleet_ps2.py
View file @ b85af464
...
@@ -40,7 +40,7 @@ class TestPSPassWithBow(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
python/paddle/fluid/tests/unittests/test_dist_fleet_ps3.py
View file @ b85af464
...
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
python/paddle/fluid/tests/unittests/test_dist_fleet_ps4.py
View file @ b85af464
...
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
python/paddle/fluid/tests/unittests/test_dist_fleet_ps5.py
View file @ b85af464
...
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
python/paddle/fluid/tests/unittests/test_dist_fleet_ps6.py
View file @ b85af464
...
@@ -37,7 +37,7 @@ class TestPSPassWithBow(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
python/paddle/fluid/tests/unittests/test_dist_fleet_sparse_embedding_ctr.py
View file @ b85af464
...
@@ -252,7 +252,7 @@ class TestDistMnistAsync2x2WithGauss(TestFleetBase):
            lr_pool = paddle.static.nn.sequence_lod.sequence_pool(
                input=lr_embbding, pool_type="sum"
            )
-           merge_layer = fluid.layers.concat(input=[dnn_out, lr_pool], axis=1)
+           merge_layer = paddle.concat([dnn_out, lr_pool], axis=1)
            predict = paddle.static.nn.fc(
                x=merge_layer, size=2, activation='softmax'
            )
...
python/paddle/fluid/tests/unittests/test_dist_fleet_spmt.py
View file @ b85af464
...
@@ -35,7 +35,7 @@ class TestSPMT(unittest.TestCase):
    def net(self):
        def get_acc(cos_q_nt, cos_q_pt, batch_size):
            cond = paddle.less_than(cos_q_nt, cos_q_pt)
-           cond = fluid.layers.cast(cond, dtype='float64')
+           cond = paddle.cast(cond, dtype='float64')
            cond_3 = paddle.sum(cond)
            acc = paddle.divide(
                cond_3,
...
python/paddle/fluid/tests/unittests/test_dist_transpiler.py
View file @ b85af464
...
@@ -733,9 +733,7 @@ class TestDistLookupTableBase(TranspilerTest):
        title_emb = emb_pool(title_ids, self.lookup_table_name, is_distributed)
        brand_emb = emb_pool(brand_ids, self.lookup_table_name, is_distributed)
        profile_emb = emb_pool(profile_ids, "profile_emb", False)
-       fc0 = fluid.layers.concat(
-           input=[title_emb, brand_emb, profile_emb], axis=1
-       )
+       fc0 = paddle.concat([title_emb, brand_emb, profile_emb], axis=1)
        predict = paddle.static.nn.fc(
            x=fc0,
            size=2,
...
python/paddle/fluid/tests/unittests/test_dynamic_rnn_stop_gradient.py
View file @ b85af464
...
@@ -29,7 +29,7 @@ def build_and_run_program(place, batch_size, beam_size, stop_gradient=False):
    fluid.default_main_program().random_seed = 1
    np.random.seed(2)
-   x = layers.assign(
+   x = paddle.assign(
        np.random.rand(batch_size, beam_size, 32).astype("float32")
    )
    indices = fluid.data(shape=[None, beam_size], dtype="int64", name="indices")
...
@@ -43,9 +43,9 @@ def build_and_run_program(place, batch_size, beam_size, stop_gradient=False):
    while_op = paddle.static.nn.control_flow.While(cond)
    scores = paddle.tensor.array_write(x, step_idx)
    with while_op.block():
-       bs = layers.cast(paddle.shape(x)[0], "int64")
+       bs = paddle.cast(paddle.shape(x)[0], "int64")
        for _ in range(20):
-           bs = layers.cast(bs, 'int64')
+           bs = paddle.cast(bs, 'int64')
        bs.stop_gradient = stop_gradient
        batch_pos = paddle.expand(
            paddle.unsqueeze(paddle.arange(0, bs, 1, dtype=bs.dtype), [1]),
...
@@ -57,7 +57,7 @@ def build_and_run_program(place, batch_size, beam_size, stop_gradient=False):
        paddle.increment(x=step_idx, value=1.0)
        paddle.tensor.array_write(score, i=step_idx, array=scores)
        length_cond = paddle.less_than(x=step_idx, y=max_len)
-       layers.assign(length_cond, cond)
+       paddle.assign(length_cond, cond)
    out = tensor_array_to_tensor(scores, axis=0, use_stack=True)[0]
    loss = paddle.mean(out)
...
python/paddle/fluid/tests/unittests/test_eager_deletion_padding_rnn.py
View file @ b85af464
...
@@ -164,7 +164,7 @@ def lm_model(
            weight_1 = weight_1_arr[k]
            bias = bias_arr[k]
-           nn = layers.concat([input, pre_hidden], 1)
+           nn = paddle.concat([input, pre_hidden], 1)
            gate_input = paddle.matmul(x=nn, y=weight_1)
            gate_input = paddle.add(gate_input, bias)
...
@@ -230,8 +230,8 @@ def lm_model(
            )
            last_cell_array.append(last_c)
        real_res = paddle.transpose(x=real_res, perm=[1, 0, 2])
-       last_hidden = layers.concat(last_hidden_array, 0)
-       last_cell = layers.concat(last_cell_array, 0)
+       last_hidden = paddle.concat(last_hidden_array, 0)
+       last_cell = paddle.concat(last_cell_array, 0)
        return real_res, last_hidden, last_cell
...
@@ -288,7 +288,7 @@ def lm_model(
            weight_1 = weight_1_arr[k]
            bias = bias_arr[k]
-           nn = layers.concat([input, pre_hidden], 1)
+           nn = paddle.concat([input, pre_hidden], 1)
            gate_input = paddle.matmul(x=nn, y=weight_1)
            gate_input = paddle.add(gate_input, bias)
...
@@ -314,19 +314,19 @@ def lm_model(
            res.append(input)
-       last_hidden = layers.concat(hidden_array, 1)
+       last_hidden = paddle.concat(hidden_array, 1)
        last_hidden = paddle.reshape(
            last_hidden, shape=[-1, num_layers, hidden_size]
        )
        last_hidden = paddle.transpose(x=last_hidden, perm=[1, 0, 2])
-       last_cell = layers.concat(cell_array, 1)
+       last_cell = paddle.concat(cell_array, 1)
        last_cell = paddle.reshape(
            last_cell, shape=[-1, num_layers, hidden_size]
        )
        last_cell = paddle.transpose(x=last_cell, perm=[1, 0, 2])
-       real_res = layers.concat(res, 0)
+       real_res = paddle.concat(res, 0)
        real_res = paddle.reshape(real_res, shape=[len, -1, hidden_size])
        real_res = paddle.transpose(x=real_res, perm=[1, 0, 2])
...
@@ -439,8 +439,8 @@ def lm_model(
        # can be used directly in next batch. This can avoid the fetching of
        # last_hidden and last_cell and feeding of init_hidden and init_cell in
        # each training step.
-       layers.assign(input=last_cell, output=init_cell)
-       layers.assign(input=last_hidden, output=init_hidden)
+       paddle.assign(last_cell, output=init_cell)
+       paddle.assign(last_hidden, output=init_hidden)

        feeding_list = ['x', 'y', 'init_hidden', 'init_cell']
        return loss, last_hidden, last_cell, feeding_list
...
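Writing through output= preserves the in-place semantics of the old layers.assign(input=..., output=...): the final LSTM state is copied into the init tensors so the next batch can reuse them without a fetch/feed round trip, as the comment in the hunk explains. Sketch:

    import paddle

    init_cell = paddle.zeros([2])
    last_cell = paddle.to_tensor([1.0, 2.0])
    paddle.assign(last_cell, output=init_cell)  # copy state in place
    print(init_cell.numpy())  # [1. 2.]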
python/paddle/fluid/tests/unittests/test_eager_deletion_recurrent_op.py
View file @ b85af464
...
@@ -427,7 +427,7 @@ class EagerDeletionRecurrentOpMultipleMemoryTest(EagerDeletionRecurrentOpTest1):
            mem1 = paddle.scale(x=h_pre1, scale=1.0)
            mem2 = paddle.scale(x=h_pre2, scale=1.0)
-           out = layers.sums(input=[mem1, x_t, mem2])
+           out = paddle.add_n([mem1, x_t, mem2])
            rnn.update_memory(h_pre1, mem1)
            rnn.update_memory(h_pre2, mem2)
...
python/paddle/fluid/tests/unittests/test_eager_deletion_while_op.py
View file @ b85af464
...
@@ -104,7 +104,7 @@ class TestEagerDeletionWhileOpBase(unittest.TestCase):
            prev = paddle.tensor.array_read(array=mem_array, i=i)
            d = paddle.reshape(d, shape=[10])
            prev = paddle.reshape(prev, shape=[10])
-           result = layers.sums(input=[d, prev])
+           result = paddle.add_n([d, prev])
            i = paddle.increment(x=i)
            paddle.tensor.array_write(result, i=i, array=mem_array)
...
@@ -114,7 +114,7 @@ class TestEagerDeletionWhileOpBase(unittest.TestCase):
            prev2 = paddle.tensor.array_read(array=mem_array, i=j)
            d2 = paddle.reshape(d2, shape=[10])
            prev2 = paddle.reshape(prev2, shape=[10])
-           result2 = layers.sums(input=[d2, prev2])
+           result2 = paddle.add_n([d2, prev2])
            j = paddle.increment(x=j)
            paddle.tensor.array_write(result2, i=j, array=mem_array)
...
python/paddle/fluid/tests/unittests/test_embedding_id_stop_gradient.py
View file @ b85af464
...
@@ -51,7 +51,7 @@ class TestEmbeddingIdStopGradientBase(unittest.TestCase):
        with fluid.scope_guard(scope):
            x_1 = fluid.data(name='x1', shape=[4, 1], dtype='int64')
            x_2 = fluid.data(name='x2', shape=[4, 1], dtype='int64')
-           x = fluid.layers.concat([x_1, x_2], axis=-1)
+           x = paddle.concat([x_1, x_2], axis=-1)

            for _ in range(self.reshape_times):
                x = paddle.reshape(x, [-1, 1])
...
python/paddle/fluid/tests/unittests/test_fetch_var.py
View file @ b85af464
...
@@ -18,7 +18,6 @@ import numpy as np
 import paddle
 import paddle.fluid as fluid
-import paddle.fluid.layers as layers

 class TestFetchVar(unittest.TestCase):
...
@@ -30,7 +29,7 @@ class TestFetchVar(unittest.TestCase):
        x = paddle.tensor.create_tensor(
            dtype="int32", persistable=True, name="x"
        )
-       layers.assign(input=self.val, output=x)
+       paddle.assign(self.val, output=x)
        exe = fluid.Executor(fluid.CPUPlace())
        exe.run(fluid.default_main_program(), feed={}, fetch_list=[])
        fetched_x = fluid.executor._fetch_var("x")
...
python/paddle/fluid/tests/unittests/test_fleet.py
View file @ b85af464
...
@@ -78,7 +78,7 @@ class TestFleet1(unittest.TestCase):
            dtype="int64",
            lod_level=1,
        )
-       label_cast = fluid.layers.cast(label, dtype='float32')
+       label_cast = paddle.cast(label, dtype='float32')
        cost = paddle.nn.functional.log_loss(fc, label_cast)
        try:
            adam = fluid.optimizer.Adam(learning_rate=0.000005)
...
python/paddle/fluid/tests/unittests/test_fleet_nocvm_1.py
View file @ b85af464
...
@@ -72,7 +72,7 @@ class TestFleet1(unittest.TestCase):
            dtype="int64",
            lod_level=1,
        )
-       label_cast = fluid.layers.cast(label, dtype='float32')
+       label_cast = paddle.cast(label, dtype='float32')
        cost = paddle.nn.functional.log_loss(fc, label_cast)
        try:
            adam = fluid.optimizer.Adam(learning_rate=0.000005)
...
python/paddle/fluid/tests/unittests/test_fleet_rolemaker.py
View file @ b85af464
...
@@ -89,7 +89,7 @@ class TestCloudRoleMaker(unittest.TestCase):
        label = paddle.static.data(
            name="click", shape=[-1, 1], dtype="int64", lod_level=1
        )
-       label_cast = fluid.layers.cast(label, dtype='float32')
+       label_cast = paddle.cast(label, dtype='float32')
        cost = paddle.nn.functional.log_loss(fc, label_cast)
        try:
            adam = fluid.optimizer.Adam(learning_rate=0.000005)
...
python/paddle/fluid/tests/unittests/test_fleet_rolemaker_2.py
View file @ b85af464
...
@@ -70,7 +70,7 @@ class TestCloudRoleMaker2(unittest.TestCase):
        label = paddle.static.data(
            name="click", shape=[-1, 1], dtype="int64", lod_level=1
        )
-       label_cast = fluid.layers.cast(label, dtype='float32')
+       label_cast = paddle.cast(label, dtype='float32')
        cost = paddle.nn.functional.log_loss(fc, label_cast)
        try:
            adam = fluid.optimizer.Adam(learning_rate=0.000005)
...
python/paddle/fluid/tests/unittests/test_fleet_rolemaker_3.py
View file @ b85af464
...
@@ -63,7 +63,7 @@ class TestCloudRoleMaker(unittest.TestCase):
        label = paddle.static.data(
            name="click", shape=[-1, 1], dtype="int64", lod_level=1
        )
-       label_cast = fluid.layers.cast(label, dtype='float32')
+       label_cast = paddle.cast(label, dtype='float32')
        cost = paddle.nn.functional.log_loss(fc, label_cast)
        try:
            adam = fluid.optimizer.Adam(learning_rate=0.000005)
...
python/paddle/fluid/tests/unittests/test_fleet_unitaccessor.py
View file @ b85af464
...
@@ -66,7 +66,7 @@ class TestFleet1(unittest.TestCase):
        label = paddle.static.data(
            name="click", shape=[-1, 1], dtype="int64", lod_level=1
        )
-       label_cast = fluid.layers.cast(label, dtype='float32')
+       label_cast = paddle.cast(label, dtype='float32')
        cost = paddle.nn.functional.log_loss(fc, label_cast)
        strategy = {}
...
python/paddle/fluid/tests/unittests/test_imperative_auto_prune.py
View file @ b85af464
...
@@ -88,8 +88,8 @@ class AutoPruneLayer2(fluid.Layer):
...
@@ -88,8 +88,8 @@ class AutoPruneLayer2(fluid.Layer):
def
forward
(
self
,
x
,
label
):
def
forward
(
self
,
x
,
label
):
feature
=
self
.
linear
(
x
)
feature
=
self
.
linear
(
x
)
label
=
self
.
linear2
(
label
)
label
=
self
.
linear2
(
label
)
label
=
fluid
.
layers
.
cast
(
label
,
dtype
=
"float32"
)
label
=
paddle
.
cast
(
label
,
dtype
=
"float32"
)
label
=
fluid
.
layers
.
cast
(
label
,
dtype
=
'int64'
)
label
=
paddle
.
cast
(
label
,
dtype
=
'int64'
)
# Note that the label is not persistable in paddle.nn.functional.cross_entropy.
# Note that the label is not persistable in paddle.nn.functional.cross_entropy.
loss
=
paddle
.
nn
.
functional
.
cross_entropy
(
loss
=
paddle
.
nn
.
functional
.
cross_entropy
(
input
=
feature
,
label
=
label
,
reduction
=
'none'
,
use_softmax
=
False
input
=
feature
,
label
=
label
,
reduction
=
'none'
,
use_softmax
=
False
...
@@ -244,7 +244,7 @@ class TestImperativeAutoPrune(unittest.TestCase):
...
@@ -244,7 +244,7 @@ class TestImperativeAutoPrune(unittest.TestCase):
out1
=
linear
(
a
)
out1
=
linear
(
a
)
out2
=
linear2
(
b
)
out2
=
linear2
(
b
)
out1
.
stop_gradient
=
True
out1
.
stop_gradient
=
True
out
=
fluid
.
layers
.
concat
(
input
=
[
out1
,
out2
,
c
],
axis
=
1
)
out
=
paddle
.
concat
(
[
out1
,
out2
,
c
],
axis
=
1
)
out
.
backward
()
out
.
backward
()
self
.
assertIsNone
(
linear
.
weight
.
gradient
())
self
.
assertIsNone
(
linear
.
weight
.
gradient
())
self
.
assertIsNone
(
out1
.
gradient
())
self
.
assertIsNone
(
out1
.
gradient
())
...
@@ -262,7 +262,7 @@ class TestImperativeAutoPrune(unittest.TestCase):
...
@@ -262,7 +262,7 @@ class TestImperativeAutoPrune(unittest.TestCase):
out1
=
linear
(
a
)
out1
=
linear
(
a
)
out2
=
linear2
(
b
)
out2
=
linear2
(
b
)
out1
.
stop_gradient
=
True
out1
.
stop_gradient
=
True
out
=
fluid
.
layers
.
concat
(
input
=
[
out1
,
out2
,
c
],
axis
=
1
)
out
=
paddle
.
concat
(
[
out1
,
out2
,
c
],
axis
=
1
)
out
.
backward
()
out
.
backward
()
self
.
assertIsNone
(
linear
.
weight
.
gradient
())
self
.
assertIsNone
(
linear
.
weight
.
gradient
())
self
.
assertIsNone
(
out1
.
gradient
())
self
.
assertIsNone
(
out1
.
gradient
())
...
@@ -338,7 +338,7 @@ class TestImperativeAutoPrune(unittest.TestCase):
...
@@ -338,7 +338,7 @@ class TestImperativeAutoPrune(unittest.TestCase):
out1
=
linear
(
a
)
out1
=
linear
(
a
)
out2
=
linear2
(
b
)
out2
=
linear2
(
b
)
out1
.
stop_gradient
=
True
out1
.
stop_gradient
=
True
out
=
fluid
.
layers
.
concat
(
input
=
[
out1
,
out2
,
c
],
axis
=
1
)
out
=
paddle
.
concat
(
[
out1
,
out2
,
c
],
axis
=
1
)
# TODO(jiabin): In Eager Mode we don't actually need sort_sum_gradient, this test should be removed when we don't support fluid anymore.
# TODO(jiabin): In Eager Mode we don't actually need sort_sum_gradient, this test should be removed when we don't support fluid anymore.
fluid
.
set_flags
({
'FLAGS_sort_sum_gradient'
:
True
})
fluid
.
set_flags
({
'FLAGS_sort_sum_gradient'
:
True
})
out
.
backward
()
out
.
backward
()
...
@@ -413,8 +413,8 @@ class TestImperativeAutoPrune(unittest.TestCase):
...
@@ -413,8 +413,8 @@ class TestImperativeAutoPrune(unittest.TestCase):
linear
=
paddle
.
nn
.
Linear
(
1
,
1
)
linear
=
paddle
.
nn
.
Linear
(
1
,
1
)
label
=
fluid
.
dygraph
.
to_variable
(
value1
).
astype
(
"float32"
)
label
=
fluid
.
dygraph
.
to_variable
(
value1
).
astype
(
"float32"
)
label
=
linear
(
label
)
label
=
linear
(
label
)
label
=
fluid
.
layers
.
cast
(
label
,
dtype
=
"float32"
)
label
=
paddle
.
cast
(
label
,
dtype
=
"float32"
)
label
=
fluid
.
layers
.
cast
(
label
,
dtype
=
'int64'
)
label
=
paddle
.
cast
(
label
,
dtype
=
'int64'
)
out
=
paddle
.
nn
.
functional
.
one_hot
(
label
,
100
)
out
=
paddle
.
nn
.
functional
.
one_hot
(
label
,
100
)
loss
=
paddle
.
mean
(
out
)
loss
=
paddle
.
mean
(
out
)
loss
.
backward
()
loss
.
backward
()
...
...
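The concat migration also drops the input= keyword: paddle.concat takes the tensor list as its first positional argument (keyword name x). A minimal sketch:

    import paddle

    a = paddle.ones([2, 3])
    b = paddle.zeros([2, 2])
    out = paddle.concat([a, b], axis=1)  # list passed positionally, not input=
    print(out.shape)                     # [2, 5]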
python/paddle/fluid/tests/unittests/test_imperative_deepcf.py
@@ -105,9 +105,7 @@ class MLP(fluid.Layer):
    def forward(self, users, items):
        users = self._user_latent(users)
        items = self._item_latent(items)
-       match_vec = fluid.layers.concat(
-           [users, items], axis=len(users.shape) - 1
-       )
+       match_vec = paddle.concat([users, items], axis=len(users.shape) - 1)
        for l in self._match_layers:
            match_vec = l(match_vec)
        return match_vec
@@ -144,7 +142,7 @@ class DeepCF(fluid.Layer):
        mlp_predictive = self._mlp(users_emb, items_emb)
        dmf_predictive = self._dmf(users_emb, items_emb)
-       predictive = fluid.layers.concat(
+       predictive = paddle.concat(
            [mlp_predictive, dmf_predictive], axis=len(mlp_predictive.shape) - 1
        )
        prediction = self._match_fc(predictive)
...
python/paddle/fluid/tests/unittests/test_imperative_ocr_attention_model.py
@@ -189,7 +189,7 @@ class DynamicGRU(fluid.dygraph.Layer):
                res = [hidden_] + res
            else:
                res.append(hidden_)
-       res = fluid.layers.concat(res, axis=1)
+       res = paddle.concat(res, axis=1)
        return res
@@ -270,9 +270,7 @@ class EncoderNet(fluid.dygraph.Layer):
        gru_backward = self.gru_backward_layer(fc_2)
-       encoded_vector = fluid.layers.concat(
-           input=[gru_forward, gru_backward], axis=2
-       )
+       encoded_vector = paddle.concat([gru_forward, gru_backward], axis=2)
        encoded_proj = self.encoded_proj_fc(encoded_vector)
@@ -356,7 +354,7 @@ class GRUDecoderWithAttention(fluid.dygraph.Layer):
            out = paddle.nn.functional.softmax(out)
            res.append(out)
-       res1 = fluid.layers.concat(res, axis=1)
+       res1 = paddle.concat(res, axis=1)
        return res1
...
python/paddle/fluid/tests/unittests/test_imperative_ptb_rnn.py
@@ -106,7 +106,7 @@ class SimpleLSTMRNN(fluid.Layer):
            weight_1 = self.weight_1_arr[k]
            bias = self.bias_arr[k]
-           nn = fluid.layers.concat([self._input, pre_hidden], 1)
+           nn = paddle.concat([self._input, pre_hidden], 1)
            gate_input = paddle.matmul(x=nn, y=weight_1)
            gate_input = paddle.add(gate_input, bias)
@@ -130,14 +130,14 @@ class SimpleLSTMRNN(fluid.Layer):
            res.append(
                paddle.reshape(self._input, shape=[1, -1, self._hidden_size])
            )
-       real_res = fluid.layers.concat(res, 0)
+       real_res = paddle.concat(res, 0)
        real_res = paddle.transpose(x=real_res, perm=[1, 0, 2])
-       last_hidden = fluid.layers.concat(self.hidden_array, 1)
+       last_hidden = paddle.concat(self.hidden_array, 1)
        last_hidden = paddle.reshape(
            last_hidden, shape=[-1, self._num_layers, self._hidden_size]
        )
        last_hidden = paddle.transpose(x=last_hidden, perm=[1, 0, 2])
-       last_cell = fluid.layers.concat(self.cell_array, 1)
+       last_cell = paddle.concat(self.cell_array, 1)
        last_cell = paddle.reshape(
            last_cell, shape=[-1, self._num_layers, self._hidden_size]
        )
...
python/paddle/fluid/tests/unittests/test_imperative_save_load_v2.py
@@ -103,7 +103,7 @@ class SimpleLSTMRNN(fluid.Layer):
            weight_1 = self.weight_1_arr[k]
            bias = self.bias_arr[k]
-           nn = fluid.layers.concat([self._input, pre_hidden], 1)
+           nn = paddle.concat([self._input, pre_hidden], 1)
            gate_input = paddle.matmul(x=nn, y=weight_1)
            gate_input = paddle.add(gate_input, bias)
@@ -127,14 +127,14 @@ class SimpleLSTMRNN(fluid.Layer):
            res.append(
                paddle.reshape(self._input, shape=[1, -1, self._hidden_size])
            )
-       real_res = fluid.layers.concat(res, 0)
+       real_res = paddle.concat(res, 0)
        real_res = paddle.transpose(x=real_res, perm=[1, 0, 2])
-       last_hidden = fluid.layers.concat(self.hidden_array, 1)
+       last_hidden = paddle.concat(self.hidden_array, 1)
        last_hidden = paddle.reshape(
            last_hidden, shape=[-1, self._num_layers, self._hidden_size]
        )
        last_hidden = paddle.transpose(x=last_hidden, perm=[1, 0, 2])
-       last_cell = fluid.layers.concat(self.cell_array, 1)
+       last_cell = paddle.concat(self.cell_array, 1)
        last_cell = paddle.reshape(
            last_cell, shape=[-1, self._num_layers, self._hidden_size]
        )
...
python/paddle/fluid/tests/unittests/test_imperative_star_gan_with_gradient_penalty.py
@@ -314,7 +314,7 @@ class Generator(fluid.dygraph.Layer):
        label_trg_e = paddle.reshape(label_trg, [-1, label_trg.shape[1], 1, 1])
        label_trg_e = paddle.expand(label_trg_e, [-1, -1, shape[2], shape[3]])
-       input1 = fluid.layers.concat([input, label_trg_e], 1)
+       input1 = paddle.concat([input, label_trg_e], 1)
        conv0 = self._conv0(input1)
        res_block = self._res_block(conv0)
...
python/paddle/fluid/tests/unittests/test_layers.py
@@ -1645,8 +1645,8 @@ class TestBook(LayerTest):
                param_attr='shared_w',
            )
-           concat_embed = layers.concat(
-               input=[embed_first, embed_second, embed_third, embed_forth],
+           concat_embed = paddle.concat(
+               [embed_first, embed_second, embed_third, embed_forth],
                axis=1,
            )
@@ -1722,7 +1722,7 @@ class TestBook(LayerTest):
                embs.append(emb)
-           embs = layers.concat(input=embs, axis=1)
+           embs = paddle.concat(embs, axis=1)
            loss = paddle.static.nn.nce(
                input=embs,
                label=words[label_word],
...
python/paddle/fluid/tests/unittests/test_math_op_patch.py
@@ -30,7 +30,7 @@ class TestMathOpPatches(unittest.TestCase):
    def test_add_scalar(self):
        a = paddle.static.data(name="a", shape=[-1, 1])
        b = a + 10
-       ab = fluid.layers.concat(input=[a, b], axis=1)
+       ab = paddle.concat([a, b], axis=1)
        c = ab + 10
        d = ab + a
        # e = a + ab
...
python/paddle/fluid/tests/unittests/test_mix_precision_all_reduce_fuse.py
@@ -51,7 +51,7 @@ def conv_net(use_feed):
    )
    conv_pool_1 = paddle.static.nn.batch_norm(conv_pool_1)
-   conv_pool_1 = fluid.layers.cast(conv_pool_1, np.float32)
+   conv_pool_1 = paddle.cast(conv_pool_1, np.float32)
    conv_pool_2 = fluid.nets.simple_img_conv_pool(
        input=conv_pool_1,
        filter_size=5,
@@ -60,7 +60,7 @@ def conv_net(use_feed):
        pool_stride=2,
        act="relu",
    )
-   hidden = fluid.layers.cast(conv_pool_2, np.float32)
+   hidden = paddle.cast(conv_pool_2, np.float32)
    return loss_net(hidden, label)
...
python/paddle/fluid/tests/unittests/test_nn_functional_hot_op.py
@@ -171,7 +171,7 @@ class TestOneHotOpApi(unittest.TestCase):
        self._run(num_classes)

    def test_api_with_depthTensor(self):
-       num_classes = fluid.layers.assign(input=np.array([10], dtype=np.int32))
+       num_classes = paddle.assign(np.array([10], dtype=np.int32))
        self._run(num_classes)

    def test_api_with_dygraph(self):
...
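paddle.assign accepts a numpy array (as well as a Tensor, list, or scalar) positionally, where fluid.layers.assign required input=. A minimal sketch of the depth-tensor usage these tests exercise (shapes invented):

    import numpy as np
    import paddle

    depth = paddle.assign(np.array([10], dtype=np.int32))  # copies the value into a Tensor
    labels = paddle.to_tensor([3, 7])
    one_hot = paddle.nn.functional.one_hot(labels, depth.item())
    print(one_hot.shape)  # [2, 10]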
python/paddle/fluid/tests/unittests/test_nonzero_api.py
@@ -30,7 +30,7 @@ class TestNonZeroAPI(unittest.TestCase):
        y = paddle.nonzero(x, as_tuple=True)
        self.assertEqual(type(y), tuple)
        self.assertEqual(len(y), 2)
-       z = fluid.layers.concat(list(y), axis=1)
+       z = paddle.concat(list(y), axis=1)
        exe = fluid.Executor(fluid.CPUPlace())
        (res,) = exe.run(
@@ -46,7 +46,7 @@ class TestNonZeroAPI(unittest.TestCase):
        y = paddle.nonzero(x, as_tuple=True)
        self.assertEqual(type(y), tuple)
        self.assertEqual(len(y), 1)
-       z = fluid.layers.concat(list(y), axis=1)
+       z = paddle.concat(list(y), axis=1)
        exe = fluid.Executor(fluid.CPUPlace())
        (res,) = exe.run(
            feed={'x': data}, fetch_list=[z.name], return_numpy=False
...
python/paddle/fluid/tests/unittests/test_one_hot_v2_op.py
@@ -179,7 +179,7 @@ class TestOneHotOpApi(unittest.TestCase):
        self._run(depth)

    def test_api_with_depthTensor(self):
-       depth = fluid.layers.assign(input=np.array([10], dtype=np.int32))
+       depth = paddle.assign(np.array([10], dtype=np.int32))
        self._run(depth)

    def test_api_with_dygraph(self):
...
python/paddle/fluid/tests/unittests/test_optimizer_grad.py
@@ -114,7 +114,7 @@ class SimpleNetWithCond:
            cond_useless = paddle.multiply(param_z, param_z)
            return cond_res

-       cond_i = fluid.layers.assign(np.array([cond_i], dtype='float32'))
+       cond_i = paddle.assign(np.array([cond_i], dtype='float32'))
        sum_cond = paddle.static.nn.cond(cond_i > 1.0, cond_true, cond_false)
        sum_all = paddle.add_n([sum_xy, sub_yz, sum_cond])
        mean_out = paddle.mean(sum_all)
...
python/paddle/fluid/tests/unittests/test_recurrent_op.py
@@ -415,7 +415,7 @@ class RecurrentOpMultipleMemoryTest(RecurrentOpTest1):
            mem1 = paddle.scale(x=h_pre1, scale=1.0)
            mem2 = paddle.scale(x=h_pre2, scale=1.0)
-           out = layers.sums(input=[mem1, x_t, mem2])
+           out = paddle.add_n([mem1, x_t, mem2])
            rnn.update_memory(h_pre1, mem1)
            rnn.update_memory(h_pre2, mem2)
@@ -620,7 +620,7 @@ class RecurrentOpSubBlockTest(RecurrentOpTest1):
                init_value=0.0,
            )
            step_in = rnn.step_input(x)
-           concat_in = layers.concat([step_in, pre_h], 1)
+           concat_in = paddle.concat([step_in, pre_h], 1)
            new_h = paddle.matmul(concat_in, w2)
            new_h = paddle.unsqueeze(new_h, [1])
            new_h, _ = dot_attention(new_h, y)
...
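fluid.layers.sums(input=[...]) becomes paddle.add_n([...]): an element-wise sum over a list of same-shaped tensors, again with no input= keyword. A minimal sketch:

    import paddle

    a = paddle.to_tensor([1.0, 2.0])
    b = paddle.to_tensor([10.0, 20.0])
    c = paddle.to_tensor([100.0, 200.0])
    total = paddle.add_n([a, b, c])  # element-wise sum of the list
    print(total.numpy())             # [111. 222.]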
python/paddle/fluid/tests/unittests/test_reduce_op.py
@@ -1030,8 +1030,8 @@ class TestAllAPI(unittest.TestCase):
        for place in self.places:
            with fluid.dygraph.guard(place):
                np_x = np.random.randint(0, 2, (12, 10)).astype(np.bool_)
-               x = fluid.layers.assign(np_x)
-               x = fluid.layers.cast(x, 'bool')
+               x = paddle.assign(np_x)
+               x = paddle.cast(x, 'bool')
                out1 = paddle.all(x)
                np_out1 = out1.numpy()
@@ -1087,8 +1087,8 @@ class TestAnyAPI(unittest.TestCase):
        for place in self.places:
            with fluid.dygraph.guard(place):
                np_x = np.random.randint(0, 2, (12, 10)).astype(np.bool_)
-               x = fluid.layers.assign(np_x)
-               x = fluid.layers.cast(x, 'bool')
+               x = paddle.assign(np_x)
+               x = paddle.cast(x, 'bool')
                out1 = paddle.any(x)
                np_out1 = out1.numpy()
...
python/paddle/fluid/tests/unittests/test_regularizer.py
@@ -239,7 +239,7 @@ class TestRegularizer(unittest.TestCase):
            for para in param_list:
                para_mul = paddle.square(x=para)
                para_sum.append(paddle.sum(para_mul))
-           avg_cost_l2 += fluid.layers.sums(para_sum) * 0.5
+           avg_cost_l2 += paddle.add_n(para_sum) * 0.5
            optimizer = fluid.optimizer.Adagrad(learning_rate=0.1)
            optimizer.minimize(avg_cost_l2)
...
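The rewritten line accumulates the usual L2 penalty, 0.5 * sum_i ||w_i||^2, over the parameter list. A standalone sketch of the same computation (parameter values invented):

    import paddle

    params = [paddle.to_tensor([1.0, 2.0]), paddle.to_tensor([3.0])]
    para_sum = [paddle.sum(paddle.square(p)) for p in params]
    l2_penalty = paddle.add_n(para_sum) * 0.5  # 0.5 * (1 + 4 + 9) = 7.0
    print(float(l2_penalty))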
python/paddle/fluid/tests/unittests/test_regularizer_api.py
@@ -147,7 +147,7 @@ class TestRegularizer(unittest.TestCase):
            for para in param_list:
                para_mul = paddle.square(x=para)
                para_sum.append(paddle.sum(para_mul))
-           avg_cost_l2 += fluid.layers.sums(para_sum) * 0.5
+           avg_cost_l2 += paddle.add_n(para_sum) * 0.5
            optimizer = fluid.optimizer.Adagrad(learning_rate=0.1)
            optimizer.minimize(avg_cost_l2)
...
python/paddle/fluid/tests/unittests/test_stack_op.py
@@ -167,7 +167,7 @@ class TestStackAPIWithLoDTensorArray(unittest.TestCase):
    def set_program(self):
        self.program = fluid.Program()
        with fluid.program_guard(self.program):
-           input = fluid.layers.assign(self.x)
+           input = paddle.assign(self.x)
            tensor_array = paddle.tensor.create_array(dtype='float32')
            zero = fluid.layers.fill_constant(shape=[1], value=0, dtype="int64")
@@ -205,7 +205,7 @@ class TestTensorStackAPIWithLoDTensorArray(unittest.TestCase):
    def set_program(self):
        self.program = fluid.Program()
        with fluid.program_guard(self.program):
-           input = fluid.layers.assign(self.x)
+           input = paddle.assign(self.x)
            tensor_array = paddle.tensor.create_array(dtype='float32')
            zero = fluid.layers.fill_constant(shape=[1], value=0, dtype="int64")
...
python/paddle/fluid/tests/unittests/test_static_save_load.py
@@ -114,7 +114,7 @@ class SimpleLSTMRNN(fluid.Layer):
            weight_1 = self.weight_1_arr[k]
            bias = self.bias_arr[k]
-           nn = fluid.layers.concat([self._input, pre_hidden], 1)
+           nn = paddle.concat([self._input, pre_hidden], 1)
            gate_input = paddle.matmul(x=nn, y=weight_1)
            gate_input = paddle.add(gate_input, bias)
@@ -138,14 +138,14 @@ class SimpleLSTMRNN(fluid.Layer):
            res.append(
                paddle.reshape(self._input, shape=[1, -1, self._hidden_size])
            )
-       real_res = fluid.layers.concat(res, 0)
+       real_res = paddle.concat(res, 0)
        real_res = paddle.transpose(x=real_res, perm=[1, 0, 2])
-       last_hidden = fluid.layers.concat(self.hidden_array, 1)
+       last_hidden = paddle.concat(self.hidden_array, 1)
        last_hidden = paddle.reshape(
            last_hidden, shape=[-1, self._num_layers, self._hidden_size]
        )
        last_hidden = paddle.transpose(x=last_hidden, perm=[1, 0, 2])
-       last_cell = fluid.layers.concat(self.cell_array, 1)
+       last_cell = paddle.concat(self.cell_array, 1)
        last_cell = paddle.reshape(
            last_cell, shape=[-1, self._num_layers, self._hidden_size]
        )
@@ -216,7 +216,7 @@ class PtbModel(fluid.Layer):
        )
        # NPU 'tok_k' kernel only support `int32` dtype, so cast `input` from `int64` to `int32`.
-       input = fluid.layers.cast(input, "int32")
+       input = paddle.cast(input, "int32")
        x_emb = self.embedding(input)
        x_emb = paddle.reshape(
            x_emb, shape=[-1, self.num_steps, self.hidden_size]
...
python/paddle/fluid/tests/unittests/test_sum_op.py
@@ -453,27 +453,28 @@ class TestRaiseSumError(unittest.TestCase):
 class TestRaiseSumsError(unittest.TestCase):
    def test_errors(self):
        def test_type():
-           fluid.layers.sums([11, 22])
+           paddle.add_n([11, 22])

        self.assertRaises(TypeError, test_type)

        def test_dtype():
            data1 = fluid.data(name="input1", shape=[10], dtype="int8")
            data2 = fluid.data(name="input2", shape=[10], dtype="int8")
-           fluid.layers.sums([data1, data2])
+           paddle.add_n([data1, data2])

        self.assertRaises(TypeError, test_dtype)

        def test_dtype1():
            data1 = fluid.data(name="input1", shape=[10], dtype="int8")
-           fluid.layers.sums(data1)
+           paddle.add_n(data1)

        self.assertRaises(TypeError, test_dtype1)

        def test_out_type():
            data1 = fluid.data(name="input1", shape=[10], dtype="flaot32")
            data2 = fluid.data(name="input2", shape=[10], dtype="float32")
-           fluid.layers.sums([data1, data2], out=[10])
+           out = [10]
+           out = paddle.add_n([data1, data2])

        self.assertRaises(TypeError, test_out_type)
@@ -481,7 +482,7 @@ class TestRaiseSumsError(unittest.TestCase):
            data1 = fluid.data(name="input1", shape=[10], dtype="flaot32")
            data2 = fluid.data(name="input2", shape=[10], dtype="float32")
            out = fluid.data(name="out", shape=[10], dtype="int8")
-           fluid.layers.sums([data1, data2], out=out)
+           out = paddle.add_n([data1, data2])

        self.assertRaises(TypeError, test_out_dtype)
...
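Unlike fluid.layers.sums, paddle.add_n has no out= parameter, which is why the migrated error tests rebind out instead of passing it in. A minimal sketch of the error-checking pattern, assuming static-graph mode as in the original test:

    import unittest
    import paddle

    paddle.enable_static()  # the original tests run in static-graph mode

    class AddNTypeCheck(unittest.TestCase):
        def test_list_of_ints_rejected(self):
            # add_n expects Variables/Tensors; plain Python ints raise TypeError
            self.assertRaises(TypeError, paddle.add_n, [11, 22])

    if __name__ == '__main__':
        unittest.main()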
python/paddle/fluid/tests/unittests/test_switch.py
@@ -36,13 +36,13 @@ class TestSwitch(unittest.TestCase):
            with layers.Switch() as switch:
                with switch.case(paddle.less_than(x, zero_var)):
-                   layers.assign(zero_var, result)
+                   paddle.assign(zero_var, result)
                with switch.case(paddle.less_than(x, one_var)):
-                   layers.assign(one_var, result)
+                   paddle.assign(one_var, result)
                with switch.case(paddle.less_than(x, two_var)):
-                   layers.assign(two_var, result)
+                   paddle.assign(two_var, result)
                with switch.default():
-                   layers.assign(three_var, result)
+                   paddle.assign(three_var, result)

        cpu = core.CPUPlace()
        exe = Executor(cpu)
@@ -79,7 +79,7 @@ class TestSwitchCaseError(unittest.TestCase):
        def test_condition_type():
            with layers.Switch() as switch:
                with switch.case(1):
-                   layers.assign(zero_var, result)
+                   paddle.assign(zero_var, result)

        self.assertRaises(TypeError, test_condition_type)
@@ -87,7 +87,7 @@ class TestSwitchCaseError(unittest.TestCase):
        def test_condition_dtype():
            with layers.Switch() as switch:
                with switch.case(cond):
-                   layers.assign(zero_var, result)
+                   paddle.assign(zero_var, result)

        self.assertRaises(TypeError, test_condition_dtype)
...
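paddle.assign(x, output=...) copies x into an existing tensor, which is how each Switch branch writes its value into result. A minimal sketch:

    import paddle

    result = paddle.zeros([1], dtype='float32')
    value = paddle.to_tensor([2.0])
    paddle.assign(value, output=result)  # copy into the existing `result`
    print(float(result))                 # 2.0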
python/paddle/fluid/tests/unittests/test_sync_batch_norm_op.py
@@ -91,9 +91,9 @@ class TestSyncBatchNormOpTraining(unittest.TestCase):
                is_test=only_forward,
            )
            if core.is_compiled_with_rocm():
-               bn = fluid.layers.cast(bn, 'float32')
+               bn = paddle.cast(bn, 'float32')
            else:
-               bn = fluid.layers.cast(bn, 'float64')
+               bn = paddle.cast(bn, 'float64')
            sigmoid = paddle.nn.functional.sigmoid(bn)
            out = paddle.sum(sigmoid)
            if not sync_bn:
...
python/paddle/fluid/tests/unittests/test_tensor_array_to_tensor.py
@@ -197,7 +197,7 @@ class TestLoDTensorArrayStack(unittest.TestCase):
        self.array = array = paddle.tensor.create_array(dtype='float32')
        idx = fluid.layers.fill_constant(shape=[1], dtype="int64", value=0)
        for i, x in enumerate(self.inputs):
-           x = fluid.layers.assign(x)
+           x = paddle.assign(x)
            paddle.tensor.array_write(x, idx + i, array)
        output, output_index = tensor_array_to_tensor(
            input=array, **self.attrs
@@ -234,9 +234,9 @@ class TestLoDTensorArrayStack(unittest.TestCase):
 class TestTensorArrayToTensorAPI(unittest.TestCase):
    def _test_case(self, inp1, inp2):
-       x0 = fluid.layers.assign(inp1)
+       x0 = paddle.assign(inp1)
        x0.stop_gradient = False
-       x1 = fluid.layers.assign(inp2)
+       x1 = paddle.assign(inp2)
        x1.stop_gradient = False
        i = fluid.layers.fill_constant(shape=[1], dtype="int64", value=0)
        array = paddle.tensor.create_array(dtype='float32')
@@ -278,7 +278,7 @@ class TestTensorArrayToTensorAPI(unittest.TestCase):
        ten = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
        array = paddle.tensor.create_array(dtype='float32')
        inp0 = np.random.rand(2, 3, 4).astype("float32")
-       x0 = fluid.layers.assign(inp0)
+       x0 = paddle.assign(inp0)
        paddle.tensor.array_write(x0, zero, array)

        def cond(i, end, array):
...
python/paddle/fluid/tests/unittests/test_variable.py
@@ -150,7 +150,7 @@ class TestVariable(unittest.TestCase):
                [[19, 20, 21], [22, 23, 24], [25, 26, 27]],
            ]
        ).astype('float32')
-       var = fluid.layers.assign(tensor_array)
+       var = paddle.assign(tensor_array)
        var1 = var[0, 1, 1]
        var2 = var[1:]
        var3 = var[0:1]
...
浏览文件 @
b85af464
...
@@ -169,7 +169,7 @@ class TestWeightDecay(unittest.TestCase):
...
@@ -169,7 +169,7 @@ class TestWeightDecay(unittest.TestCase):
for
params
in
param_list
:
for
params
in
param_list
:
updated_p
=
paddle
.
subtract
(
x
=
params
[
0
],
y
=
params
[
1
])
updated_p
=
paddle
.
subtract
(
x
=
params
[
0
],
y
=
params
[
1
])
fluid
.
layers
.
assign
(
input
=
updated_p
,
output
=
params
[
0
])
paddle
.
assign
(
updated_p
,
output
=
params
[
0
])
if
use_parallel_exe
:
if
use_parallel_exe
:
loss
=
self
.
run_parallel_exe
(
loss
=
self
.
run_parallel_exe
(
...
...
python/paddle/fluid/tests/unittests/test_where_op.py
@@ -350,7 +350,7 @@ class TestWhereDygraphAPI(unittest.TestCase):
        y = paddle.where(x)
        self.assertEqual(type(y), tuple)
        self.assertEqual(len(y), 2)
-       z = fluid.layers.concat(list(y), axis=1)
+       z = paddle.concat(list(y), axis=1)
        exe = fluid.Executor(fluid.CPUPlace())
        (res,) = exe.run(
            feed={'x': data}, fetch_list=[z.name], return_numpy=False
@@ -364,7 +364,7 @@ class TestWhereDygraphAPI(unittest.TestCase):
        y = paddle.where(x)
        self.assertEqual(type(y), tuple)
        self.assertEqual(len(y), 1)
-       z = fluid.layers.concat(list(y), axis=1)
+       z = paddle.concat(list(y), axis=1)
        exe = fluid.Executor(fluid.CPUPlace())
        (res,) = exe.run(
            feed={'x': data}, fetch_list=[z.name], return_numpy=False
...
python/paddle/fluid/tests/unittests/test_while_op.py
@@ -55,7 +55,7 @@ class TestWhileOp(unittest.TestCase):
        with while_op.block():
            d = paddle.tensor.array_read(array=data_array, i=i)
            prev = paddle.tensor.array_read(array=mem_array, i=i)
-           result = layers.sums(input=[d, prev])
+           result = paddle.add_n([d, prev])
            i = paddle.increment(x=i)
            paddle.tensor.array_write(result, i=i, array=mem_array)
@@ -64,7 +64,7 @@ class TestWhileOp(unittest.TestCase):
        with while_op2.block():
            d2 = paddle.tensor.array_read(array=data_array, i=j)
            prev2 = paddle.tensor.array_read(array=mem_array, i=j)
-           result2 = layers.sums(input=[d2, prev2])
+           result2 = paddle.add_n([d2, prev2])
            j = paddle.increment(x=j)
            paddle.tensor.array_write(result2, i=j, array=mem_array)
@@ -117,7 +117,7 @@ class TestWhileOp(unittest.TestCase):
        cond = paddle.less_than(x=i, y=array_len)
        with self.assertRaises(TypeError):
            paddle.static.nn.control_flow.While(cond=cond)
-       cond = layers.cast(cond, dtype='float64')
+       cond = paddle.cast(cond, dtype='float64')
        with self.assertRaises(TypeError):
            paddle.static.nn.control_flow.While(cond=cond)
@@ -149,7 +149,8 @@ class TestIgnoreVarNameInWhile(unittest.TestCase):
        y = paddle.static.data(name='y', shape=[-1, 1, 1], dtype='float32')
        x.desc.set_need_check_feed(False)
        y.desc.set_need_check_feed(False)
-       temp = layers.concat(input=[x, y], axis=-1)
+       temp = paddle.concat([x, y], axis=-1)
        i = layers.fill_constant(shape=[1], value=0, dtype='int32')
        num = layers.fill_constant(shape=[1], value=5, dtype='int32')
...
python/paddle/fluid/tests/unittests/xpu/test_assign_value_op_xpu.py
@@ -28,7 +28,6 @@ from xpu.get_test_cover_info import (
 import paddle
 import paddle.fluid as fluid
 import paddle.fluid.framework as framework
-import paddle.fluid.layers as layers

 paddle.enable_static()
@@ -95,7 +94,7 @@ class TestAssignApi(unittest.TestCase):
        main_program = fluid.Program()
        with fluid.program_guard(main_program):
            x = paddle.tensor.create_tensor(dtype=self.dtype)
-           layers.assign(input=self.value, output=x)
+           paddle.assign(self.value, output=x)

        exe = fluid.Executor(self.place)
        [fetched_x] = exe.run(main_program, feed={}, fetch_list=[x])
...
python/paddle/fluid/tests/unittests/xpu/test_cast_op_xpu.py
@@ -98,7 +98,7 @@ class TestCastOpError(unittest.TestCase):
        x1 = fluid.create_lod_tensor(
            np.array([[-1]]), [[1]], fluid.XPUPlace(0)
        )
-       self.assertRaises(TypeError, fluid.layers.cast, x1, 'int32')
+       self.assertRaises(TypeError, paddle.cast, x1, 'int32')

if __name__ == '__main__':
...
python/paddle/fluid/tests/unittests/xpu/test_one_hot_v2_op_xpu.py
@@ -153,7 +153,7 @@ class TestOneHotOpApi(unittest.TestCase):
        self._run(depth)

    def test_api_with_depthTensor(self):
-       depth = fluid.layers.assign(input=np.array([10], dtype=np.int32))
+       depth = paddle.assign(np.array([10], dtype=np.int32))
        self._run(depth)

    def test_api_with_dygraph(self):
...
python/paddle/fluid/tests/unittests/xpu/test_sum_op_xpu.py
@@ -151,27 +151,28 @@ class TestRaiseSumError(unittest.TestCase):
 class TestRaiseSumsError(unittest.TestCase):
    def test_errors(self):
        def test_type():
-           fluid.layers.sums([11, 22])
+           paddle.add_n([11, 22])

        self.assertRaises(TypeError, test_type)

        def test_dtype():
            data1 = fluid.data(name="input1", shape=[10], dtype="int8")
            data2 = fluid.data(name="input2", shape=[10], dtype="int8")
-           fluid.layers.sums([data1, data2])
+           paddle.add_n([data1, data2])

        self.assertRaises(TypeError, test_dtype)

        def test_dtype1():
            data1 = fluid.data(name="input1", shape=[10], dtype="int8")
-           fluid.layers.sums(data1)
+           paddle.add_n(data1)

        self.assertRaises(TypeError, test_dtype1)

        def test_out_type():
            data1 = fluid.data(name="input1", shape=[10], dtype="flaot32")
            data2 = fluid.data(name="input2", shape=[10], dtype="float32")
-           fluid.layers.sums([data1, data2], out=[10])
+           out = [10]
+           out = paddle.add_n([data1, data2])

        self.assertRaises(TypeError, test_out_type)
@@ -179,7 +180,7 @@ class TestRaiseSumsError(unittest.TestCase):
            data1 = fluid.data(name="input1", shape=[10], dtype="flaot32")
            data2 = fluid.data(name="input2", shape=[10], dtype="float32")
            out = fluid.data(name="out", shape=[10], dtype="int8")
-           fluid.layers.sums([data1, data2], out=out)
+           out = paddle.add_n([data1, data2])

        self.assertRaises(TypeError, test_out_dtype)
...
python/paddle/fluid/tests/unittests/xpu/test_while_op_xpu.py
@@ -54,7 +54,7 @@ class TestWhileOp(unittest.TestCase):
        with while_op.block():
            d = paddle.tensor.array_read(array=data_array, i=i)
            prev = paddle.tensor.array_read(array=mem_array, i=i)
-           result = layers.sums(input=[d, prev])
+           result = paddle.add_n([d, prev])
            i = paddle.increment(x=i)
            paddle.tensor.array_write(result, i=i, array=mem_array)
@@ -63,7 +63,7 @@ class TestWhileOp(unittest.TestCase):
        with while_op2.block():
            d2 = paddle.tensor.array_read(array=data_array, i=j)
            prev2 = paddle.tensor.array_read(array=mem_array, i=j)
-           result2 = layers.sums(input=[d2, prev2])
+           result2 = paddle.add_n([d2, prev2])
            j = paddle.increment(x=j)
            paddle.tensor.array_write(result2, i=j, array=mem_array)
@@ -116,7 +116,7 @@ class TestWhileOp(unittest.TestCase):
        cond = paddle.less_than(x=i, y=array_len)
        with self.assertRaises(TypeError):
            paddle.static.nn.control_flow.While(cond=cond)
-       cond = layers.cast(cond, dtype='float64')
+       cond = paddle.cast(cond, dtype='float64')
        with self.assertRaises(TypeError):
            paddle.static.nn.control_flow.While(cond=cond)
...
python/paddle/fluid/variable_index.py
@@ -473,10 +473,9 @@ def _getitem_impl_(var, item):
                new_slice_item.append(0)
            slice_item = new_slice_item

-           from .layers import assign
            from ..tensor import index_select

-           idx = assign(np.array(slice_item).astype("int32"))
+           idx = paddle.assign(np.array(slice_item).astype("int32"))
            return index_select(var, index=idx, axis=0)

        elif isinstance(slice_item, (Variable, core.eager.Tensor)):
@@ -720,9 +719,7 @@ def _setitem_impl_(var, item, value):
                )
            )

-       from .layers import assign
-
-       idx_tensor = assign(slice_item)
+       idx_tensor = paddle.assign(slice_item)
        return set_value_for_bool_tensor(var, idx_tensor, value)

    elif isinstance(slice_item, Variable):
@@ -862,11 +859,10 @@ def set_value_for_bool_tensor(var, item, value):
    def idx_not_empty(var, item, value):
        from .framework import Variable
-       from .layers import assign
        from ..tensor import gather_nd, scatter_nd_add

        if not isinstance(value, Variable):
-           value = assign(value).cast(var.dtype)
+           value = paddle.assign(value).cast(var.dtype)
        idx = paddle.nonzero(item)
        gather_val = gather_nd(var, idx)
...
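These helpers back tensor indexing with Python list and bool indices. A minimal sketch of what the list-index path above does: paddle.assign materializes the index list, then index_select gathers the rows (shapes invented):

    import numpy as np
    import paddle

    var = paddle.arange(12, dtype='float32').reshape([4, 3])
    slice_item = [0, 2]                                 # e.g. var[[0, 2]]
    idx = paddle.assign(np.array(slice_item).astype("int32"))
    rows = paddle.index_select(var, index=idx, axis=0)
    print(rows.shape)                                   # [2, 3]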
python/paddle/geometric/message_passing/utils.py
@@ -17,7 +17,6 @@ import numpy as np
 import paddle
 from paddle.fluid.data_feeder import check_dtype, convert_dtype
 from paddle.fluid.framework import Variable
-from paddle.fluid.layers.tensor import cast

 def convert_out_size_to_list(out_size):
@@ -53,7 +52,7 @@ def get_out_size_tensor_inputs(inputs, attrs, out_size, op_type):
            '(When type of out_size in' + op_type + ' is Variable.)',
        )
        if convert_dtype(out_size.dtype) == 'int64':
-           out_size = cast(out_size, 'int32')
+           out_size = paddle.cast(out_size, 'int32')
        inputs["Out_size"] = out_size
    else:
        raise TypeError("Out_size only supports Variable or int.")
...
python/paddle/incubate/autograd/composite_rules.py
@@ -18,6 +18,8 @@
 # ops.yaml or legacy_ops.yaml.

+import paddle
+
 from .primitives import *  # noqa: F403
 from .primreg import REGISTER_COMPOSITE, lookup_composite
@@ -93,10 +95,12 @@ def composite_batchnorm(
    y = reshape(scale, stats_shape) * x_hat + reshape(bias, stats_shape)

    # add op assign to detach tensor in void unsafe change outside the rule.
-   batch_mean_ = assign(reshape(batch_mean, run_mean.shape))
-   batch_var_ = assign(reshape(batch_var, run_var.shape))
-   run_mean_ = assign(run_mean)
-   run_var_ = assign(run_var)
+   batch_mean_ = paddle.assign(batch_mean)
+   batch_var_ = paddle.assign(batch_var)
+   run_mean_ = paddle.assign(run_mean)
+   run_var_ = paddle.assign(run_var)
    if trainable_statistics or not is_test:
        return run_mean_, None, batch_mean_, batch_var_, run_var_, y
    else:
...
python/paddle/incubate/autograd/primitives.py
@@ -11,8 +11,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from paddle.fluid.layers.tensor import assign  # noqa: F401
-from paddle.fluid.layers.tensor import cast  # noqa: F401
 from paddle.fluid.layers.tensor import fill_constant  # noqa: F401
 from paddle.tensor import abs  # noqa: F401
 from paddle.tensor import acos  # noqa: F401
@@ -58,6 +56,8 @@ from paddle.tensor import sum  # noqa: F401
 from paddle.tensor import tan  # noqa: F401
 from paddle.tensor import tanh  # noqa: F401
 from paddle.tensor import zeros  # noqa: F401
+from paddle.tensor.creation import assign  # noqa: F401
+from paddle.tensor.manipulation import cast  # noqa: F401

 math_op = [
     'add',
@@ -109,9 +109,9 @@ sub_prim = [
 ]

 others = [
-    'cast',
-    'broadcast_to',
     'assign',
+    'broadcast_to',
+    'cast',
     'fill_constant',
     'reshape',
     'full',
...
python/paddle/incubate/operators/graph_send_recv.py
@@ -14,6 +14,7 @@
 import numpy as np

+import paddle
 import paddle.utils.deprecated as deprecated
 from paddle import _C_ops
 from paddle.fluid.data_feeder import (
@@ -24,7 +25,6 @@ from paddle.fluid.data_feeder import (
 )
 from paddle.fluid.framework import Variable, in_dygraph_mode
 from paddle.fluid.layer_helper import LayerHelper
-from paddle.fluid.layers.tensor import cast

 @deprecated(
@@ -205,7 +205,7 @@ def get_out_size_tensor_inputs(inputs, attrs, out_size, op_type):
            '(When type of out_size in' + op_type + ' is Variable.)',
        )
        if convert_dtype(out_size.dtype) == 'int64':
-           out_size = cast(out_size, 'int32')
+           out_size = paddle.cast(out_size, 'int32')
        inputs["Out_size"] = out_size
    else:
        raise TypeError("Out_size only supports Variable or int.")
python/paddle/incubate/optimizer/modelaverage.py
@@ -14,7 +14,7 @@
 import paddle
 from paddle import _C_ops, _legacy_C_ops
-from paddle.fluid import framework, layers
+from paddle.fluid import framework
 from paddle.fluid.dygraph import base as imperative_base
 from paddle.fluid.framework import Program, in_dygraph_mode
 from paddle.fluid.layer_helper import LayerHelper
@@ -546,14 +546,14 @@ class ModelAverage(Optimizer):
            self._get_accumulator('old_num_accumulates', param)
        )
        # backup param value to grad
-       layers.assign(input=param, output=grad)
+       paddle.assign(param, output=grad)
        # param = (sum_1 + sum_2 + sum_3) / (num_accumulates + old_num_accumulates)
        tmp = paddle.add_n([num_accumulates, old_num_accumulates])
        sum = paddle.add_n([sum_1, sum_2, sum_3])
-       tmp = layers.cast(
+       tmp = paddle.cast(
            x=tmp, dtype='float32' if self._dtype is None else self._dtype
        )
-       sum = layers.cast(
+       sum = paddle.cast(
            x=sum, dtype='float32' if self._dtype is None else self._dtype
        )
        paddle.tensor.ops._elementwise_div(x=sum, y=tmp, out=param)
@@ -561,4 +561,4 @@ class ModelAverage(Optimizer):
    def _add_average_restore_op(self, block, param):
        param = block._clone_variable(param)
        grad = block._clone_variable(self._get_accumulator('restore', param))
-       layers.assign(input=grad, output=param)
+       paddle.assign(grad, output=param)
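The averaging step computes param = (sum_1 + sum_2 + sum_3) / (num_accumulates + old_num_accumulates); the casts keep the division in the optimizer's working dtype. A standalone numeric sketch of the same arithmetic (values invented):

    import paddle

    sum_1 = paddle.to_tensor([3.0])
    sum_2 = paddle.to_tensor([6.0])
    sum_3 = paddle.to_tensor([9.0])
    num_acc = paddle.to_tensor([2], dtype='int64')
    old_num_acc = paddle.to_tensor([4], dtype='int64')

    total = paddle.add_n([sum_1, sum_2, sum_3])                           # 18.0
    count = paddle.cast(paddle.add_n([num_acc, old_num_acc]), 'float32')  # 6.0
    print(float(total / count))                                           # 3.0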
python/paddle/jit/dy2static/convert_operators.py
浏览文件 @
b85af464
...
@@ -17,7 +17,7 @@ import re
 import paddle
 from paddle.fluid.data_feeder import convert_dtype
 from paddle.fluid.framework import Variable, core
-from paddle.fluid.layers import Print, assign, cast, control_flow, fill_constant
+from paddle.fluid.layers import Print, control_flow, fill_constant
 from paddle.fluid.layers.control_flow import while_loop
 from paddle.fluid.layers.utils import copy_mutable_vars
 from paddle.jit.dy2static.utils import (
...
@@ -675,7 +675,7 @@ def convert_shape_compare(left, *args):
 def cast_bool_if_necessary(var):
     assert isinstance(var, Variable)
     if convert_dtype(var.dtype) not in ['bool']:
-        var = cast(var, dtype="bool")
+        var = paddle.cast(var, dtype="bool")
     return var
...
@@ -705,7 +705,7 @@ def convert_var_dtype(var, dtype):
         'int': 'int32',
         'float': 'float32',
     }
-    return cast(var, dtype=cast_map[dtype])
+    return paddle.cast(var, dtype=cast_map[dtype])
 else:
     return eval('{}(var)'.format(dtype))
...
@@ -715,7 +715,7 @@ def convert_assert(cond, message=""):
     A function representation of a Python ``assert`` statement.
     """
     if isinstance(cond, Variable):
-        cond = cast(cond, "bool")
+        cond = paddle.cast(cond, "bool")
     # NOTE: message is not used because Paddle Assert has no corresponding parameter to use.
     from paddle.static.nn.control_flow import Assert
...
@@ -788,7 +788,7 @@ def _run_paddle_pop(array, *args):
     new_array = _slice_tensor_array(array, 0, idx)
     i = idx + 1
     _, new_array = while_loop(cond, body, [i, new_array])
-    assign(input=new_array, output=array)
+    paddle.assign(new_array, output=array)
     return pop_item
...
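Everywhere convert_operators.py used the bare cast imported from paddle.fluid.layers, it now calls paddle.cast with the same arguments. A standalone sketch of the bool cast used by cast_bool_if_necessary (illustrative only):

    import paddle

    x = paddle.to_tensor([0.0, 1.5, -2.0])
    b = paddle.cast(x, dtype='bool')  # nonzero values become True
    print(b.numpy())  # [False  True  True]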
python/paddle/jit/dy2static/utils.py
View file @ b85af464
...
@@ -33,7 +33,6 @@ import paddle
 from paddle.fluid import core, unique_name
 from paddle.fluid.data_feeder import convert_dtype
 from paddle.fluid.layer_helper import LayerHelper
-from paddle.fluid.layers import assign
 from paddle.utils import gast

 __all__ = []
...
@@ -156,7 +155,7 @@ def create_undefined_variable():
     helper = LayerHelper('create_undefined_variable', **locals())
     saved_block_ids = helper.main_program.current_block_idx
     helper.main_program.current_block_idx = 0
-    assign(RETURN_NO_VALUE_MAGIC_NUM, var)
+    paddle.assign(RETURN_NO_VALUE_MAGIC_NUM, var)
     helper.main_program.current_block_idx = saved_block_ids
     return var
...
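The call above relies on paddle.assign accepting plain Python/NumPy values as its first argument, just as the removed fluid assign did. A minimal sketch with a placeholder constant standing in for RETURN_NO_VALUE_MAGIC_NUM (the real magic number is defined elsewhere in dy2static):

    import numpy as np
    import paddle

    MAGIC = np.array([1.0e8], dtype='float32')  # hypothetical placeholder value
    var = paddle.zeros([1])
    paddle.assign(MAGIC, var)  # the numpy input is converted and copied into var
    print(var.numpy())  # [1.e+08]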
python/paddle/nn/clip.py
View file @ b85af464
...
@@ -19,7 +19,7 @@ import paddle
 import paddle.autograd as imperative_base
 from paddle import _C_ops, _legacy_C_ops
 from paddle.common_ops_import import Variable, check_type, default_main_program
-from paddle.fluid import core, framework, layers, unique_name
+from paddle.fluid import core, framework, unique_name
 from paddle.fluid.data_feeder import check_variable_and_dtype
 from paddle.framework import LayerHelper, _non_static_mode, in_dygraph_mode
 from paddle.tensor.layer_function_generator import templatedoc
...
@@ -754,7 +754,7 @@ class ClipGradByGlobalNorm(ClipGradBase):
         global_norm_var = []
         if len(sum_square_list_fp16) > 0:
-            global_norm_var_fp16 = layers.sums(sum_square_list_fp16)
+            global_norm_var_fp16 = paddle.add_n(sum_square_list_fp16)
             if (
                 sum_square_list_fp32
                 or sum_square_list
...
@@ -766,7 +766,7 @@ class ClipGradByGlobalNorm(ClipGradBase):
             else:
                 global_norm_var.append(global_norm_var_fp16)
         if len(sum_square_list_fp32) > 0:
-            global_norm_var_fp32 = layers.sums(sum_square_list_fp32)
+            global_norm_var_fp32 = paddle.add_n(sum_square_list_fp32)
             if sum_dtype == 'float32':
                 global_norm_var.append(global_norm_var_fp32)
             else:
...
@@ -775,11 +775,11 @@ class ClipGradByGlobalNorm(ClipGradBase):
             )
         if len(sum_square_list) > 0:
             # fp64
-            global_norm_var_other_dtype = layers.sums(sum_square_list)
+            global_norm_var_other_dtype = paddle.add_n(sum_square_list)
             global_norm_var.append(global_norm_var_other_dtype)
         global_norm_var = (
-            layers.sums(global_norm_var)
+            paddle.add_n(global_norm_var)
             if len(global_norm_var) > 1
             else global_norm_var[0]
         )
...
@@ -863,7 +863,7 @@ class ClipGradByGlobalNorm(ClipGradBase):
     def _create_operators(self, param, grad):
         group_scale_name = self.group_name + "_scale"
         if group_scale_name not in self.context:
-            group_norm_var = layers.sums(input=self.context[self.group_name])
+            group_norm_var = paddle.add_n(self.context[self.group_name])
             group_norm_var = paddle.sqrt(x=group_norm_var)
             clip_var = self.context[self.group_name + "_clip"]
             group_scale_var = paddle.divide(
...
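layers.sums took an input= keyword and summed a list of tensors elementwise; paddle.add_n does the same with a positional list, which is why the keyword is dropped in _create_operators above. A standalone sketch of the replacement call (illustrative only):

    import paddle

    a = paddle.to_tensor([1.0, 2.0])
    b = paddle.to_tensor([3.0, 4.0])
    s = paddle.add_n([a, b])  # elementwise sum of the list, like layers.sums(input=[a, b])
    print(s.numpy())  # [4. 6.]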
python/paddle/nn/functional/loss.py
View file @ b85af464
...
@@ -3875,7 +3875,7 @@ def soft_margin_loss(input, label, reduction='mean', name=None):
     if not (input.shape == label.shape):
         raise ValueError("input's shape must equal to " "label's shape")
-    label = fluid.layers.cast(label, input.dtype)
+    label = paddle.cast(label, input.dtype)
     out = paddle.log(1 + paddle.exp(-label * input))
     if reduction == 'sum':
...
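The unreduced soft-margin loss computed above is log(1 + exp(-y * x)) for labels y in {-1, +1}, so the label only needs to be cast to the input's dtype before the elementwise formula. A quick standalone check (illustrative only):

    import paddle

    x = paddle.to_tensor([0.5, -1.0])
    y = paddle.cast(paddle.to_tensor([1, -1]), x.dtype)
    out = paddle.log(1 + paddle.exp(-y * x))
    print(out.numpy())  # per-element loss before the 'mean'/'sum' reduction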
python/paddle/nn/layer/rnn.py
View file @ b85af464
...
@@ -323,7 +323,7 @@ def _rnn_static_graph(
         with paddle.fluid.framework.device_guard("cpu"):
             new_cond = paddle.tensor.less_than(start_i, end)
-            paddle.fluid.layers.assign(new_cond, cond)
+            paddle.assign(new_cond, cond)
     out, _ = tensor_array_to_tensor(out_array, axis=0, use_stack=True)
...
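Refreshing a while-loop condition in place is a common static-graph pattern: compute the new boolean and assign it over the old condition tensor so the loop re-evaluates it. A minimal dygraph sketch of the same two calls (illustrative only):

    import paddle

    i = paddle.to_tensor([0])
    end = paddle.to_tensor([10])
    cond = paddle.zeros([1], dtype='bool')
    new_cond = paddle.tensor.less_than(i, end)
    paddle.assign(new_cond, cond)  # overwrite cond in place with the fresh comparison
    print(cond.numpy())  # [ True]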
python/paddle/static/amp/decorator.py
View file @ b85af464
...
@@ -20,7 +20,6 @@ from paddle.fluid import (
     core,
     default_main_program,
     default_startup_program,
-    layers,
     program_guard,
     unique_name,
 )
...
@@ -460,7 +459,7 @@ class OptimizerWithMixedPrecision:
         if self._is_distributed or self._use_pure_fp16:
             with self._train_program._optimized_guard([]):
-                all_infs = layers.concat(found_infs)
+                all_infs = paddle.concat(found_infs)
                 found_inf = paddle.any(all_infs)
         return found_inf
...
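layers.concat maps directly onto paddle.concat: both take a list of tensors and join them along axis 0 by default, after which paddle.any reduces the joined flags to a single boolean. A standalone sketch of the found-inf pattern above (illustrative only; found_infs here is made up):

    import paddle

    found_infs = [paddle.to_tensor([False]), paddle.to_tensor([True])]
    all_infs = paddle.concat(found_infs)  # shape [2] bool tensor
    found_inf = paddle.any(all_infs)      # True if any flag was set
    print(found_inf.numpy())              # True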
python/paddle/static/nn/control_flow.py
View file @ b85af464
...
@@ -28,7 +28,6 @@ from paddle.fluid.framework import Operator, Program, Variable
 # Temporary solution, it will be deleted later
 from paddle.fluid.layers.control_flow import ConditionalBlock, select_input
-from paddle.fluid.layers.tensor import assign, cast
 from paddle.fluid.layers.utils import (
     assert_same_structure,
     copy_mutable_vars,
...
@@ -1117,7 +1116,7 @@ def cond(pred, true_fn=None, false_fn=None, name=None, return_names=None):
             true_output, false_output
         )
-    mask = cast(pred, dtype='int32')
+    mask = paddle.cast(pred, dtype='int32')
     merge_func = (
         lambda name, false_var, true_var: select_input_with_buildin_type(
             [false_var, true_var], mask, name
...
@@ -1158,7 +1157,7 @@ def copy_var_to_parent_block(var, layer_helper):
     parent_block_var = parent_block.create_var(
         dtype=var.dtype, shape=var.shape, type=var.type
     )
-    assign(var, parent_block_var)
+    paddle.assign(var, parent_block_var)
     return parent_block_var
...
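In cond, the boolean predicate is turned into an int32 mask that select_input uses to pick between the false and true branch outputs. A quick sketch of that cast (illustrative only):

    import paddle

    pred = paddle.to_tensor([True])
    mask = paddle.cast(pred, dtype='int32')  # bool -> int32 selector; 1 picks true_var
    print(mask.numpy())  # [1]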