BaiXuePrincess / Paddle — fork of PaddlePaddle / Paddle (in sync with the upstream project)
Unverified commit 00f20313
Authored Dec 12, 2022 by kangguangli; committed via GitHub on Dec 12, 2022.

replace cross_entropy in python/paddle/fluid/tests/unittests/*/*.py except unittests/*.py (#48920)

Parent commit: 16e364d3
Showing 51 changed files with 147 additions and 63 deletions (+147, −63).
python/paddle/fluid/tests/unittests/asp/asp_pruning_base.py  +6 −1
python/paddle/fluid/tests/unittests/asp/test_asp_customized_pruning.py  +6 −1
python/paddle/fluid/tests/unittests/asp/test_asp_optimize_static.py  +6 −1
python/paddle/fluid/tests/unittests/asp/test_asp_pruning_static.py  +6 −1
python/paddle/fluid/tests/unittests/asp/test_asp_save_load.py  +6 −1
python/paddle/fluid/tests/unittests/asp/test_fleet_with_asp_sharding.py  +6 −1
python/paddle/fluid/tests/unittests/asp/test_fleet_with_asp_static.py  +12 −2
python/paddle/fluid/tests/unittests/dygraph_to_static/ifelse_simple_func.py  +9 −3
python/paddle/fluid/tests/unittests/dygraph_to_static/test_mnist.py  +3 −1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_program_translator.py  +6 −2
python/paddle/fluid/tests/unittests/dygraph_to_static/test_resnet.py  +6 −1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_resnet_amp.py  +6 −1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_resnet_pure_fp16.py  +3 −1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_se_resnet.py  +3 −1
python/paddle/fluid/tests/unittests/dygraph_to_static/test_sentiment.py  +12 −4
python/paddle/fluid/tests/unittests/ipu/test_dy2static_fp16_ipu.py  +3 −1
python/paddle/fluid/tests/unittests/ipu/test_dy2static_ipu.py  +3 −1
python/paddle/fluid/tests/unittests/ipu/test_modelruntime_ipu.py  +3 −1
python/paddle/fluid/tests/unittests/ipu/test_print_op_ipu.py  +3 −1
python/paddle/fluid/tests/unittests/ir/test_ir_subgraph_python_interface.py  +3 −1
python/paddle/fluid/tests/unittests/mlu/test_adam_op_mlu.py  +1 −1
python/paddle/fluid/tests/unittests/mlu/test_adamw_op_mlu.py  +1 −1
python/paddle/fluid/tests/unittests/mlu/test_elementwise_max_op_mlu.py  +1 −1
python/paddle/fluid/tests/unittests/mlu/test_elementwise_min_op_mlu.py  +1 −1
python/paddle/fluid/tests/unittests/mlu/test_gelu_op_mlu.py  +1 −1
python/paddle/fluid/tests/unittests/mlu/test_leaky_relu_op_mlu.py  +1 −1
python/paddle/fluid/tests/unittests/mlu/test_relu6_op_mlu.py  +1 −1
python/paddle/fluid/tests/unittests/mlu/test_relu_op_mlu.py  +1 −1
python/paddle/fluid/tests/unittests/mlu/test_tanh_op_mlu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_adam_op_npu.py  +2 −2
python/paddle/fluid/tests/unittests/npu/test_adamw_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_cos_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_elementwise_div_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_elementwise_max_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_elementwise_min_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_elementwise_pow_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_elementwise_sub_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_gelu_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_leaky_relu_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_log_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_mul_op_npu.py  +4 −4
python/paddle/fluid/tests/unittests/npu/test_pow_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_reduce_sum_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_relu6_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_relu_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_rmsprop_op_npu.py  +2 −2
python/paddle/fluid/tests/unittests/npu/test_sgd_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_softmax_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_sqrt_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_square_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/npu/test_tanh_op_npu.py  +1 −1
python/paddle/fluid/tests/unittests/asp/asp_pruning_base.py

@@ -60,7 +60,12 @@ class TestASPHelperPruningBase(unittest.TestCase):
     def run_training_pruning_test(self, get_mask_gen_func, get_mask_check_func):
         with fluid.program_guard(self.main_program, self.startup_program):
             loss = paddle.mean(
-                fluid.layers.cross_entropy(input=self.predict, label=self.label)
+                paddle.nn.functional.cross_entropy(
+                    input=self.predict,
+                    label=self.label,
+                    reduction='none',
+                    use_softmax=False,
+                )
             )
             optimizer = paddle.incubate.asp.decorate(
                 fluid.optimizer.SGD(learning_rate=0.01)
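Every hunk in this commit makes the same substitution: the removed `fluid.layers.cross_entropy(input, label)` consumed inputs that are already probability distributions and returned a per-sample loss, and the replacement `paddle.nn.functional.cross_entropy(..., reduction='none', use_softmax=False)` is parameterized to preserve that behavior. As a rough illustration (a NumPy sketch of the semantics, not Paddle code), the two flags mean "do not apply softmax to the input" and "do not reduce over the batch":

```python
import numpy as np

def cross_entropy_no_softmax(probs, labels):
    # Sketch of cross_entropy(..., reduction='none', use_softmax=False):
    # `probs` is treated as an already-normalized distribution (no softmax
    # is applied), and no reduction is performed, so the result is one
    # -log(p[true_label]) value per sample.
    rows = np.arange(labels.shape[0])
    return -np.log(probs[rows, labels])

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])

per_sample = cross_entropy_no_softmax(probs, labels)  # shape (2,)
loss = per_sample.mean()  # the tests then apply paddle.mean(...) themselves
```

Because the tests wrap the result in `paddle.mean(...)` explicitly, `reduction='none'` keeps the migrated code numerically identical to the old op rather than averaging twice.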
python/paddle/fluid/tests/unittests/asp/test_asp_customized_pruning.py

@@ -269,7 +269,12 @@ class TestASPStaticCustomerizedPruneFunc(unittest.TestCase):
     def test_training_pruning(self):
         with fluid.program_guard(self.main_program, self.startup_program):
             loss = paddle.mean(
-                fluid.layers.cross_entropy(input=self.predict, label=self.label)
+                paddle.nn.functional.cross_entropy(
+                    input=self.predict,
+                    label=self.label,
+                    reduction='none',
+                    use_softmax=False,
+                )
             )
             optimizer = sparsity.decorate(
                 fluid.optimizer.SGD(learning_rate=0.01)
python/paddle/fluid/tests/unittests/asp/test_asp_optimize_static.py

@@ -45,7 +45,12 @@ class TestASPStaticOptimize(unittest.TestCase):
         with fluid.program_guard(self.main_program, self.startup_program):
             self.img, self.label, predict = build_model()
             self.loss = paddle.mean(
-                fluid.layers.cross_entropy(input=predict, label=self.label)
+                paddle.nn.functional.cross_entropy(
+                    input=predict,
+                    label=self.label,
+                    reduction='none',
+                    use_softmax=False,
+                )
             )
             self.optimizer = fluid.optimizer.SGD(learning_rate=0.01)
python/paddle/fluid/tests/unittests/asp/test_asp_pruning_static.py

@@ -65,7 +65,12 @@ class TestASPStaticPruningBase(unittest.TestCase):
     def test_training_pruning(self):
         with fluid.program_guard(self.main_program, self.startup_program):
             loss = paddle.mean(
-                fluid.layers.cross_entropy(input=self.predict, label=self.label)
+                paddle.nn.functional.cross_entropy(
+                    input=self.predict,
+                    label=self.label,
+                    reduction='none',
+                    use_softmax=False,
+                )
             )
             optimizer = paddle.incubate.asp.decorate(
                 fluid.optimizer.SGD(learning_rate=0.01)
python/paddle/fluid/tests/unittests/asp/test_asp_save_load.py

@@ -146,7 +146,12 @@ class TestASPStaticOptimize(unittest.TestCase):
         with fluid.program_guard(self.main_program, self.startup_program):
             self.img, self.label, predict = build_model()
             self.loss = paddle.mean(
-                fluid.layers.cross_entropy(input=predict, label=self.label)
+                paddle.nn.functional.cross_entropy(
+                    input=predict,
+                    label=self.label,
+                    reduction='none',
+                    use_softmax=False,
+                )
             )
             self.optimizer = fluid.optimizer.SGD(learning_rate=0.01)
             self.optimizer = paddle.incubate.asp.decorate(self.optimizer)
python/paddle/fluid/tests/unittests/asp/test_fleet_with_asp_sharding.py

@@ -60,7 +60,12 @@ class TestFleetWithASPSharding(unittest.TestCase):
             fc_3 = fluid.layers.fc(input=fc_2, size=64, act='tanh')
             fc_4 = fluid.layers.fc(input=fc_3, size=64, act='tanh')
             prediction = fluid.layers.fc(input=fc_4, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=input_y)
+            cost = paddle.nn.functional.cross_entropy(
+                input=prediction,
+                label=input_y,
+                reduction='none',
+                use_softmax=False,
+            )
             avg_cost = paddle.mean(x=cost)
             dist_strategy = paddle.distributed.fleet.DistributedStrategy()
python/paddle/fluid/tests/unittests/asp/test_fleet_with_asp_static.py

@@ -49,7 +49,12 @@ class TestFleetWithASPStatic(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=input_x, size=64, act='tanh')
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=input_y)
+            cost = paddle.nn.functional.cross_entropy(
+                input=prediction,
+                label=input_y,
+                reduction='none',
+                use_softmax=False,
+            )
             avg_cost = paddle.mean(x=cost)
             strategy = paddle.distributed.fleet.DistributedStrategy()

@@ -122,7 +127,12 @@ class TestFleetWithASPAMPStatic(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=input_x, size=64, act='tanh')
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=input_y)
+            cost = paddle.nn.functional.cross_entropy(
+                input=prediction,
+                label=input_y,
+                reduction='none',
+                use_softmax=False,
+            )
             avg_cost = paddle.mean(x=cost)
             strategy = paddle.distributed.fleet.DistributedStrategy()
python/paddle/fluid/tests/unittests/dygraph_to_static/ifelse_simple_func.py

@@ -22,7 +22,9 @@ def add_fn(x):
 def loss_fn(x, lable):
-    loss = fluid.layers.cross_entropy(x, lable)
+    loss = paddle.nn.functional.cross_entropy(
+        x, lable, reduction='none', use_softmax=False
+    )
     return loss

@@ -45,7 +47,9 @@ def dyfunc_with_if_else(x_v, label=None):
     x_v = x_v + 1
     # plain if in python
     if label is not None:
-        loss = fluid.layers.cross_entropy(x_v, label)
+        loss = paddle.nn.functional.cross_entropy(
+            x_v, label, reduction='none', use_softmax=False
+        )
         return loss
     return x_v

@@ -302,7 +306,9 @@ def if_with_and_or(x_v, label=None):
     x_v = x_v + 1
     if label is not None:
-        loss = fluid.layers.cross_entropy(x_v, label)
+        loss = paddle.nn.functional.cross_entropy(
+            x_v, label, reduction='none', use_softmax=False
+        )
         return loss
     return x_v
python/paddle/fluid/tests/unittests/dygraph_to_static/test_mnist.py

@@ -107,7 +107,9 @@ class MNIST(fluid.dygraph.Layer):
         x = self.inference(inputs)
         if label is not None:
             acc = paddle.static.accuracy(input=x, label=label)
-            loss = fluid.layers.cross_entropy(x, label)
+            loss = paddle.nn.functional.cross_entropy(
+                x, label, reduction='none', use_softmax=False
+            )
             avg_loss = paddle.mean(loss)
             return x, acc, avg_loss
python/paddle/fluid/tests/unittests/dygraph_to_static/test_program_translator.py

@@ -109,7 +109,9 @@ class StaticCode1:
             def true_fn_1():
                 nonlocal __return_0, __return_1, __return_value_0, loss
-                loss = fluid.layers.cross_entropy(x_v, label)
+                loss = paddle.nn.functional.cross_entropy(
+                    x_v, label, reduction='none', use_softmax=False
+                )
                 __return_0 = _jst.create_bool_as_type(label is not None, True)
                 __return_value_0 = loss
                 return

@@ -178,7 +180,9 @@ class StaticCode2:
             def true_fn_3():
                 nonlocal __return_2, __return_3, __return_value_1, loss
-                loss = fluid.layers.cross_entropy(x_v, label)
+                loss = paddle.nn.functional.cross_entropy(
+                    x_v, label, reduction='none', use_softmax=False
+                )
                 __return_2 = _jst.create_bool_as_type(label is not None, True)
                 __return_value_1 = loss
                 return
python/paddle/fluid/tests/unittests/dygraph_to_static/test_resnet.py

@@ -272,7 +272,12 @@ class ResNetHelper:
                 img, label = data
                 pred = resnet(img)
-                loss = fluid.layers.cross_entropy(input=pred, label=label)
+                loss = paddle.nn.functional.cross_entropy(
+                    input=pred,
+                    label=label,
+                    reduction='none',
+                    use_softmax=False,
+                )
                 avg_loss = paddle.mean(x=loss)
                 acc_top1 = paddle.static.accuracy(input=pred, label=label, k=1)
python/paddle/fluid/tests/unittests/dygraph_to_static/test_resnet_amp.py

@@ -74,7 +74,12 @@ def train(to_static, build_strategy=None):
             # FIXME(Aurelius84): The followding cross_entropy seems to bring out a
             # precision problem, need to figure out the underlying reason.
             # If we remove it, the loss between dygraph and dy2stat is exactly same.
-            loss = fluid.layers.cross_entropy(input=pred, label=label)
+            loss = paddle.nn.functional.cross_entropy(
+                input=pred,
+                label=label,
+                reduction='none',
+                use_softmax=False,
+            )
             avg_loss = paddle.mean(x=pred)
             acc_top1 = paddle.static.accuracy(input=pred, label=label, k=1)
             acc_top5 = paddle.static.accuracy(input=pred, label=label, k=5)
python/paddle/fluid/tests/unittests/dygraph_to_static/test_resnet_pure_fp16.py

@@ -75,7 +75,9 @@ def train(to_static, build_strategy=None):
             level='O2',
         ):
             pred = resnet(img)
-            loss = fluid.layers.cross_entropy(input=pred, label=label)
+            loss = paddle.nn.functional.cross_entropy(
+                input=pred, label=label, reduction='none', use_softmax=False
+            )
             avg_loss = paddle.mean(x=pred)
             acc_top1 = paddle.static.accuracy(input=pred, label=label, k=1)
             acc_top5 = paddle.static.accuracy(input=pred, label=label, k=5)
python/paddle/fluid/tests/unittests/dygraph_to_static/test_se_resnet.py

@@ -340,7 +340,9 @@ class SeResNeXt(fluid.dygraph.Layer):
         out = self.out(y)
         softmax_out = paddle.nn.functional.softmax(out)
-        loss = fluid.layers.cross_entropy(input=softmax_out, label=label)
+        loss = paddle.nn.functional.cross_entropy(
+            input=softmax_out, label=label, reduction='none', use_softmax=False
+        )
         avg_loss = paddle.mean(x=loss)
         acc_top1 = paddle.static.accuracy(input=softmax_out, label=label, k=1)
python/paddle/fluid/tests/unittests/dygraph_to_static/test_sentiment.py

@@ -106,7 +106,9 @@ class CNN(fluid.dygraph.Layer):
         prediction = self._fc_prediction(fc_1)
         prediction = self._fc1_act(prediction)
-        cost = fluid.layers.cross_entropy(input=prediction, label=label)
+        cost = paddle.nn.functional.cross_entropy(
+            input=prediction, label=label, reduction='none', use_softmax=False
+        )
         avg_cost = paddle.mean(x=cost)
         acc = paddle.static.accuracy(input=prediction, label=label)
         return avg_cost, prediction, acc

@@ -149,7 +151,9 @@ class BOW(fluid.dygraph.Layer):
         prediction = self._fc_prediction(fc_2)
         prediction = paddle.nn.functional.softmax(prediction)
-        cost = fluid.layers.cross_entropy(input=prediction, label=label)
+        cost = paddle.nn.functional.cross_entropy(
+            input=prediction, label=label, reduction='none', use_softmax=False
+        )
         avg_cost = paddle.mean(x=cost)
         acc = paddle.static.accuracy(input=prediction, label=label)
         return avg_cost, prediction, acc

@@ -195,7 +199,9 @@ class GRU(fluid.dygraph.Layer):
         fc_2 = paddle.tanh(fc_2)
         prediction = self._fc_prediction(fc_2)
         prediction = paddle.nn.functional.softmax(prediction)
-        cost = fluid.layers.cross_entropy(input=prediction, label=label)
+        cost = paddle.nn.functional.cross_entropy(
+            input=prediction, label=label, reduction='none', use_softmax=False
+        )
         avg_cost = paddle.mean(x=cost)
         acc = paddle.static.accuracy(input=prediction, label=label)
         return avg_cost, prediction, acc

@@ -254,7 +260,9 @@ class BiGRU(fluid.dygraph.Layer):
         prediction = paddle.nn.functional.softmax(prediction)
         # TODO(Aurelius84): Uncomment the following codes when we support return variable-length vars.
         # if label is not None:
-        cost = fluid.layers.cross_entropy(input=prediction, label=label)
+        cost = paddle.nn.functional.cross_entropy(
+            input=prediction, label=label, reduction='none', use_softmax=False
+        )
         avg_cost = paddle.mean(x=cost)
         acc = paddle.static.accuracy(input=prediction, label=label)
         return avg_cost, prediction, acc
python/paddle/fluid/tests/unittests/ipu/test_dy2static_fp16_ipu.py

@@ -34,7 +34,9 @@ class SimpleLayer(paddle.nn.Layer):
         x = paddle.flatten(x, 1, -1)
         if target is not None:
             x = paddle.nn.functional.softmax(x)
-            loss = paddle.fluid.layers.cross_entropy(x, target)
+            loss = paddle.paddle.nn.functional.cross_entropy(
+                x, target, reduction='none', use_softmax=False
+            )
             if self.use_ipu:
                 loss = paddle.incubate.identity_loss(loss, 1)
             else:
python/paddle/fluid/tests/unittests/ipu/test_dy2static_ipu.py

@@ -52,7 +52,9 @@ class SimpleLayer(paddle.nn.Layer):
             if self.loss_op:
                 loss = self.loss_op(x, target)
             else:
-                loss = paddle.fluid.layers.cross_entropy(x, target)
+                loss = paddle.paddle.nn.functional.cross_entropy(
+                    x, target, reduction='none', use_softmax=False
+                )
             if self.use_reduction:
                 loss = paddle.mean(loss)
             if self.use_identity_loss:
python/paddle/fluid/tests/unittests/ipu/test_modelruntime_ipu.py

@@ -33,7 +33,9 @@ class SimpleLayer(paddle.nn.Layer):
         x = paddle.flatten(x, 1, -1)
         if target is not None:
             x = paddle.nn.functional.softmax(x)
-            loss = paddle.fluid.layers.cross_entropy(x, target)
+            loss = paddle.paddle.nn.functional.cross_entropy(
+                x, target, reduction='none', use_softmax=False
+            )
             return x, loss
         return x
python/paddle/fluid/tests/unittests/ipu/test_print_op_ipu.py

@@ -120,7 +120,9 @@ class SimpleLayer(paddle.nn.Layer):
         x = paddle.flatten(x, 1, -1)
         if target is not None:
             x = paddle.nn.functional.softmax(x)
-            loss = paddle.fluid.layers.cross_entropy(x, target)
+            loss = paddle.paddle.nn.functional.cross_entropy(
+                x, target, reduction='none', use_softmax=False
+            )
             loss = paddle.incubate.identity_loss(loss, 1)
             return x, loss
         return x
python/paddle/fluid/tests/unittests/ir/test_ir_subgraph_python_interface.py

@@ -35,7 +35,9 @@ class TestQuantizationSubGraph(unittest.TestCase):
             hidden = data
             for _ in range(num):
                 hidden = fluid.layers.fc(hidden, size=128, act='relu')
-            loss = fluid.layers.cross_entropy(input=hidden, label=label)
+            loss = paddle.nn.functional.cross_entropy(
+                input=hidden, label=label, reduction='none', use_softmax=False
+            )
             loss = paddle.mean(loss)
             return loss
python/paddle/fluid/tests/unittests/mlu/test_adam_op_mlu.py

@@ -263,7 +263,7 @@ class TestNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=z, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             adam = fluid.optimizer.Adam(learning_rate=0.01)
             adam.minimize(loss)
python/paddle/fluid/tests/unittests/mlu/test_adamw_op_mlu.py

@@ -214,7 +214,7 @@ class TestNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=z, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             adam = paddle.optimizer.AdamW(learning_rate=0.01, weight_decay=0.02)
             adam.minimize(loss)
python/paddle/fluid/tests/unittests/mlu/test_elementwise_max_op_mlu.py

@@ -343,7 +343,7 @@ class TestElementwiseMaxNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=c, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/mlu/test_elementwise_min_op_mlu.py

@@ -189,7 +189,7 @@ class TestElementwiseMinOpNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=c, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/mlu/test_gelu_op_mlu.py

@@ -112,7 +112,7 @@ class TestGeluNet(unittest.TestCase):
             fc_1_gelu = paddle.nn.functional.gelu(fc_1)
             prediction = fluid.layers.fc(input=fc_1_gelu, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/mlu/test_leaky_relu_op_mlu.py

@@ -106,7 +106,7 @@ class TestLeakyReluNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=y, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/mlu/test_relu6_op_mlu.py

@@ -125,7 +125,7 @@ class TestRelu6Net(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=z, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/mlu/test_relu_op_mlu.py

@@ -126,7 +126,7 @@ class TestReluNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=z, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/mlu/test_tanh_op_mlu.py

@@ -107,7 +107,7 @@ class TestTanhNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=d, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/npu/test_adam_op_npu.py

@@ -263,7 +263,7 @@ class TestNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=z, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             adam = fluid.optimizer.Adam(learning_rate=0.01)
             adam.minimize(loss)

@@ -348,7 +348,7 @@ class TestNetWithEpsilonTensor(unittest.TestCase):
                 input=fc_1, size=2, param_attr=weight_attr2, act='softmax'
             )
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             beta1_init = 0.9
             beta2_init = 0.999
python/paddle/fluid/tests/unittests/npu/test_adamw_op_npu.py

@@ -214,7 +214,7 @@ class TestNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=z, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             adam = paddle.optimizer.AdamW(learning_rate=0.01, weight_decay=0.02)
             adam.minimize(loss)
python/paddle/fluid/tests/unittests/npu/test_cos_op_npu.py

@@ -104,7 +104,7 @@ class TestCosNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=d, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/npu/test_elementwise_div_op_npu.py

@@ -138,7 +138,7 @@ class TestElementwiseDivNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=g, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/npu/test_elementwise_max_op_npu.py

@@ -302,7 +302,7 @@ class TestElementwiseMaxNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=c, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/npu/test_elementwise_min_op_npu.py

@@ -189,7 +189,7 @@ class TestElementwiseMinOpNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=c, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/npu/test_elementwise_pow_op_npu.py

@@ -313,7 +313,7 @@ class TestElementwisePowNet(unittest.TestCase):
             fc_1 = fluid.layers.fc(input=c, size=128)
             prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-            cost = fluid.layers.cross_entropy(input=prediction, label=label)
+            cost = paddle.nn.functional.cross_entropy(input=prediction, label=label, reduction='none', use_softmax=False)
             loss = paddle.mean(cost)
             sgd = fluid.optimizer.SGD(learning_rate=0.01)
             sgd.minimize(loss)
python/paddle/fluid/tests/unittests/npu/test_elementwise_sub_op_npu.py
...
...
@@ -194,7 +194,7 @@ class TestSubtractNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=z, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_gelu_op_npu.py
...
...
@@ -112,7 +112,7 @@ class TestGeluNet(unittest.TestCase):
fc_1_gelu = paddle.nn.functional.gelu(fc_1)
prediction = fluid.layers.fc(input=fc_1_gelu, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_leaky_relu_op_npu.py
...
...
@@ -106,7 +106,7 @@ class TestLeakyReluNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=y, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_log_op_npu.py
...
...
@@ -104,7 +104,7 @@ class TestLogNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=d, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_mul_op_npu.py
...
...
@@ -247,7 +247,7 @@ class TestMulNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=result, size=8)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
@@ -324,7 +324,7 @@ class TestMulNet3_2(unittest.TestCase):
fc_1 = fluid.layers.fc(input=result, size=8)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
@@ -404,7 +404,7 @@ class TestMulNet3_2_xc2(unittest.TestCase):
fc_1 = fluid.layers.fc(input=result_re, size=8)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
@@ -485,7 +485,7 @@ class TestMulNet4_2(unittest.TestCase):
prediction = fluid.layers.fc(input=result, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_pow_op_npu.py
...
...
@@ -104,7 +104,7 @@ class TestPowNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=z, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_reduce_sum_op_npu.py
...
...
@@ -112,7 +112,7 @@ class TestReduceSumNet(unittest.TestCase):
prediction = fluid.layers.fc(input=z_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_relu6_op_npu.py
...
...
@@ -125,7 +125,7 @@ class TestRelu6Net(unittest.TestCase):
fc_1 = fluid.layers.fc(input=z, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_relu_op_npu.py
...
...
@@ -118,7 +118,7 @@ class TestReluNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=z, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_rmsprop_op_npu.py
...
...
@@ -52,7 +52,7 @@ class TestNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=z, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
rmsprop = fluid.optimizer.RMSProp(learning_rate=0.01)
rmsprop.minimize(loss)
...
...
@@ -115,7 +115,7 @@ class TestCenteredNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=z, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
rmsprop = fluid.optimizer.RMSProp(learning_rate=0.01, centered=True)
rmsprop.minimize(loss)
...
...
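The second hunk above exercises the `centered=True` variant of RMSProp. As a rough NumPy sketch of the difference (the decay rate `rho` and `eps` below are common defaults chosen for illustration, not necessarily Paddle's exact values): the centered form also tracks a running mean of the gradient and subtracts its square from the denominator, estimating the gradient's variance rather than its raw second moment.

```python
import numpy as np

def rmsprop_step(w, g, mean_sq, mean_g, lr=0.01, rho=0.95, eps=1e-6,
                 centered=True):
    # Running average of squared gradients (second moment).
    mean_sq = rho * mean_sq + (1 - rho) * g * g
    # Running average of gradients (first moment, used when centered).
    mean_g = rho * mean_g + (1 - rho) * g
    if centered:
        denom = np.sqrt(mean_sq - mean_g ** 2 + eps)
    else:
        denom = np.sqrt(mean_sq + eps)
    return w - lr * g / denom, mean_sq, mean_g

w, ms, mg = np.array([1.0]), np.array([0.0]), np.array([0.0])
w, ms, mg = rmsprop_step(w, np.array([1.0]), ms, mg)
```

The test network itself is unchanged either way; only the optimizer construction differs between the two hunks.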
python/paddle/fluid/tests/unittests/npu/test_sgd_op_npu.py
...
...
@@ -77,7 +77,7 @@ class TestNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=z, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
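For reference, the SGD optimizer these tests construct applies the plain gradient-descent rule to every parameter when `minimize(loss)` runs. A minimal sketch (NumPy, illustrative values):

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    # Vanilla SGD: move each parameter against its gradient,
    # scaled by the learning rate (0.01 in the tests above).
    return w - lr * grad

w = np.array([1.0, -2.0])
g = np.array([0.5, -0.5])
w = sgd_step(w, g)
```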
python/paddle/fluid/tests/unittests/npu/test_softmax_op_npu.py
...
...
@@ -81,7 +81,7 @@ class TestSoftmaxNet(unittest.TestCase):
# 4 x 2
prob = paddle.nn.functional.softmax(prediction, axis=1)
-cost = fluid.layers.cross_entropy(input=prob, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prob, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
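This softmax test is the one place where the explicit softmax is the operator under test, so `use_softmax=False` keeps it in the graph instead of letting `cross_entropy` fuse it in. The two routes agree mathematically, as this NumPy sketch shows (illustrative logits; the one-step form uses the log-sum-exp identity):

```python
import numpy as np

def softmax(x):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 0.5],
                   [0.1, 1.2]])
labels = np.array([0, 1])

# Two-step route used in the test: explicit softmax, then
# probability-input cross entropy (use_softmax=False).
prob = softmax(logits)
two_step = -np.log(prob[np.arange(2), labels])

# One-step equivalent on raw logits: -log softmax(x)[y] = lse(x) - x[y].
one_step = np.log(np.exp(logits).sum(axis=1)) - logits[np.arange(2), labels]
```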
python/paddle/fluid/tests/unittests/npu/test_sqrt_op_npu.py
...
...
@@ -107,7 +107,7 @@ class TestSqrtNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=d, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_square_op_npu.py
...
...
@@ -104,7 +104,7 @@ class TestSquareNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=d, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...
python/paddle/fluid/tests/unittests/npu/test_tanh_op_npu.py
...
...
@@ -107,7 +107,7 @@ class TestTanhNet(unittest.TestCase):
fc_1 = fluid.layers.fc(input=d, size=128)
prediction = fluid.layers.fc(input=fc_1, size=2, act='softmax')
-cost = fluid.layers.cross_entropy(input=prediction, label=label)
+cost = paddle.nn.functional.cross_entropy(
+    input=prediction, label=label, reduction='none', use_softmax=False
+)
loss = paddle.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(loss)
...
...