Unverified commit 8e5e42f5, authored by S SunAhong1993, committed by GitHub

Update SofmaxWithLoss.md

Parent 19d334ef
@@ -6,10 +6,10 @@
layer {
    name: "loss"
    type: "SoftmaxWithLoss"
    bottom: "logits"
    bottom: "label"
    top: "loss"
    loss_param {
        ignore_label: -1
        normalize: 0
        normalization: FULL
@@ -23,10 +23,10 @@ layer {
paddle.fluid.layers.softmax_with_cross_entropy(
    logits,
    label,
    soft_label=False,
    ignore_index=-100,
    numeric_stable_mode=False,
    return_softmax=False
)
```
@@ -52,16 +52,16 @@ PaddlePaddle: the output is a vector made up of each sample's loss; in addition, if
### Code Example
```
# Caffe example:
# logits input shape: (100, 10)
# label input shape: (100, 1)
# output shape: ()
layer {
    name: "loss"
    type: "SoftmaxWithLoss"
    bottom: "logits"
    bottom: "label"
    top: "loss"
    loss_param {
        ignore_label: -1
        normalize: 0
        normalization: FULL
@@ -73,10 +73,10 @@ layer {
```python
# PaddlePaddle example:
# logits input shape: (100, 10)
# label input shape: (100, 1)
# output shape: (100, 1)
softmaxwithloss = fluid.layers.softmax_with_cross_entropy(logits=logs, label=labels,
                                                          soft_label=False, ignore_index=-100,
                                                          numeric_stable_mode=False,
                                                          return_softmax=False)
......