PaddlePaddle / book
Commit ee003d54
Authored Mar 06, 2017 by Luo Tao

fix image description

Parent: 6a85d1f7
Showing 3 changed files with 6 additions and 6 deletions:

recognize_digits/README.en.md   +2 -2
recognize_digits/index.en.html  +2 -2
recognize_digits/index.html     +2 -2
recognize_digits/README.en.md

@@ -42,7 +42,7 @@ In such a classification problem, we usually use the cross entropy loss function
 $$ crossentropy(label, y) = -\sum_i label_ilog(y_i) $$
-Fig. 2 shows a softmax regression network, with weights in black, and bias in red. +1 indicates bias is 1.
+Fig. 2 shows a softmax regression network, with weights in blue, and bias in red. +1 indicates bias is 1.
 <p align="center">
 <img src="image/softmax_regression_en.png" width=400><br/>

@@ -57,7 +57,7 @@ The Softmax regression model described above uses the simplest two-layer neural
 2. After the second hidden layer, we get $ H_2 = \phi(W_2H_1 + b_2) $.
 3. Finally, after output layer, we get $Y=softmax(W_3H_2 + b_3)$, the final classification result vector.
-Fig. 3. is Multilayer Perceptron network, with weights in black, and bias in red. +1 indicates bias is 1.
+Fig. 3. is Multilayer Perceptron network, with weights in blue, and bias in red. +1 indicates bias is 1.
 <p align="center">
 <img src="image/mlp_en.png" width=500><br/>
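For context, the quoted hunks revolve around two formulas: the softmax-regression output $y_i = softmax(\sum_j W_{i,j}x_j + b_i)$ and the cross-entropy loss $crossentropy(label, y) = -\sum_i label_i log(y_i)$. Here is a minimal NumPy sketch of both, assuming MNIST-style shapes (784 inputs, 10 classes); the names `W`, `b`, `x`, and `label` are hypothetical and not taken from the book's code:

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(label, y):
    # crossentropy(label, y) = -sum_i label_i * log(y_i), with label one-hot.
    return -np.sum(label * np.log(y))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(10, 784))  # hypothetical weight matrix
b = np.zeros(10)                            # hypothetical bias vector
x = rng.random(784)                         # one flattened 28x28 image

y = softmax(W @ x + b)          # y_i = softmax(sum_j W_ij x_j + b_i)
label = np.eye(10)[3]           # one-hot vector for class "3"
print(cross_entropy(label, y))  # near log(10) ≈ 2.30 for an untrained model
```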
recognize_digits/index.en.html

@@ -83,7 +83,7 @@ In such a classification problem, we usually use the cross entropy loss function
 $$ crossentropy(label, y) = -\sum_i label_ilog(y_i) $$
-Fig. 2 shows a softmax regression network, with weights in black, and bias in red. +1 indicates bias is 1.
+Fig. 2 shows a softmax regression network, with weights in blue, and bias in red. +1 indicates bias is 1.
 <p align="center">
 <img src="image/softmax_regression_en.png" width=400><br/>

@@ -98,7 +98,7 @@ The Softmax regression model described above uses the simplest two-layer neural
 2. After the second hidden layer, we get $ H_2 = \phi(W_2H_1 + b_2) $.
 3. Finally, after output layer, we get $Y=softmax(W_3H_2 + b_3)$, the final classification result vector.
-Fig. 3. is Multilayer Perceptron network, with weights in black, and bias in red. +1 indicates bias is 1.
+Fig. 3. is Multilayer Perceptron network, with weights in blue, and bias in red. +1 indicates bias is 1.
 <p align="center">
 <img src="image/mlp_en.png" width=500><br/>
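The second hunk in each file walks through the multilayer perceptron's forward pass: $H_2 = \phi(W_2H_1 + b_2)$ after the second hidden layer, then $Y = softmax(W_3H_2 + b_3)$ at the output. Below is a hedged sketch of those steps. The first-layer formula is elided from the hunk context, so its form here, along with the layer sizes and the choice of ReLU for $\phi$, is an assumption rather than the book's configuration:

```python
import numpy as np

def relu(z):
    # One common choice for phi; the quoted hunks do not fix the activation.
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def mlp_forward(x, params):
    (W1, b1), (W2, b2), (W3, b3) = params
    H1 = relu(W1 @ x + b1)        # first hidden layer (elided in the hunk)
    H2 = relu(W2 @ H1 + b2)       # step 2: H_2 = phi(W_2 H_1 + b_2)
    return softmax(W3 @ H2 + b3)  # step 3: Y = softmax(W_3 H_2 + b_3)

# Hypothetical layer sizes: 784 inputs, two hidden layers, 10 classes.
rng = np.random.default_rng(0)
sizes = [784, 128, 64, 10]
params = [(rng.normal(scale=0.01, size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
Y = mlp_forward(rng.random(784), params)
print(Y.shape, round(Y.sum(), 6))  # (10,) with probabilities summing to 1.0
```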
recognize_digits/index.html

@@ -83,7 +83,7 @@ $$ y_i = softmax(\sum_j W_{i,j}x_j + b_i) $$
 $$ crossentropy(label, y) = -\sum_i label_ilog(y_i) $$
-图2为softmax回归的网络图,图中权重用黑线表示、偏置用红线表示、+1代表偏置参数的系数为1。
+图2为softmax回归的网络图,图中权重用蓝线表示、偏置用红线表示、+1代表偏置参数的系数为1。
 <p align="center">
 <img src="image/softmax_regression.png" width=400><br/>

@@ -99,7 +99,7 @@ Softmax回归模型采用了最简单的两层神经网络,即只有输入层
 3. 最后,再经过输出层,得到的$Y=softmax(W_3H_2 + b_3)$,即为最后的分类结果向量。
-图3为多层感知器的网络结构图,图中权重用黑线表示、偏置用红线表示、+1代表偏置参数的系数为1。
+图3为多层感知器的网络结构图,图中权重用蓝线表示、偏置用红线表示、+1代表偏置参数的系数为1。
 <p align="center">
 <img src="image/mlp.png" width=500><br/>