From ee003d54ee46ba2e57fd3dd6e9a810242c8da2a9 Mon Sep 17 00:00:00 2001
From: Luo Tao
Date: Mon, 6 Mar 2017 15:57:44 +0800
Subject: [PATCH] fix image description

---
 recognize_digits/README.en.md  | 4 ++--
 recognize_digits/index.en.html | 4 ++--
 recognize_digits/index.html    | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/recognize_digits/README.en.md b/recognize_digits/README.en.md
index 3357963..3c82ab1 100644
--- a/recognize_digits/README.en.md
+++ b/recognize_digits/README.en.md
@@ -42,7 +42,7 @@ In such a classification problem, we usually use the cross entropy loss function

 $$ crossentropy(label, y) = -\sum_i label_ilog(y_i) $$

-Fig. 2 shows a softmax regression network, with weights in black, and bias in red. +1 indicates bias is 1.
+Fig. 2 shows a softmax regression network, with weights in blue, and bias in red. +1 indicates bias is 1.


@@ -57,7 +57,7 @@ The Softmax regression model described above uses the simplest two-layer neural
 2. After the second hidden layer, we get $ H_2 = \phi(W_2H_1 + b_2) $.
 3. Finally, after output layer, we get $Y=softmax(W_3H_2 + b_3)$, the final classification result vector.

-Fig. 3. is Multilayer Perceptron network, with weights in black, and bias in red. +1 indicates bias is 1.
+Fig. 3. is Multilayer Perceptron network, with weights in blue, and bias in red. +1 indicates bias is 1.


diff --git a/recognize_digits/index.en.html b/recognize_digits/index.en.html
index f918bad..bec542c 100644
--- a/recognize_digits/index.en.html
+++ b/recognize_digits/index.en.html
@@ -83,7 +83,7 @@ In such a classification problem, we usually use the cross entropy loss function

 $$ crossentropy(label, y) = -\sum_i label_ilog(y_i) $$

-Fig. 2 shows a softmax regression network, with weights in black, and bias in red. +1 indicates bias is 1.
+Fig. 2 shows a softmax regression network, with weights in blue, and bias in red. +1 indicates bias is 1.


@@ -98,7 +98,7 @@ The Softmax regression model described above uses the simplest two-layer neural
 2. After the second hidden layer, we get $ H_2 = \phi(W_2H_1 + b_2) $.
 3. Finally, after output layer, we get $Y=softmax(W_3H_2 + b_3)$, the final classification result vector.

-Fig. 3. is Multilayer Perceptron network, with weights in black, and bias in red. +1 indicates bias is 1.
+Fig. 3. is Multilayer Perceptron network, with weights in blue, and bias in red. +1 indicates bias is 1.


diff --git a/recognize_digits/index.html b/recognize_digits/index.html
index 81b42c1..d34d1a7 100644
--- a/recognize_digits/index.html
+++ b/recognize_digits/index.html
@@ -83,7 +83,7 @@ $$ y_i = softmax(\sum_j W_{i,j}x_j + b_i) $$

 $$ crossentropy(label, y) = -\sum_i label_ilog(y_i) $$

-图2为softmax回归的网络图,图中权重用黑线表示、偏置用红线表示、+1代表偏置参数的系数为1。
+图2为softmax回归的网络图,图中权重用蓝线表示、偏置用红线表示、+1代表偏置参数的系数为1。


@@ -99,7 +99,7 @@ Softmax回归模型采用了最简单的两层神经网络,即只有输入层
 3. 最后,再经过输出层,得到的$Y=softmax(W_3H_2 + b_3)$,即为最后的分类结果向量。

-图3为多层感知器的网络结构图,图中权重用黑线表示、偏置用红线表示、+1代表偏置参数的系数为1。
+图3为多层感知器的网络结构图,图中权重用蓝线表示、偏置用红线表示、+1代表偏置参数的系数为1。


-- GitLab
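
For context (not part of the patch itself): the hunks above touch prose around the softmax, cross-entropy, and multilayer-perceptron formulas in the tutorial. A minimal pure-Python sketch of those formulas is below; the function names and the choice of ReLU for the activation $\phi$ are illustrative assumptions, not taken from the patched documents.

```python
import math

def softmax(z):
    # y_i = exp(z_i) / sum_j exp(z_j); subtract the max for numerical stability
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(label, y):
    # crossentropy(label, y) = -sum_i label_i * log(y_i)
    return -sum(l * math.log(p) for l, p in zip(label, y))

def dense(W, x, b):
    # One fully connected layer: W x + b, with W stored as a list of rows.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

def mlp_forward(x, params):
    # The Fig. 3 network: H1 = phi(W1 x + b1); H2 = phi(W2 H1 + b2);
    # Y = softmax(W3 H2 + b3). ReLU stands in for phi here.
    (W1, b1), (W2, b2), (W3, b3) = params
    phi = lambda h: [max(0.0, v) for v in h]
    h1 = phi(dense(W1, x, b1))
    h2 = phi(dense(W2, h1, b2))
    return softmax(dense(W3, h2, b3))
```

The softmax output always sums to 1, so it can be read as a class-probability vector, and the cross-entropy loss compares it against a one-hot `label`.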