Commit 847c1e66 authored by Yi Wang

Use English figures in recognize_digits/README.en.md

Parent 509916f0
......@@ -45,14 +45,8 @@ $$ crossentropy(label, y) = -\sum_i label_i \log(y_i) $$
Fig. 2 shows a softmax regression network, with weights in black and biases in red. +1 indicates that the bias is 1.
<p align="center">
<img src="image/softmax_regression.png" width=400><br/>
<img src="image/softmax_regression_en.png" width=400><br/>
Fig. 2. Softmax regression network architecture<br/>
输入层 -> input layer<br/>
权重W -> weights W<br/>
激活前 -> before activation<br/>
激活函数 -> activation function<br/>
输出层 -> output layer<br/>
偏置b -> bias b<br/>
</p>
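
For readers who want the computation spelled out, here is a minimal NumPy sketch of the softmax regression forward pass and the cross-entropy cost above. It is illustrative only, not the PaddlePaddle configuration used later in this chapter; the 784-dimensional input comes from flattening the 28x28 MNIST images, and the toy batch is an assumption.

```python
import numpy as np

def softmax(z):
    # Subtract the row-wise max before exponentiating for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def softmax_regression(x, W, b):
    # x: (batch, 784) flattened 28x28 images; W: (784, 10); b: (10,)
    return softmax(x @ W + b)

def cross_entropy(y, labels):
    # y: (batch, 10) predicted class probabilities; labels: (batch,) integer digits
    return -np.mean(np.log(y[np.arange(len(labels)), labels] + 1e-12))

# Toy batch: 8 random "images" and labels, zero-initialized parameters.
x = np.random.rand(8, 784)
W, b = np.zeros((784, 10)), np.zeros(10)
print(cross_entropy(softmax_regression(x, W, b), np.random.randint(0, 10, size=8)))
```

With zero-initialized parameters every class receives probability 0.1, so the printed cost is approximately ln(10) ≈ 2.30.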
### Multilayer Perceptron
......@@ -66,12 +60,9 @@ The Softmax regression model described above uses the simplest two-layer neural
Fig. 3 shows a Multilayer Perceptron network, with weights in black and biases in red. +1 indicates that the bias is 1.
<p align="center">
<img src="image/mlp.png" width=500><br/>
<img src="image/mlp_en.png" width=500><br/>
Fig. 3. Multilayer Perceptron network architecture<br/>
输入层X -> input layer X<br/>
隐藏层$H_1$(含激活函数) -> hidden layer $H_1$ (including activation function)<br/>
隐藏层$H_2$(含激活函数) -> hidden layer $H_2$ (including activation function)<br/>
输出层Y -> output layer Y<br/>
</p>
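
As a companion to Fig. 3, the following is a hedged NumPy sketch of the same forward pass: two hidden layers with an activation function, followed by a softmax output layer. The hidden-layer sizes (128 and 64) and the choice of ReLU are illustrative assumptions, not values taken from this chapter's configuration.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def init_mlp(rng, sizes=(784, 128, 64, 10)):
    # One (W, b) pair per layer; small random weights, zero biases.
    return [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, params):
    # Input layer X -> hidden layer H1 -> hidden layer H2 -> output layer Y (Fig. 3).
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return softmax(h2 @ W3 + b3)

params = init_mlp(np.random.default_rng(0))
print(mlp_forward(np.random.rand(4, 784), params).shape)  # (4, 10)
```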
### Convolutional Neural Network
......@@ -79,10 +70,8 @@ Fig. 3. Multilayer Perceptron network architecture<br/>
#### Convolutional Layer
<p align="center">
<img src="image/conv_layer.png" width=500><br/>
<img src="image/conv_layer_en.png" width=500><br/>
Fig. 4. Convolutional layer<br/>
输入数据 -> input data<br/>
卷积输出 -> convolution output<br/>
</p>
The convolutional layer is the core of a Convolutional Neural Network. Its parameters consist of a set of filters, or kernels. In the forward step, each kernel slides horizontally and vertically across the input; at every position we compute the dot product of the kernel with the corresponding input patch, add a bias, and apply an activation function. The result is a two-dimensional activation map. For example, some kernels may recognize corners and others circles; each kernel responds strongly to the corresponding feature in the input.
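
To make the sliding-window arithmetic concrete, below is a naive single-channel NumPy sketch of that forward step. The stride handling and the use of ReLU as the activation are assumptions for illustration; real implementations vectorize this computation and add padding and depth.

```python
import numpy as np

def conv2d_single(x, kernel, bias=0.0, stride=1):
    # x: (H, W) single-channel input; kernel: (kh, kw) filter weights.
    kh, kw = kernel.shape
    out_h = (x.shape[0] - kh) // stride + 1
    out_w = (x.shape[1] - kw) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            # Dot product of the kernel with the input patch, plus bias.
            out[i, j] = np.sum(patch * kernel) + bias
    # Apply an activation (ReLU) to obtain the 2-D activation map.
    return np.maximum(out, 0)

# A 3x3 vertical-edge kernel applied to a random 28x28 image.
image = np.random.rand(28, 28)
kernel = np.array([[1., 0., -1.], [1., 0., -1.], [1., 0., -1.]])
print(conv2d_single(image, kernel).shape)  # (26, 26)
```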
......@@ -92,9 +81,8 @@ Fig. 4 is a dynamic graph of a convolutional layer, where depths are not shown f
#### Pooling Layer
<p align="center">
<img src="image/max_pooling.png" width="400px"><br/>
<img src="image/max_pooling_en.png" width="400px"><br/>
Fig. 5 Pooling layer<br/>
输入数据 -> input data<br/>
</p>
A pooling layer performs downsampling. Its main purpose is to reduce computation by reducing the number of network parameters, and it also helps prevent overfitting to some extent. Usually, a pooling layer is added after a convolutional layer. Pooling comes in several variants, such as max pooling and average pooling. Max pooling partitions the input into rectangular regions and outputs the maximum value of each region (Fig. 5).
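
The max-pooling rule described above is easy to state directly in code. This is a minimal NumPy sketch assuming non-overlapping 2x2 windows, which is only one common choice of window size and stride.

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    # x: (H, W) activation map; each output cell is the max of one window.
    out_h = (x.shape[0] - size) // stride + 1
    out_w = (x.shape[1] - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i * stride:i * stride + size,
                          j * stride:j * stride + size].max()
    return out

# Downsampling a 4x4 map to 2x2: each quadrant contributes its maximum.
print(max_pool2d(np.arange(16).reshape(4, 4)))
```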
......@@ -102,13 +90,8 @@ A Pooling layer performs downsampling. The main functionality of this layer is t
#### LeNet-5 Network
<p align="center">
<img src="image/cnn.png"><br/>
<img src="image/cnn_en.png"><br/>
Fig. 6. LeNet-5 Convolutional Neural Network architecture<br/>
特征图 -> feature map<br/>
卷积层 -> convolutional layer<br/>
降采样层 -> downsampling layer<br/>
全连接层 -> fully connected layer<br/>
输出层(全连接+Softmax激活) -> output layer (fully connected + softmax activation)<br/>
</p>
[LeNet-5](http://yann.lecun.com/exdb/lenet/) is one of the simplest Convolutional Neural Networks. Fig. 6 shows its architecture: a two-dimensional input image passes through two stacks of convolutional and pooling layers, and the result is fed to a fully connected layer and a softmax classifier. The following three properties of convolution enable LeNet-5 to recognize images better than multilayer fully connected perceptrons:
......@@ -355,12 +338,8 @@ python evaluate.py softmax_train.log
### Training Results for Softmax Regression
<p align="center">
<img src="image/softmax_train_log.png" width="400px"><br/>
<img src="image/softmax_train_log_en.png" width="400px"><br/>
Fig. 7 Softmax regression error curve<br/>
训练集 -> training set<br/>
测试集 -> test set<br/>
平均代价 -> average cost<br/>
训练轮数 -> epoch<br/>
</p>
Evaluation results of the models:
......@@ -375,12 +354,8 @@ From the evaluation results, the best pass for softmax regression model is pass-
### Results of Multilayer Perceptron
<p align="center">
<img src="image/mlp_train_log.png" width="400px"><br/>
<img src="image/mlp_train_log_en.png" width="400px"><br/>
Fig. 8. Multilayer Perceptron error curve<br/>
训练集 -> training set<br/>
测试集 -> test set<br/>
平均代价 -> average cost<br/>
训练轮数 -> epoch<br/>
</p>
Evaluation results of the models:
......@@ -395,12 +370,8 @@ From the evaluation results, the final training accuracy is 94.95%. It is signif
### Training results for Convolutional Neural Network
<p align="center">
<img src="image/cnn_train_log.png" width="400px"><br/>
<img src="image/cnn_train_log_en.png" width="400px"><br/>
Fig. 9. Convolutional Neural Network error curve<br/>
训练集 -> training set<br/>
测试集 -> test set<br/>
平均代价 -> average cost<br/>
训练轮数 -> epoch<br/>
</p>
Results of model evaluation: