diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 8ff36e098ba9ea25faec99ef2bf5ced768483975..227315454a5be541c1a3134558d05539b7648ec6 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -1,10 +1,11 @@
+repos:
- repo: https://github.com/pre-commit/mirrors-yapf.git
- sha: v0.16.0
+ sha: v0.18.0
hooks:
- id: yapf
files: \.py$
- repo: https://github.com/pre-commit/pre-commit-hooks
- sha: a11d9314b22d8f8c7556443875b731ef05965464
+ sha: v0.9.4
hooks:
- id: check-merge-conflict
- id: check-symlinks
@@ -15,7 +16,7 @@
- id: trailing-whitespace
files: \.md$
- repo: https://github.com/Lucas-C/pre-commit-hooks
- sha: v1.0.1
+ sha: v1.1.4
hooks:
- id: forbid-crlf
files: \.md$
diff --git a/ssd/README.md b/ssd/README.md
index 9be7b34f99bc6973e9fe623311e9b62d648ebf29..3adb904d640bc986399f7845cdaf2cb684be956d 100644
--- a/ssd/README.md
+++ b/ssd/README.md
@@ -1,23 +1,23 @@
# Single Shot MultiBox Detector (SSD) Object Detection
## Introduction
-Single Shot MultiBox Detector (SSD) is one of the new and enhanced detection algorithms detecting objects in images [ 1 ]. SSD algorithm is characterized by rapid detection and high detection accuracy. PaddlePaddle has an integrated SSD algorithm! This example demonstrates how to use the SSD model in PaddlePaddle for object detection. We first provide a brief introduction to the SSD principle. Then we describe how to train, evaluate and test on the PASCAL VOC data set, and finally on how to use SSD on custom data set.
+Single Shot MultiBox Detector (SSD) is one of the newer and more refined algorithms for detecting objects in images \[[1](#References)\]. The SSD algorithm is characterized by fast detection and high detection accuracy. PaddlePaddle has an integrated SSD algorithm! This example demonstrates how to use the SSD model in PaddlePaddle for object detection. We first provide a brief introduction to the SSD principle, then describe how to train, evaluate and test on the PASCAL VOC data set, and finally how to use SSD on a custom data set.
## SSD Architecture
-SSD uses a convolutional neural network to achieve end-to-end detection. The term "End-to-end" is used because it uses the input as the original image and the output for the test results, without the use of external tools or processes for feature extraction. One popular model of SSD is VGG16 [ 2 ]. SSD differs from VGG16 network model as in following.
+SSD uses a convolutional neural network to achieve end-to-end detection: the network takes the original image as input and directly outputs the detection results, without external tools or separate feature-extraction steps. One popular backbone for SSD is VGG16 \[[2](#References)\]. SSD differs from the VGG16 network model as follows.
1. The final fc6 and fc7 fully connected layers are converted into convolution layers, whose parameters are obtained from the original fc6 and fc7 parameters.
2. Change the parameters of the pool5 layer from 2x2-s2 (kernel size 2x2, stride size to 2) to 3x3-s1-p1 (kernel size is 3x3, stride size is 1, padding size is 1).
-3. The initial layers are composed of conv4\_3、conv7、conv8\_2、conv9\_2、conv10\_2, and pool11 layers. The main purpose of the priorbox layer is to generate a series of rectangular candidates based on the input feature map. A more detailed introduction to SSD can be found in the paper\[[1](#References)\]。
+3. The initial layers are composed of conv4\_3, conv7, conv8\_2, conv9\_2, conv10\_2, and pool11 layers. The main purpose of the priorbox layer is to generate a series of rectangular candidates based on the input feature map. A more detailed introduction to SSD can be found in the paper \[[1](#References)\].
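The default-box generation performed by the priorbox layer can be sketched as follows. This is an illustrative approximation of SSD's scheme, not PaddlePaddle's priorbox layer; the function name, scale formula, and the conv4\_3 numbers in the example are our own assumptions based on the SSD paper.

```python
import itertools
import numpy as np

def prior_boxes(feature_size, image_size, min_size, max_size, aspect_ratios):
    """Generate SSD-style default (prior) boxes for one square feature map.

    Boxes are returned as (cx, cy, w, h), normalized to [0, 1].
    Illustrative sketch only, not PaddlePaddle's priorbox implementation.
    """
    step = 1.0 / feature_size
    boxes = []
    for i, j in itertools.product(range(feature_size), repeat=2):
        cx = (j + 0.5) * step            # center of this feature-map cell
        cy = (i + 0.5) * step
        s = min_size / image_size        # smallest square box
        boxes.append((cx, cy, s, s))
        # intermediate scale: geometric mean of min and max sizes
        s_prime = np.sqrt(s * max_size / image_size)
        boxes.append((cx, cy, s_prime, s_prime))
        for ar in aspect_ratios:         # elongated boxes, both orientations
            boxes.append((cx, cy, s * np.sqrt(ar), s / np.sqrt(ar)))
            boxes.append((cx, cy, s / np.sqrt(ar), s * np.sqrt(ar)))
    return np.array(boxes)

# e.g. a 38x38 feature map of SSD300 with min_size=30, max_size=60
boxes = prior_boxes(feature_size=38, image_size=300, min_size=30,
                    max_size=60, aspect_ratios=[2])
print(boxes.shape)  # (5776, 4): 38 * 38 locations * 4 boxes each
```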
Below is the overall structure of the model (300x300)
-图1. SSD网络结构
+Figure 1. SSD network architecture
-Each box in the figure represents a convolution layer, and the last two rectangles represent the summary of each convolution layer output and the post-processing phase. Specifically, the network will output a set of candidate rectangles in the prediction phase. Each rectangle contains two types of information: the position and the category score. The network produces thousands of predictions at various scales and aspect ratios before performing non-maximum suppression, resulting in a handful of final tags.
+Each box in the figure represents a convolutional layer, and the last two rectangles represent the summary of each convolutional layer's output and the post-processing phase. Specifically, the network outputs a set of candidate rectangles in the prediction phase. Each rectangle contains two types of information: the position and the category score. The network produces thousands of predictions at various scales and aspect ratios before performing non-maximum suppression, resulting in a handful of final detections.
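The non-maximum suppression step mentioned above can be sketched as a minimal greedy NMS in NumPy. This is an illustration of the post-processing idea, not the exact code used in this example; the function name and IoU threshold are our own choices.

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.45):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it.

    boxes: (N, 4) array of (xmin, ymin, xmax, ymax); returns kept indices.
    """
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the best box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_threshold]  # discard heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(non_max_suppression(boxes, scores))  # [0, 2]: overlapping pair collapses
```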
## Example Overview
This example contains the following files:
@@ -36,12 +36,12 @@ This example contains the following files:
data/prepare_voc_data.py | Prepare training PASCAL VOC data list |
-The training phase requires pre-processing of the data, including clipping, sampling, etc. This is done in ```image_util.py``` and ```data_provider.py```.```config/vgg_config.py```. ```data/prepare_voc_data.py``` is used to generate a list of files, including the training set and test set, the need to use the user to download and extract data, the default use of VOC2007 and VOC2012.
+The training phase requires pre-processing of the data, including clipping, sampling, and other operations. This is done in ```image_util.py``` and ```data_provider.py```. ```config/vgg_config.py``` configures the hyper-parameters. ```data/prepare_voc_data.py``` generates the file lists for the training and test sets; the user needs to download and extract the data first, and `VOC2007` and `VOC2012` are used by default.
## PASCAL VOC Data set
### Data Preparation
-First download the data set. VOC2007\[[3](#References)\] contains both training and test data set, and VOC2012\[[4](#References)\] contains only training set. Downloaded data are stored in ```data/VOCdevkit/VOC2007``` and ```data/VOCdevkit/VOC2012```. Next, run ```data/prepare_voc_data.py``` to generate ```trainval.txt``` and ```test.txt```. The relevant function is as following:
+First, download the data set. Dataset `VOC2007`\[[3](#References)\] contains both training and test data sets, and `VOC2012`\[[4](#References)\] contains only the training set. Downloaded data are stored in ```data/VOCdevkit/VOC2007``` and ```data/VOCdevkit/VOC2012```. Next, run ```data/prepare_voc_data.py``` to generate ```trainval.txt``` and ```test.txt```. The relevant function is as follows:
```python
def prepare_filelist(devkit_dir, years, output_dir):
@@ -73,10 +73,10 @@ The first field is the relative path of the image file, and the second field is
### To Use Pre-trained Model
-We also provide a pre-trained model using VGG-16 with good performance. To use the model, download the file http://paddlepaddle.bj.bcebos.com/model_zoo/detection/ssd_model/vgg_model.tar.gz, and place it as ```vgg/vgg_model.tar.gz```。
+We also provide a pre-trained model using VGG-16 with good performance. To use the model, download the file http://paddlepaddle.bj.bcebos.com/model_zoo/detection/ssd_model/vgg_model.tar.gz, and place it as ```vgg/vgg_model.tar.gz```.
### Training
-Next, run ```python train.py``` to train the model. Note that this example only supports the CUDA GPU environment, and can not be trained using only CPU. This is mainly because the training is very slow using CPU only.
+Next, run ```python train.py``` to train the model. Note that this example only supports the CUDA GPU environment and cannot be trained using only a CPU, mainly because CPU-only training is too slow.
```python
paddle.init(use_gpu=True, trainer_count=4)
@@ -92,18 +92,18 @@ train(train_file_list='./data/trainval.txt',
init_model_path='./vgg/vgg_model.tar.gz')
```
-Below is a description about this script:
+Below is a description of this script:
1. Call ```paddle.init``` with 4 GPUs.
2. ```data_provider.Settings()``` passes configuration parameters. With the ```config/vgg_config.py``` setting, 300x300 is a typical configuration balancing accuracy and efficiency. It can be extended to 512x512 by modifying the configuration file.
-3. In ```train()```执 function, ```train_file_list``` specifies the training data list, and ```dev_file_list``` specifies the evaluation data list, and ```init_model_path``` specifies the pre-training model location.
-4. During the training process will print some log information, each training a batch will output the current number of rounds, the current batch cost and mAP (mean Average Precision. Each training pass will be saved a model to the default saved directory ```checkpoints``` (Need to be created in advance).
+3. In the ```train()``` function, ```train_file_list``` specifies the training data list, ```dev_file_list``` specifies the evaluation data list, and ```init_model_path``` specifies the pre-trained model location.
+4. During training, log information is printed: each batch outputs the current pass number, the current batch cost, and the **mean Average Precision** (mAP). After each training pass, a model is saved to the default directory ```checkpoints```, which needs to be created in advance.
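The mAP metric reported in step 4 can be illustrated with a minimal average-precision computation for one class. This is a sketch of the metric itself, not the evaluator used by PaddlePaddle; the function name and example values are our own.

```python
import numpy as np

def average_precision(is_positive, num_gt):
    """AP for one class. Detections are ranked by descending confidence;
    is_positive[k] is 1 if the k-th detection matches a ground-truth box.
    AP averages the precision measured at each true positive over all
    num_gt ground-truth boxes. mAP is the mean of AP over all classes.
    """
    is_positive = np.asarray(is_positive, dtype=float)
    tp = np.cumsum(is_positive)          # true positives so far
    fp = np.cumsum(1.0 - is_positive)    # false positives so far
    precision = tp / (tp + fp)           # precision after each detection
    return float(np.sum(precision * is_positive) / num_gt)

# 3 ranked detections against 2 ground-truth boxes: hit, miss, hit
ap = average_precision([1, 0, 1], num_gt=2)
print(round(ap, 4))  # 0.8333
```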
-The following shows the SDD300x300 in the VOC data set.
+The following shows the performance of SSD300x300 on the VOC data set.
-图2. SSD300x300 mAP收敛曲线
+Figure 2. mAP convergence curve of SSD300x300
@@ -128,7 +128,7 @@ eval(
```
### Object Detection
-Run ```python infer.py``` to perform the object detection using the trained model.
+Run ```python infer.py``` to perform object detection using the trained model.
```python
infer(
@@ -141,7 +141,7 @@ infer(
```
-Here ```eval_file_list``` specified image path list, ```save_path``` specifies directory to save the prediction result.
+Here, ```eval_file_list``` specifies the image path list and ```save_path``` specifies the directory to save the prediction result.
```
@@ -151,7 +151,7 @@ VOCdevkit/VOC2007/JPEGImages/006936.jpg 14 0.372522 187.543615699 133.727034628
...
```
-一共包含4个字段,以tab分割,第一个字段是检测图像路径,第二字段为检测矩形框内类别,第三个字段是置信度,第四个字段是4个坐标值(以空格分割)。
+Each line contains 4 tab-separated fields: the first is the path of the detected image, the second is the category of the detected bounding box, the third is the confidence score, and the fourth is a set of 4 space-separated coordinate values.
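The output format above can be parsed with a few lines of Python. The function name, dictionary keys, and the shortened sample line below are our own, for illustration only.

```python
def parse_detection_line(line):
    """Parse one line of the detection output described above.

    Format (tab-separated): image path, class id, confidence,
    then 4 space-separated coordinates (xmin ymin xmax ymax).
    """
    path, label, score, coords = line.rstrip("\n").split("\t")
    xmin, ymin, xmax, ymax = (float(v) for v in coords.split())
    return {"path": path, "label": int(label), "score": float(score),
            "box": (xmin, ymin, xmax, ymax)}

line = "VOCdevkit/VOC2007/JPEGImages/006936.jpg\t14\t0.372522\t187.5 133.7 345.6 327.4"
det = parse_detection_line(line)
print(det["label"], det["score"])  # 14 0.372522
```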
Below is an example after running ```python visual.py``` to visualize the model results. By default, the visualized images are saved in ```./visual_res```.
@@ -220,7 +220,7 @@ with open(label_path) as flabel:
bbox_labels.append(bbox_sample)
```
-Another important thing is to change the size of the image and the size of the object to change the configuration of the network structure. Use ```config/vgg_config.py``` to create the custom configuration file. For more details, please refer to \[[1](#References)\]。
+Another important step is to adjust the network configuration to match the image size and the object sizes. Use ```config/vgg_config.py``` as a template to create a custom configuration file. For more details, please refer to \[[1](#References)\].
## References
1. Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. [SSD: Single shot multibox detector](https://arxiv.org/abs/1512.02325). European conference on computer vision. Springer, Cham, 2016.
diff --git a/ssd/index.html b/ssd/index.html
index a667eda751b38db68c5d2bbf7855269fcc405a77..3c1b6f3b5c996ee65e8dd41c74a8ea7845d31cf1 100644
--- a/ssd/index.html
+++ b/ssd/index.html
@@ -43,23 +43,23 @@
# Single Shot MultiBox Detector (SSD) Object Detection
## Introduction
-Single Shot MultiBox Detector (SSD) is one of the new and enhanced detection algorithms detecting objects in images [ 1 ]. SSD algorithm is characterized by rapid detection and high detection accuracy. PaddlePaddle has an integrated SSD algorithm! This example demonstrates how to use the SSD model in PaddlePaddle for object detection. We first provide a brief introduction to the SSD principle. Then we describe how to train, evaluate and test on the PASCAL VOC data set, and finally on how to use SSD on custom data set.
+Single Shot MultiBox Detector (SSD) is one of the newer and more refined algorithms for detecting objects in images \[[1](#References)\]. The SSD algorithm is characterized by fast detection and high detection accuracy. PaddlePaddle has an integrated SSD algorithm! This example demonstrates how to use the SSD model in PaddlePaddle for object detection. We first provide a brief introduction to the SSD principle, then describe how to train, evaluate and test on the PASCAL VOC data set, and finally how to use SSD on a custom data set.
## SSD Architecture
-SSD uses a convolutional neural network to achieve end-to-end detection. The term "End-to-end" is used because it uses the input as the original image and the output for the test results, without the use of external tools or processes for feature extraction. One popular model of SSD is VGG16 [ 2 ]. SSD differs from VGG16 network model as in following.
+SSD uses a convolutional neural network to achieve end-to-end detection: the network takes the original image as input and directly outputs the detection results, without external tools or separate feature-extraction steps. One popular backbone for SSD is VGG16 \[[2](#References)\]. SSD differs from the VGG16 network model as follows.
1. The final fc6 and fc7 fully connected layers are converted into convolution layers, whose parameters are obtained from the original fc6 and fc7 parameters.
2. Change the parameters of the pool5 layer from 2x2-s2 (kernel size 2x2, stride size to 2) to 3x3-s1-p1 (kernel size is 3x3, stride size is 1, padding size is 1).
-3. The initial layers are composed of conv4\_3、conv7、conv8\_2、conv9\_2、conv10\_2, and pool11 layers. The main purpose of the priorbox layer is to generate a series of rectangular candidates based on the input feature map. A more detailed introduction to SSD can be found in the paper\[[1](#References)\]。
+3. The initial layers are composed of conv4\_3, conv7, conv8\_2, conv9\_2, conv10\_2, and pool11 layers. The main purpose of the priorbox layer is to generate a series of rectangular candidates based on the input feature map. A more detailed introduction to SSD can be found in the paper \[[1](#References)\].
Below is the overall structure of the model (300x300)
-图1. SSD网络结构
+Figure 1. SSD network architecture
-Each box in the figure represents a convolution layer, and the last two rectangles represent the summary of each convolution layer output and the post-processing phase. Specifically, the network will output a set of candidate rectangles in the prediction phase. Each rectangle contains two types of information: the position and the category score. The network produces thousands of predictions at various scales and aspect ratios before performing non-maximum suppression, resulting in a handful of final tags.
+Each box in the figure represents a convolutional layer, and the last two rectangles represent the summary of each convolutional layer's output and the post-processing phase. Specifically, the network outputs a set of candidate rectangles in the prediction phase. Each rectangle contains two types of information: the position and the category score. The network produces thousands of predictions at various scales and aspect ratios before performing non-maximum suppression, resulting in a handful of final detections.
## Example Overview
This example contains the following files:
@@ -78,12 +78,12 @@ This example contains the following files:
data/prepare_voc_data.py | Prepare training PASCAL VOC data list |
-The training phase requires pre-processing of the data, including clipping, sampling, etc. This is done in ```image_util.py``` and ```data_provider.py```.```config/vgg_config.py```. ```data/prepare_voc_data.py``` is used to generate a list of files, including the training set and test set, the need to use the user to download and extract data, the default use of VOC2007 and VOC2012.
+The training phase requires pre-processing of the data, including clipping, sampling, and other operations. This is done in ```image_util.py``` and ```data_provider.py```. ```config/vgg_config.py``` configures the hyper-parameters. ```data/prepare_voc_data.py``` generates the file lists for the training and test sets; the user needs to download and extract the data first, and `VOC2007` and `VOC2012` are used by default.
## PASCAL VOC Data set
### Data Preparation
-First download the data set. VOC2007\[[3](#References)\] contains both training and test data set, and VOC2012\[[4](#References)\] contains only training set. Downloaded data are stored in ```data/VOCdevkit/VOC2007``` and ```data/VOCdevkit/VOC2012```. Next, run ```data/prepare_voc_data.py``` to generate ```trainval.txt``` and ```test.txt```. The relevant function is as following:
+First, download the data set. Dataset `VOC2007`\[[3](#References)\] contains both training and test data sets, and `VOC2012`\[[4](#References)\] contains only the training set. Downloaded data are stored in ```data/VOCdevkit/VOC2007``` and ```data/VOCdevkit/VOC2012```. Next, run ```data/prepare_voc_data.py``` to generate ```trainval.txt``` and ```test.txt```. The relevant function is as follows:
```python
def prepare_filelist(devkit_dir, years, output_dir):
@@ -115,10 +115,10 @@ The first field is the relative path of the image file, and the second field is
### To Use Pre-trained Model
-We also provide a pre-trained model using VGG-16 with good performance. To use the model, download the file http://paddlepaddle.bj.bcebos.com/model_zoo/detection/ssd_model/vgg_model.tar.gz, and place it as ```vgg/vgg_model.tar.gz```。
+We also provide a pre-trained model using VGG-16 with good performance. To use the model, download the file http://paddlepaddle.bj.bcebos.com/model_zoo/detection/ssd_model/vgg_model.tar.gz, and place it as ```vgg/vgg_model.tar.gz```.
### Training
-Next, run ```python train.py``` to train the model. Note that this example only supports the CUDA GPU environment, and can not be trained using only CPU. This is mainly because the training is very slow using CPU only.
+Next, run ```python train.py``` to train the model. Note that this example only supports the CUDA GPU environment and cannot be trained using only a CPU, mainly because CPU-only training is too slow.
```python
paddle.init(use_gpu=True, trainer_count=4)
@@ -134,18 +134,18 @@ train(train_file_list='./data/trainval.txt',
init_model_path='./vgg/vgg_model.tar.gz')
```
-Below is a description about this script:
+Below is a description of this script:
1. Call ```paddle.init``` with 4 GPUs.
2. ```data_provider.Settings()``` passes configuration parameters. With the ```config/vgg_config.py``` setting, 300x300 is a typical configuration balancing accuracy and efficiency. It can be extended to 512x512 by modifying the configuration file.
-3. In ```train()```执 function, ```train_file_list``` specifies the training data list, and ```dev_file_list``` specifies the evaluation data list, and ```init_model_path``` specifies the pre-training model location.
-4. During the training process will print some log information, each training a batch will output the current number of rounds, the current batch cost and mAP (mean Average Precision. Each training pass will be saved a model to the default saved directory ```checkpoints``` (Need to be created in advance).
+3. In the ```train()``` function, ```train_file_list``` specifies the training data list, ```dev_file_list``` specifies the evaluation data list, and ```init_model_path``` specifies the pre-trained model location.
+4. During training, log information is printed: each batch outputs the current pass number, the current batch cost, and the **mean Average Precision** (mAP). After each training pass, a model is saved to the default directory ```checkpoints```, which needs to be created in advance.
-The following shows the SDD300x300 in the VOC data set.
+The following shows the performance of SSD300x300 on the VOC data set.
-图2. SSD300x300 mAP收敛曲线
+Figure 2. mAP convergence curve of SSD300x300
@@ -170,7 +170,7 @@ eval(
```
### Object Detection
-Run ```python infer.py``` to perform the object detection using the trained model.
+Run ```python infer.py``` to perform object detection using the trained model.
```python
infer(
@@ -183,7 +183,7 @@ infer(
```
-Here ```eval_file_list``` specified image path list, ```save_path``` specifies directory to save the prediction result.
+Here, ```eval_file_list``` specifies the image path list and ```save_path``` specifies the directory to save the prediction result.
```
@@ -193,7 +193,7 @@ VOCdevkit/VOC2007/JPEGImages/006936.jpg 14 0.372522 187.543615699 133.727034628
...
```
-一共包含4个字段,以tab分割,第一个字段是检测图像路径,第二字段为检测矩形框内类别,第三个字段是置信度,第四个字段是4个坐标值(以空格分割)。
+Each line contains 4 tab-separated fields: the first is the path of the detected image, the second is the category of the detected bounding box, the third is the confidence score, and the fourth is a set of 4 space-separated coordinate values.
Below is an example after running ```python visual.py``` to visualize the model results. By default, the visualized images are saved in ```./visual_res```.
@@ -262,7 +262,7 @@ with open(label_path) as flabel:
bbox_labels.append(bbox_sample)
```
-Another important thing is to change the size of the image and the size of the object to change the configuration of the network structure. Use ```config/vgg_config.py``` to create the custom configuration file. For more details, please refer to \[[1](#References)\]。
+Another important step is to adjust the network configuration to match the image size and the object sizes. Use ```config/vgg_config.py``` as a template to create a custom configuration file. For more details, please refer to \[[1](#References)\].
## References
1. Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. [SSD: Single shot multibox detector](https://arxiv.org/abs/1512.02325). European conference on computer vision. Springer, Cham, 2016.