diff --git a/README.md b/README.md
index 3bba80b618e21850cc3fc9d641392f22b438048f..654363534f3c97d13c938b032d4fdee578dadb06 100644
--- a/README.md
+++ b/README.md
@@ -31,97 +31,101 @@ changes.
- Performance Optimized:
With the help of the underlying PaddlePaddle framework, faster training and
-reduced GPU memory footprint is achieved. Notably, Yolo V3 training is
+reduced GPU memory footprint are achieved. Notably, YOLOv3 training is
much faster compared to other frameworks. Another example is Mask-RCNN
(ResNet50): we managed to fit up to 4 images per GPU (Tesla V100 16GB) during
multi-GPU training.
Supported Architectures:
-| | ResNet | ResNet-vd [1](#vd) | ResNeXt-vd | SENet | MobileNet | DarkNet | VGG |
-|--------------------|:------:|------------------------------:|:----------:|:-----:|:---------:|:-------:|:---:|
-| Faster R-CNN | ✓ | ✓ | x | ✓ | ✗ | ✗ | ✗ |
-| Faster R-CNN + FPN | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |
-| Mask R-CNN | ✓ | ✓ | x | ✓ | ✗ | ✗ | ✗ |
-| Mask R-CNN + FPN | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |
-| Cascade R-CNN | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
-| RetinaNet | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
-| Yolov3 | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ |
-| SSD | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
+|                     | ResNet | ResNet-vd [1](#vd) | ResNeXt-vd | SENet | MobileNet | DarkNet | VGG |
+| ------------------- | :----: | -----------------: | :--------: | :---: | :-------: | :-----: | :-: |
+| Faster R-CNN        |   ✓    |                  ✓ |     ✗      |   ✓   |     ✗     |    ✗    |  ✗  |
+| Faster R-CNN + FPN  |   ✓    |                  ✓ |     ✓      |   ✓   |     ✗     |    ✗    |  ✗  |
+| Mask R-CNN          |   ✓    |                  ✓ |     ✗      |   ✓   |     ✗     |    ✗    |  ✗  |
+| Mask R-CNN + FPN    |   ✓    |                  ✓ |     ✓      |   ✓   |     ✗     |    ✗    |  ✗  |
+| Cascade Faster-RCNN |   ✓    |                  ✓ |     ✓      |   ✗   |     ✗     |    ✗    |  ✗  |
+| Cascade Mask-RCNN   |   ✓    |                  ✗ |     ✗      |   ✓   |     ✗     |    ✗    |  ✗  |
+| RetinaNet           |   ✓    |                  ✗ |     ✗      |   ✗   |     ✗     |    ✗    |  ✗  |
+| YOLOv3              |   ✓    |                  ✗ |     ✗      |   ✗   |     ✓     |    ✓    |  ✗  |
+| SSD                 |   ✗    |                  ✗ |     ✗      |   ✗   |     ✓     |    ✗    |  ✓  |
[1] [ResNet-vd](https://arxiv.org/pdf/1812.01187) models offer much improved accuracy with negligible performance cost.
Advanced Features:
-- [x] **Synchronized Batch Norm**: currently used by Yolo V3.
-- [x] **Group Norm**: pretrained models to be released.
-- [x] **Modulated Deformable Convolution**: pretrained models to be released.
-- [x] **Deformable PSRoI Pooling**: pretrained models to be released.
+- [x] **Synchronized Batch Norm**: currently used by YOLOv3.
+- [x] **Group Norm**
+- [x] **Modulated Deformable Convolution**
+- [x] **Deformable PSRoI Pooling**
**NOTE:** Synchronized batch normalization can only be used on multiple GPU devices; it cannot be used on CPU devices or a single GPU device.
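+
+Since synchronized BN only takes effect across multiple devices, a multi-GPU launch is required. A minimal sketch, assuming a YOLOv3 config that enables it (the config name here is illustrative; use the one from your setup):
+
+```bash
+# synchronized BN requires more than one visible GPU
+export CUDA_VISIBLE_DEVICES=0,1,2,3
+export PYTHONPATH=`pwd`:$PYTHONPATH
+python tools/train.py -c configs/yolov3_darknet.yml
+```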
+## Get Started
-## Model zoo
-
-Pretrained models are available in the PaddlePaddle [PaddleDetection model zoo](docs/MODEL_ZOO.md).
-
+- [Installation guide](docs/INSTALL.md)
+- [Quick start on a small dataset](docs/QUICK_STARTED.md)
+- [Guide to training, evaluation and argument descriptions](docs/GETTING_STARTED.md) (see the example below)
+- [Guide to the preprocessing pipeline and custom datasets](docs/DATA.md)
+- [Introduction to the configuration workflow](docs/CONFIG.md)
+- [Examples for detailed configuration explanation](docs/config_example/)
+- [IPython Notebook demo](demo/mask_rcnn_demo.ipynb)
+- [Transfer learning document](docs/TRANSFER_LEARNING.md)
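+
+As a quick sanity check of the workflow covered by the guides above, here is a minimal sketch using the Mask R-CNN config and its released weights as an example:
+
+```bash
+export PYTHONPATH=`pwd`:$PYTHONPATH
+# train a model from a config file
+python tools/train.py -c configs/mask_rcnn_r50_1x.yml
+# run inference with released weights; the visualized result is saved in `output`
+python tools/infer.py -c configs/mask_rcnn_r50_1x.yml \
+       -o weights=https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_1x.tar \
+       --infer_img=demo/000000570688.jpg
+```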
-## Installation
+## Model Zoo
-Please follow the [installation guide](docs/INSTALL.md).
+- Pretrained models are available in the [PaddleDetection model zoo](docs/MODEL_ZOO.md) (see the usage example below).
+- [Face detection models](configs/face_detection/README.md)
+- [Pretrained models for pedestrian and vehicle detection](contrib/README.md)
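+
+Model-zoo checkpoints can be plugged into evaluation or inference through the `-o weights=` override; a small example sketch, again using the Mask R-CNN checkpoint (this assumes the COCO dataset has been prepared as described in the installation guide):
+
+```bash
+# evaluate a released checkpoint from the model zoo
+python tools/eval.py -c configs/mask_rcnn_r50_1x.yml \
+       -o weights=https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_1x.tar
+```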
## Model Compression
-## Get Started
+- [Quantization-aware training example](slim/quantization)
+- [Pruning compression example](slim/prune)
-For inference, simply run the following command and the visualized result will
-be saved in `output`.
+## Deploy
-```bash
-export PYTHONPATH=`pwd`:$PYTHONPATH
-python tools/infer.py -c configs/mask_rcnn_r50_1x.yml \
- -o weights=https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_1x.tar \
- --infer_img=demo/000000570688.jpg
-```
+- [C++ inference deployment](inference/README.md)
-For detailed training and evaluation workflow, please refer to [GETTING_STARTED.md](docs/GETTING_STARTED.md).
+## Benchmark
-For detailed configuration and parameter description, please refer to [Complete config files](docs/config_example/)
+- [Inference benchmark](docs/BENCHMARK_INFER_cn.md)
-We also recommend users to take a look at the [IPython Notebook demo](demo/mask_rcnn_demo.ipynb)
-Further information can be found in these documentations:
+## Updates
-- [Introduction to the configuration workflow.](docs/CONFIG.md)
-- [Guide to custom dataset and preprocess pipeline.](docs/DATA.md)
+#### 10/2019
+- Face detection models included: BlazeFace, Faceboxes.
+- Enrich COCO models, with box mAP up to 51.9%.
+- Add CACascade RCNN, one of the best single models from the Objects365 2019 Challenge Full Track champion.
+- Add pretrained models for pedestrian and vehicle detection.
+- Support mixed-precision training.
+- Add C++ inference deployment.
+- Add model compression examples.
-## Todo List
+#### 2/9/2019
-Please note this is a work in progress, substantial changes may come in the
-near future.
-Some of the planned features include:
+- Add pretrained models for GroupNorm.
-- [ ] Mixed precision training.
-- [ ] Distributed training.
-- [ ] Inference in 8-bit mode.
-- [ ] User defined operations.
-- [ ] Larger model zoo.
+- Add Cascade-Mask-RCNN+FPN.
+#### 5/8/2019
-## Updates
+- Add a series of models related to Modulated Deformable Convolution.
#### 7/29/2019
- Update Chinese docs for PaddleDetection
- Fix bug in R-CNN models when train and test at the same time
- Add ResNext101-vd + Mask R-CNN + FPN models
-- Add Yolo v3 on VOC models
+- Add YOLOv3 models trained on VOC
#### 7/3/2019
- Initial release of PaddleDetection and detection model zoo
- Models included: Faster R-CNN, Mask R-CNN, Faster R-CNN+FPN, Mask
- R-CNN+FPN, Cascade-Faster-RCNN+FPN, RetinaNet, Yolo v3, and SSD.
+ R-CNN+FPN, Cascade-Faster-RCNN+FPN, RetinaNet, YOLOv3, and SSD.
## Contributing
diff --git a/README_cn.md b/README_cn.md
index a4d5f8675652bd751410da9eab30b868cc884e4a..293b65959ec1cc4ef3878effcf45d1fc641b7355 100644
--- a/README_cn.md
+++ b/README_cn.md
@@ -65,7 +65,7 @@ PaddleDetection的目的是为工业界和学术界提供丰富、易用的目
## 模型库
- [模型库](docs/MODEL_ZOO_cn.md)
-- [人脸检测模型](configs/face_detection/README_cn.md)
+- [人脸检测模型](configs/face_detection/README.md)
- [行人检测和车辆检测预训练模型](contrib/README_cn.md)
diff --git a/docs/CONFIG.md b/docs/CONFIG.md
index ea05b3978dd245c7737948ede09211247a201afc..3cba54eb546cfb648cc7b5bd2e135652a040b309 100644
--- a/docs/CONFIG.md
+++ b/docs/CONFIG.md
@@ -1,3 +1,5 @@
+English | [简体中文](CONFIG_cn.md)
+
# Config Pipeline
## Introduction
diff --git a/docs/DATA.md b/docs/DATA.md
index 0eb686474be7ece55034082b8962948ff7320a86..be9048c0fb59cda8305462006ad53dff6c039631 100644
--- a/docs/DATA.md
+++ b/docs/DATA.md
@@ -1,3 +1,5 @@
+English | [简体中文](DATA_cn.md)
+
# Data Pipeline
## Introduction
diff --git a/docs/GETTING_STARTED.md b/docs/GETTING_STARTED.md
index 0e6505f3da10d72a34ed3984bff1d1db182986b5..82ab2e98e9801821aeb3fc359fc25c54699137b8 100644
--- a/docs/GETTING_STARTED.md
+++ b/docs/GETTING_STARTED.md
@@ -1,3 +1,5 @@
+English | [简体中文](GETTING_STARTED_cn.md)
+
# Getting Started
For setting up the running environment, please refer to [installation
diff --git a/docs/INSTALL.md b/docs/INSTALL.md
index 4af7bc96ef21d895f9ea12caa714724576c8ef25..0761a240b51ea22bcfec5a9d6b70560f1ad17de3 100644
--- a/docs/INSTALL.md
+++ b/docs/INSTALL.md
@@ -1,3 +1,5 @@
+English | [简体中文](INSTALL_cn.md)
+
# Installation
---
diff --git a/docs/MODEL_ZOO.md b/docs/MODEL_ZOO.md
index c63050b0cead2881a2768be2ef32a67916c88257..d6042ada1293ea77a1670871bbff1d6f94f8a163 100644
--- a/docs/MODEL_ZOO.md
+++ b/docs/MODEL_ZOO.md
@@ -1,3 +1,5 @@
+English | [简体中文](MODEL_ZOO_cn.md)
+
# Model Zoo and Benchmark
## Environment
diff --git a/docs/TRANSFER_LEARNING.md b/docs/TRANSFER_LEARNING.md
index ec2d38e600e76a2a1238a1c9d988672a8392e42c..0bc0377acb749ee896050660ba122a3a77ca20b7 100644
--- a/docs/TRANSFER_LEARNING.md
+++ b/docs/TRANSFER_LEARNING.md
@@ -1,3 +1,5 @@
+English | [简体中文](TRANSFER_LEARNING_cn.md)
+
# Transfer Learning
Transfer learning aims at learning new knowledge from existing knowledge. For example, take pretrained model from ImageNet to initialize detection models, or take pretrained model from COCO dataset to initialize train detection models in PascalVOC dataset.