[中文](README_cn.md) | English

Documentation: https://paddlepaddle.github.io/PaddleSlim

# PaddleSlim

PaddleSlim is a toolkit for model compression. It contains a collection of compression strategies, such as pruning, fixed-point quantization, knowledge distillation, hyperparameter searching, and neural architecture search.

PaddleSlim provides compression solutions for computer vision models in tasks such as image classification, object detection, and semantic segmentation. Meanwhile, PaddleSlim keeps exploring advanced compression strategies for language models. Furthermore, benchmarks of compression strategies on several open tasks are available for your reference.

PaddleSlim also provides auxiliary and primitive APIs for developers and researchers to survey, implement, and apply the methods in the latest papers. PaddleSlim supports developers with framework capabilities and technical consulting.

## Features

### Pruning

  - Uniform pruning of convolution layers
  - Sensitivity-based pruning
  - Automated pruning based on an evolutionary search strategy
  - Support pruning of various deep architectures such as VGG, ResNet, and MobileNet.
  - Support a user-defined pruning range, i.e., which layers to prune.
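
Below is a minimal sketch of how a layer might be pruned with the static-graph API. It assumes the PaddleSlim 1.x `paddleslim.prune.Pruner` interface; the toy network and the parameter name `conv1_weights` are for illustration only.

```python
import paddle.fluid as fluid
from paddleslim.prune import Pruner

# Build a tiny static-graph network whose conv layer will be pruned.
main_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(main_program, startup_program):
    image = fluid.layers.data(name="image", shape=[1, 28, 28], dtype="float32")
    conv = fluid.layers.conv2d(
        image, num_filters=32, filter_size=3,
        param_attr=fluid.ParamAttr(name="conv1_weights"))

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_program)

# Uniformly prune 50% of the filters of conv1.
pruner = Pruner()
pruned_program, _, _ = pruner.prune(
    main_program,
    fluid.global_scope(),
    params=["conv1_weights"],
    ratios=[0.5],
    place=place)
```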

### Fixed Point Quantization

  - **Training-aware**
    - Dynamic strategy: during inference, quantization hyperparameters are estimated dynamically from small batches of samples.
    - Static strategy: during inference, models are quantized with fixed hyperparameters estimated from the training data.
    - Support layer-wise and channel-wise quantization.
  - **Post-training**
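
The sketch below shows how quantization-aware training might be set up with the static-graph API. It assumes the PaddleSlim 1.x `paddleslim.quant` interface; the toy network is for illustration, and the default quantization config is used.

```python
import paddle.fluid as fluid
from paddleslim.quant import quant_aware, convert

# Build a tiny static-graph network to quantize.
main_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(main_program, startup_program):
    image = fluid.layers.data(name="image", shape=[1, 28, 28], dtype="float32")
    conv = fluid.layers.conv2d(image, num_filters=32, filter_size=3)
    out = fluid.layers.fc(conv, size=10)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_program)

# Insert fake quant/dequant ops; a training program would pass
# for_test=False and then be fine-tuned as usual.
quant_program = quant_aware(main_program, place, for_test=True)

# After fine-tuning, freeze the fake-quant ops into a deployable program.
int8_program = convert(quant_program, place)
```

In the same 1.x API, post-training quantization is exposed through a separate `quant_post` entry point, which calibrates a saved model on a small sample set instead of retraining it.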

### Knowledge Distillation

  - **Naive knowledge distillation:** transfers dark knowledge by merging the teacher and student models into the same Program.
  - **Paddle large-scale scalable knowledge distillation framework Pantheon:** a universal solution for knowledge distillation that is more flexible than naive knowledge distillation and easier to scale to large-scale applications.

    - Decouples the teacher and student models: they run in different processes on the same or different nodes, and transfer knowledge via TCP/IP ports or local files;
    - Makes it easy to assemble multiple teacher models, each of which can independently work in either online or offline mode;
    - Merges knowledge from different teachers and automatically batches data for the student model;
    - Supports large-scale knowledge prediction by teacher models on multiple devices.
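
Below is a minimal sketch of naive knowledge distillation with the static-graph API. It assumes the PaddleSlim 1.x `paddleslim.dist` interface; the toy teacher/student programs and variable names are for illustration only.

```python
import paddle.fluid as fluid
from paddleslim.dist import merge, l2_loss

# Build a toy student program.
student_program, student_startup = fluid.Program(), fluid.Program()
with fluid.program_guard(student_program, student_startup):
    image = fluid.layers.data(name="image", shape=[1, 28, 28], dtype="float32")
    s_out = fluid.layers.fc(image, size=10,
                            param_attr=fluid.ParamAttr(name="s_fc_w"))

# Build a toy teacher program; unique_name.guard keeps its variable
# names independent of the student's.
teacher_program, teacher_startup = fluid.Program(), fluid.Program()
with fluid.program_guard(teacher_program, teacher_startup):
    with fluid.unique_name.guard():
        image = fluid.layers.data(name="image", shape=[1, 28, 28],
                                  dtype="float32")
        t_out = fluid.layers.fc(image, size=10,
                                param_attr=fluid.ParamAttr(name="t_fc_w"))

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(student_startup)
exe.run(teacher_startup)

# Merge the teacher into the student program; teacher variables are
# renamed with a "teacher_" prefix.
merge(teacher_program, student_program, {"image": "image"}, place)

# Add an L2 distillation loss between the teacher and student outputs.
with fluid.program_guard(student_program, student_startup):
    distill_loss = l2_loss("teacher_" + t_out.name, s_out.name,
                           student_program)
```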

### Neural Architecture Search

  - Neural architecture search based on an evolutionary strategy.
  - Support distributed search.
  - One-shot neural architecture search.
  - Support FLOPs- and latency-constrained search.
  - Support latency estimation on different hardware and platforms.
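
Below is a minimal sketch of how a search can be driven, using the simulated-annealing-based `SANAS` controller and assuming the PaddleSlim 1.x API; the search space, port, and scoring loop are for illustration only.

```python
from paddleslim.nas import SANAS

# Launch a search over a MobileNetV2-like space. The controller also
# runs a server on the given port, so distributed clients can connect.
sanas = SANAS(configs=["MobileNetV2Space"], server_addr=("", 8337))

for step in range(3):  # a real search runs for many more steps
    archs = sanas.next_archs()  # functions that build the candidate network
    # ... build a program with `archs`, train briefly, and evaluate it ...
    score = 0.0  # placeholder: report the measured validation accuracy
    sanas.reward(score)
```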

## Install

Requires:

Paddle >= 1.7.0

```bash
pip install paddleslim -i https://pypi.org/simple
```

## Usage

- [QuickStart](https://paddlepaddle.github.io/PaddleSlim/quick_start/index_en.html): Introduces how to use PaddleSlim through simple examples.
- [Advanced Tutorials](https://paddlepaddle.github.io/PaddleSlim/tutorials/index_en.html): Tutorials on advanced usage of PaddleSlim.
- [Model Zoo](https://paddlepaddle.github.io/PaddleSlim/model_zoo_en.html): Benchmarks and pretrained models.
- [API Documents](https://paddlepaddle.github.io/PaddleSlim/api_en/index_en.html)
- [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection/tree/master/slim): Introduces how to use PaddleSlim in the PaddleDetection library.
- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/slim): Introduces how to use PaddleSlim in the PaddleSeg library.
- [PaddleLite](https://paddlepaddle.github.io/Paddle-Lite/): How to use PaddleLite to deploy models generated by PaddleSlim.

## Performance

### Image Classification

Dataset: ImageNet2012; Model: MobileNetV1

|Method |Accuracy (baseline: 70.91%) |Model Size (baseline: 17.0M)|
|:---:|:---:|:---:|
| Knowledge Distillation (ResNet50) | +1.06% | - |
| Knowledge Distillation (ResNet50) + int8 quantization | +1.10% | -71.76% |
| Pruning (FLOPs -50%) + int8 quantization | -1.71% | -86.47% |


### Object Detection

#### Dataset: Pascal VOC; Model: MobileNet-V1-YOLOv3

| Method | mAP (baseline: 76.2%) | Model Size (baseline: 94MB) |
| :---: | :---: | :---: |
| Knowledge Distillation (ResNet34-YOLOv3) | +2.8% | - |
| Pruning (FLOPs -52.88%) | +1.4% | -67.76% |
| Knowledge Distillation (ResNet34-YOLOv3) + Pruning (FLOPs -69.57%) | +2.6% | -67.00% |


#### Dataset: COCO; Model: MobileNet-V1-YOLOv3

| Method | mAP (baseline: 29.3%) | Model Size |
| :---: | :---: | :---: |
| Knowledge Distillation (ResNet34-YOLOv3) | +2.1% | - |
| Knowledge Distillation (ResNet34-YOLOv3) + Pruning (FLOPs -67.56%) | -0.3% | -66.90% |

### NAS

Dataset: ImageNet2012; Model: MobileNetV2

| Device | Inference Time | Top-1 Accuracy (baseline: 71.90%) |
|:---:|:---:|:---:|
| RK3288 | -23% | +0.07% |
| Android cellphone | -20% | +0.16% |
| iPhone 6s | -17% | +0.32% |