diff --git a/deploy/slim/prune/README.md b/deploy/slim/prune/README.md
deleted file mode 100644
index f28d2be01be6ae896956aa543a9bead85596609c..0000000000000000000000000000000000000000
--- a/deploy/slim/prune/README.md
+++ /dev/null
@@ -1,40 +0,0 @@
-> Please install the develop version of PaddleSlim before running this example.
-
-# Model pruning and compression tutorial
-
-## Overview
-
-This example uses the [pruning APIs](https://paddlepaddle.github.io/PaddleSlim/api/prune_api/) provided by PaddleSlim to compress an OCR model.
-Before reading this example, it is recommended that you first understand:
-
-- [The standard training method of OCR models](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/detection.md)
-- [PaddleSlim documentation](https://paddlepaddle.github.io/PaddleSlim/)
-
-## Install PaddleSlim
-PaddleSlim can be installed by following the steps in the [PaddleSlim documentation](https://paddlepaddle.github.io/PaddleSlim/).
-
-## Sensitivity analysis training
-
-Enter the PaddleOCR root directory and run the following command to perform sensitivity analysis on the model:
-
-```bash
-python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
-```
-
-## Model pruning and fine-tuning
-
-```bash
-python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
-```
-
-## Evaluate and export
-
-After obtaining the model saved by pruning training, we can export it as an inference_model for predictive deployment:
-
-```bash
-python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
-```
diff --git a/deploy/slim/prune/README_ch.md b/deploy/slim/prune/README_ch.md
new file mode 100644
index 0000000000000000000000000000000000000000..3f55197766188a4d2ec1d066ec2a2f722d892839
--- /dev/null
+++ b/deploy/slim/prune/README_ch.md
@@ -0,0 +1,180 @@
+> Please install the develop version of PaddleSlim before running this example.
+
+# Model pruning and compression tutorial
+
+Compression results:
+
+| ID | Task | Model | Compression strategy[3][4] | Accuracy (self-built Chinese dataset) | Inference time[1] (ms) | Total inference time[2] (ms) | Acceleration ratio | Total model size (MB) | Compression ratio | Download link |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 0 | Detection | MobileNetV3_DB | None | 61.7 | 224 | 375 | - | 8.6 | - | |
+|   | Recognition | MobileNetV3_CRNN | None | 62.0 | 9.52 | | | | | |
+| 1 | Detection | SlimTextDet | PACT quant-aware training | 62.1 | 195 | 348 | 8% | 2.8 | 67.82% | |
+|   | Recognition | SlimTextRec | PACT quant-aware training | 61.48 | 8.6 | | | | | |
+| 2 | Detection | SlimTextDet_quat_pruning | Pruning + PACT quant-aware training | 60.86 | 142 | 288 | 30% | 2.8 | 67.82% | |
+|   | Recognition | SlimTextRec | PACT quant-aware training | 61.48 | 8.6 | | | | | |
+| 3 | Detection | SlimTextDet_pruning | Pruning | 61.57 | 138 | 295 | 27% | 2.9 | 66.28% | |
+|   | Recognition | SlimTextRec | PACT quant-aware training | 61.48 | 8.6 | | | | | |
+
+## Overview
+
+A more complex model tends to perform better, but it also carries a certain amount of redundancy. Model pruning removes redundant sub-networks from the model, reducing its computational complexity and improving its inference performance.
+
+This example uses the [pruning APIs](https://paddlepaddle.github.io/PaddleSlim/api/prune_api/) provided by PaddleSlim to compress an OCR model.
+
+Before reading this example, it is recommended that you first understand:
+
+- [The standard training method of OCR models](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/detection.md)
+- [PaddleSlim documentation](https://paddlepaddle.github.io/PaddleSlim/)
+
+## Install PaddleSlim
+
+```bash
+git clone https://github.com/PaddlePaddle/PaddleSlim.git
+cd PaddleSlim
+python setup.py install
+```
+
+## Download the pretrained model
+
+[Download link of the detection pretrained model]()
+
+## Sensitivity analysis training
+
+After the pretrained model is loaded, sensitivity analysis is performed on each layer of the network to measure how redundant each layer is, which in turn determines its pruning ratio. For details of sensitivity analysis, see: [Sensitivity analysis](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/zh_cn/tutorials/image_classification_sensitivity_analysis_tutorial.md)
+
+Enter the PaddleOCR root directory and run the following command to perform sensitivity analysis on the model:
+
+```bash
+python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
+```
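+
+For intuition, the following is a minimal sketch of what such a sensitivity analysis amounts to with PaddleSlim's static-graph `sensitivity` API. The toy network, the parameter name `conv1_weights`, the ratio list, and the random-data metric are illustrative assumptions, not code taken from `sensitivity_anal.py`:
+
+```python
+import numpy as np
+import paddle.fluid as fluid
+from paddleslim.prune import sensitivity
+
+# Tiny static-graph network standing in for the real DB detector.
+main_prog, startup_prog = fluid.Program(), fluid.Program()
+with fluid.program_guard(main_prog, startup_prog):
+    img = fluid.data(name="img", shape=[None, 3, 32, 32], dtype="float32")
+    conv = fluid.layers.conv2d(img, num_filters=8, filter_size=3,
+                               param_attr=fluid.ParamAttr(name="conv1_weights"))
+    out = fluid.layers.fc(fluid.layers.pool2d(conv, global_pooling=True), size=2)
+
+place = fluid.CPUPlace()
+exe = fluid.Executor(place)
+exe.run(startup_prog)
+eval_prog = main_prog.clone(for_test=True)
+
+def eval_func(program):
+    # Stand-in metric on random data; the real script evaluates detection
+    # quality on the validation set.
+    feed = {"img": np.random.random((1, 3, 32, 32)).astype("float32")}
+    return float(np.mean(exe.run(program, feed=feed, fetch_list=[out.name])[0]))
+
+# Prune each listed parameter at several ratios, re-evaluate each time, and
+# record the metric loss per (layer, ratio) pair into sensitivities.data.
+sens = sensitivity(eval_prog, place, ["conv1_weights"], eval_func,
+                   sensitivities_file="sensitivities.data",
+                   pruned_ratios=[0.1, 0.2, 0.3])
+print(sens)
+```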
+
+## Model pruning and fine-tuning
+
+During pruning, the pruning ratio of each layer is determined from the sensitivity file produced in the previous step. In the implementation, to retain as many of the low-level features extracted from the image as possible, we skip the 4 convolutional layers of the backbone closest to the input. Likewise, to reduce the performance loss caused by pruning, we use the sensitivity table obtained above to pick out [network layers](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/prune/pruning_and_finetune.py#L41) that have little redundancy but are sensitive to pruning, and skip them in the subsequent pruning. The fine-tuning after pruning follows the original training strategy of the OCR detection model.
+
+```bash
+python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
+```
+
+## Export the model
+
+After obtaining the model saved by pruning training, we can export it as an inference_model for predictive deployment:
+
+```bash
+python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
+```
diff --git a/deploy/slim/prune/README_en.md b/deploy/slim/prune/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..d345e24c71fa5a1a017362656502b44e1a082688
--- /dev/null
+++ b/deploy/slim/prune/README_en.md
@@ -0,0 +1,183 @@
+> The develop version of PaddleSlim should be installed before running this example.
+
+# Model compression tutorial (Pruning)
+
+Compression results:
+
+| ID | Task | Model | Compression strategy[3][4] | Accuracy (self-built Chinese dataset) | Inference time[1] (ms) | Total inference time[2] (ms) | Acceleration ratio | Total model size (MB) | Compression ratio | Download link |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 0 | Detection | MobileNetV3_DB | None | 61.7 | 224 | 375 | - | 8.6 | - | |
+|   | Recognition | MobileNetV3_CRNN | None | 62.0 | 9.52 | | | | | |
+| 1 | Detection | SlimTextDet | PACT quant-aware training | 62.1 | 195 | 348 | 8% | 2.8 | 67.82% | |
+|   | Recognition | SlimTextRec | PACT quant-aware training | 61.48 | 8.6 | | | | | |
+| 2 | Detection | SlimTextDet_quat_pruning | Pruning + PACT quant-aware training | 60.86 | 142 | 288 | 30% | 2.8 | 67.82% | |
+|   | Recognition | SlimTextRec | PACT quant-aware training | 61.48 | 8.6 | | | | | |
+| 3 | Detection | SlimTextDet_pruning | Pruning | 61.57 | 138 | 295 | 27% | 2.9 | 66.28% | |
+|   | Recognition | SlimTextRec | PACT quant-aware training | 61.48 | 8.6 | | | | | |
+
+## Overview
+
+Generally, a more complex model achieves better performance on a task, but it also introduces some redundancy. Model pruning is a technique that reduces this redundancy by removing sub-networks from the neural network, so as to reduce the model's computational complexity and improve its inference performance.
+
+This example uses the [pruning APIs](https://paddlepaddle.github.io/PaddleSlim/api/prune_api/) provided by PaddleSlim to compress the OCR model.
+
+It is recommended that you understand the following pages before reading this example:
+
+- [The training strategy of OCR models](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/detection.md)
+- [PaddleSlim documentation](https://paddlepaddle.github.io/PaddleSlim/)
+
+## Install PaddleSlim
+
+```bash
+git clone https://github.com/PaddlePaddle/PaddleSlim.git
+cd PaddleSlim
+python setup.py install
+```
+
+## Download the pretrained model
+
+[Download link of the detection pretrained model]()
+
+## Pruning sensitivity analysis
+
+After the pretrained model is loaded, sensitivity analysis is performed on each layer of the network to measure how redundant each layer is, which in turn determines its pruning ratio. For details of sensitivity analysis, see: [Sensitivity analysis](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/zh_cn/tutorials/image_classification_sensitivity_analysis_tutorial.md)
+
+Enter the PaddleOCR root directory and perform sensitivity analysis on the model with the following command:
+
+```bash
+python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
+```
+
+## Model pruning and fine-tuning
+
+During pruning, the sensitivity file produced in the previous step determines the pruning ratio of each layer. In the implementation, to retain as many of the low-level features extracted from the image as possible, we skip the 4 convolutional layers of the backbone closest to the input. Likewise, to reduce the performance loss caused by pruning, we use the sensitivity table obtained above to pick out [network layers](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/prune/pruning_and_finetune.py#L41) that have little redundancy but are sensitive to pruning, and skip them in the subsequent pruning. After pruning, the model needs fine-tuning to recover its performance; the fine-tuning stage reuses the original training strategy of the OCR detection model.
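+
+As a rough sketch of what this step does with PaddleSlim's static-graph API: the toy network, the skip list, and the 3% expected-loss threshold below are illustrative assumptions, not the values hard-coded in `pruning_and_finetune.py`:
+
+```python
+import paddle.fluid as fluid
+from paddleslim.prune import Pruner, load_sensitivities, get_ratios_by_loss
+
+# Tiny stand-in network for the detector backbone.
+main_prog, startup_prog = fluid.Program(), fluid.Program()
+with fluid.program_guard(main_prog, startup_prog):
+    img = fluid.data(name="img", shape=[None, 3, 32, 32], dtype="float32")
+    conv = fluid.layers.conv2d(img, num_filters=8, filter_size=3,
+                               param_attr=fluid.ParamAttr(name="conv1_weights"))
+
+place = fluid.CPUPlace()
+exe = fluid.Executor(place)
+exe.run(startup_prog)
+
+# Layers to protect, e.g. convs near the input or highly sensitive ones.
+skip_list = ["conv2_weights"]
+
+sens = load_sensitivities("sensitivities.data")  # from the analysis step
+for name in skip_list:
+    sens.pop(name, None)
+# Choose per-layer ratios so the predicted metric loss stays below 3%.
+ratios = get_ratios_by_loss(sens, 0.03)
+
+pruned_prog, _, _ = Pruner().prune(
+    main_prog, fluid.global_scope(),
+    params=list(ratios.keys()),
+    ratios=list(ratios.values()),
+    place=place,
+    only_graph=False)  # prune parameter values as well as the graph
+# pruned_prog is then fine-tuned with the usual detection training loop.
+```
+
+The full step is run with: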
+```bash
+python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
+```
+
+## Export inference model
+
+After pruning and fine-tuning, we can export the resulting model as an inference_model for predictive deployment:
+
+```bash
+python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
+```
diff --git a/deploy/slim/quantization/README.md b/deploy/slim/quantization/README.md
index f2e92f54a5b456b25445282a38fe30e01fe4fd49..f7d87c83602f69ada46b35e7d63260fe8bc6e055 100755
--- a/deploy/slim/quantization/README.md
+++ b/deploy/slim/quantization/README.md
@@ -25,7 +25,7 @@ python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global
 
-## Evaluate and export
+## Export the model
 
 After obtaining the model saved by quant-aware training, we can export it as an inference_model for predictive deployment:
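
Whether the model was pruned or quantized, the export step above boils down to freezing a test-mode program together with its trained parameters into an `inference_model` directory. The following is a minimal sketch of that final step using PaddlePaddle's fluid API; the toy network and variable names are illustrative stand-ins for the real detector, not code from the export scripts:

```python
import paddle.fluid as fluid

# Toy network standing in for the pruned/quantized detector.
main_prog, startup_prog = fluid.Program(), fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    img = fluid.data(name="img", shape=[None, 3, 32, 32], dtype="float32")
    out = fluid.layers.conv2d(img, num_filters=8, filter_size=3)
infer_prog = main_prog.clone(for_test=True)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_prog)
# (In the real scripts, the trained weights are loaded into the scope here.)

# Serialize the inference graph and parameters for predictive deployment.
fluid.io.save_inference_model(dirname="inference_model",
                              feeded_var_names=[img.name],
                              target_vars=[out],
                              executor=exe,
                              main_program=infer_prog)
```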