Commit 6a3c583d authored by LDOUBLEV

fix doc and init model before qat

Parent c127f84b
@@ -23,13 +23,13 @@
 ```bash
 git clone https://github.com/PaddlePaddle/PaddleSlim.git
+cd PaddleSlim
 git checkout develop
-cd Paddleslim
 python3 setup.py install
 ```
 ### 2. Get the pre-trained model
-Model pruning needs to load a model trained in advance. PaddleOCR also provides a series of (models)[../../../doc/doc_ch/models_list.md]; developers can choose a model as needed or use their own.
+Model pruning needs to load a model trained in advance. PaddleOCR also provides a series of [models](../../../doc/doc_ch/models_list.md); developers can choose a model as needed or use their own.
 ### 3. Sensitivity analysis training
@@ -49,14 +49,14 @@ python3 setup.py install
 Enter the PaddleOCR root directory and run the following command to start sensitivity analysis training:
 ```bash
-python3.7 deploy/slim/prune/sensitivity_anal.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model="your trained model"
+python3.7 deploy/slim/prune/sensitivity_anal.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model="your trained model" Global.save_model_dir=./output/prune_model/
 ```
 ### 4. Export the model and deploy it
 After obtaining the model saved by pruning training, we can export it as an inference_model:
 ```bash
-pytho3.7 deploy/slim/prune/export_prune_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model=./output/det_db/best_accuracy Global.save_inference_dir=inference_model
+python3.7 deploy/slim/prune/export_prune_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model=./output/det_db/best_accuracy Global.save_inference_dir=./prune/prune_inference_model
 ```
 For prediction and deployment of the inference model, refer to:
...
@@ -22,15 +22,15 @@ Five steps for OCR model prune:
 ```bash
 git clone https://github.com/PaddlePaddle/PaddleSlim.git
+cd PaddleSlim
 git checkout develop
-cd Paddleslim
 python3 setup.py install
 ```
 ### 2. Download Pretrained Model
 Model pruning needs to load pre-trained models.
-PaddleOCR also provides a series of (models)[../../../doc/doc_en/models_list_en.md]. Developers can choose their own models or use their own models according to their needs.
+PaddleOCR also provides a series of [models](../../../doc/doc_en/models_list_en.md). Developers can choose one of these models or use a model of their own according to their needs.
 ### 3. Pruning sensitivity analysis
@@ -54,7 +54,7 @@ Enter the PaddleOCR root directory, perform sensitivity analysis on the model w
 ```bash
-python3.7 deploy/slim/prune/sensitivity_anal.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model="your trained model"
+python3.7 deploy/slim/prune/sensitivity_anal.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model="your trained model" Global.save_model_dir=./output/prune_model/
 ```
@@ -63,7 +63,7 @@ python3.7 deploy/slim/prune/sensitivity_anal.py -c configs/det/ch_ppocr_v2.0/ch_
 We can export the pruned model as inference_model for deployment:
 ```bash
-python deploy/slim/prune/export_prune_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model=./output/det_db/best_accuracy Global.save_inference_dir=inference_model
+python deploy/slim/prune/export_prune_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model=./output/det_db/best_accuracy Global.save_inference_dir=./prune/prune_inference_model
 ```
 Reference for prediction and deployment of inference model:
...
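As background for the sensitivity-analysis command above, here is a minimal sketch of the PaddleSlim dygraph pruning calls that this kind of script builds on. It is an illustration only: the toy network, input shape, eval function, and file name are assumptions, not code taken from sensitivity_anal.py.

```python
# Hedged sketch of FPGM sensitivity analysis with PaddleSlim's dygraph API.
# The toy model, input shape, eval_fn, and sen.pickle path are assumptions.
import paddle
from paddle import nn
from paddleslim.dygraph import FPGMFilterPruner

model = nn.Sequential(
    nn.Conv2D(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2D(8, 16, 3, padding=1), nn.ReLU())

def eval_fn():
    # Stand-in for a real validation pass; returns the metric to protect.
    return 0.5

pruner = FPGMFilterPruner(model, [1, 3, 640, 640])
# Prune each conv layer at several ratios, record how eval_fn degrades,
# and cache the resulting table so reruns can skip the measurement.
pruner.sensitive(eval_func=eval_fn, sen_file="./sen.pickle")
# Derive per-layer ratios from the sensitivity table and cut ~50% of FLOPs.
plan = pruner.sensitive_prune(0.5)
```

PaddleOCR's script wraps the same idea around its detection eval loop and then finetunes the pruned network.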
@@ -23,7 +23,7 @@
 ```bash
 git clone https://github.com/PaddlePaddle/PaddleSlim.git
-cd Paddleslim
+cd PaddleSlim
 python setup.py install
 ```
@@ -37,12 +37,12 @@ PaddleOCR provides a series of trained [models](../../../doc/doc_ch/models_list.
 The quantization training code is located in slim/quantization/quant.py. For example, the command to train a detection model is:
 ```bash
-python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model
+python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model
 # for example, download the provided trained model
 wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar
 tar -xf ch_ppocr_mobile_v2.0_det_train.tar
-python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_inference_dir=./output/quant_inference_model
+python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_model_dir=./output/quant_inference_model
 ```
 To quantize a recognition model, just change the config file and the loaded model parameters.
@@ -52,7 +52,7 @@ python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global
 After obtaining the model saved by quantization training, we can export it as an inference_model for prediction and deployment:
 ```bash
-python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_model_dir=./output/quant_inference_model
+python deploy/slim/quantization/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_inference_dir=./output/quant_inference_model
 ```
 ### 5. Deploy the quantized model
...
@@ -26,7 +26,7 @@ After training, if you want to further compress the model size and accelerate th
 ```bash
 git clone https://github.com/PaddlePaddle/PaddleSlim.git
-cd Paddleslim
+cd PaddleSlim
 python setup.py install
 ```
@@ -43,12 +43,12 @@ After the quantization strategy is defined, the model can be quantized.
 The code for quantization training is located in `slim/quantization/quant.py`. For example, to train a detection model, the training command is as follows:
 ```bash
-python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model
+python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model
 # download provided model
 wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar
 tar -xf ch_ppocr_mobile_v2.0_det_train.tar
-python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_model_dir=./output/quant_model
+python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_model_dir=./output/quant_model
 ```
@@ -57,7 +57,7 @@ python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global
 After getting the model from quantization training and finetuning, we can export it as inference_model for predictive deployment:
 ```bash
-python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_inference_dir=./output/quant_inference_model
+python deploy/slim/quantization/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_inference_dir=./output/quant_inference_model
 ```
 ### 5. Deploy
...
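For readers wondering what the export step above does with a QAT checkpoint: under PaddleSlim's dygraph API it comes down to `QAT.save_quantized_model`. A hedged sketch follows; the partial quant config, toy model, output path, and input shape are assumptions rather than export_model.py's actual code.

```python
# Hedged sketch of exporting a quantization-aware-trained model. The partial
# quant_config, toy model, output path, and input shape are assumptions.
import paddle
from paddle import nn
from paddleslim.dygraph.quant import QAT

quant_config = {'quantizable_layer_type': ['Conv2D', 'Linear']}
model = nn.Sequential(nn.Conv2D(3, 8, 3, padding=1), nn.ReLU())

quanter = QAT(config=quant_config)
quanter.quantize(model)  # wrap Conv2D/Linear layers with fake-quant ops
# ... quantization-aware training would run here ...

# Export an inference model with the quantization information baked in.
quanter.save_quantized_model(
    model,
    './output/quant_inference_model/inference',
    input_spec=[paddle.static.InputSpec(
        shape=[None, 3, 640, 640], dtype='float32')])
```

PaddleOCR's export_model.py does the same after rebuilding the model from the config file and loading the checkpoint named by Global.checkpoints.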
@@ -112,10 +112,6 @@ def main(config, device, logger, vdl_writer):
         config['Architecture']["Head"]['out_channels'] = char_num
     model = build_model(config['Architecture'])
 
-    # prepare to quant
-    quanter = QAT(config=quant_config, act_preprocess=PACT)
-    quanter.quantize(model)
-
     if config['Global']['distributed']:
         model = paddle.DataParallel(model)
@@ -136,31 +132,15 @@ def main(config, device, logger, vdl_writer):
     logger.info('train dataloader has {} iters, valid dataloader has {} iters'.
                 format(len(train_dataloader), len(valid_dataloader)))
 
+    quanter = QAT(config=quant_config, act_preprocess=PACT)
+    quanter.quantize(model)
+
     # start train
     program.train(config, train_dataloader, valid_dataloader, device, model,
                   loss_class, optimizer, lr_scheduler, post_process_class,
                   eval_class, pre_best_model_dict, logger, vdl_writer)
 
-def test_reader(config, device, logger):
-    loader = build_dataloader(config, 'Train', device, logger)
-    import time
-    starttime = time.time()
-    count = 0
-    try:
-        for data in loader():
-            count += 1
-            if count % 1 == 0:
-                batch_time = time.time() - starttime
-                starttime = time.time()
-                logger.info("reader: {}, {}, {}".format(
-                    count, len(data[0]), batch_time))
-    except Exception as e:
-        logger.info(e)
-    logger.info("finish reader: {}, Success!".format(count))
-
 if __name__ == '__main__':
     config, device, logger, vdl_writer = program.preprocess(is_train=True)
     main(config, device, logger, vdl_writer)
-    # test_reader(config, device, logger)
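The quant.py hunks above are the "init model before qat" half of the commit: QAT wrapping moves from right after build_model to after the pretrained weights have been loaded. Below is a self-contained sketch of that ordering, with a toy model and a hypothetical checkpoint path standing in for PaddleOCR's config-driven loading (PaddleOCR also passes its own PACT act_preprocess, omitted here).

```python
# Sketch of the ordering this commit enforces: build the model and load its
# trained weights BEFORE quanter.quantize(model), so the fake-quant wrappers
# start from trained parameters instead of a random init. The toy model and
# the 'pretrained.pdparams' path are illustrative assumptions.
import paddle
from paddle import nn
from paddleslim.dygraph.quant import QAT

model = nn.Sequential(
    nn.Conv2D(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10))

# 1. Load the trained FP32 weights first (hypothetical checkpoint path).
# model.set_state_dict(paddle.load('pretrained.pdparams'))

# 2. Only then insert the fake-quant wrappers, as quant.py now does just
#    before program.train.
quanter = QAT(config={'quantizable_layer_type': ['Conv2D', 'Linear']})
quanter.quantize(model)

x = paddle.rand([1, 3, 32, 32])
print(model(x).shape)  # [1, 10]; the wrapped model still runs forward as usual
```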