From 3f89879c19982b1b78ce5a49d7ac3752782e1f38 Mon Sep 17 00:00:00 2001
From: LDOUBLEV
Date: Mon, 21 Sep 2020 21:14:52 +0800
Subject: [PATCH] add slim introduction,test=document_fix

---
 deploy/slim/prune/README.md    | 4 ++--
 deploy/slim/prune/README_en.md | 5 +++--
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/deploy/slim/prune/README.md b/deploy/slim/prune/README.md
index f2f4cda7..6f987ff3 100644
--- a/deploy/slim/prune/README.md
+++ b/deploy/slim/prune/README.md
@@ -1,9 +1,9 @@
 ## Introduction
-A complex model helps improve performance, but it also introduces some redundancy into the model. Model pruning removes sub-models from the network to reduce this redundancy, thereby lowering the model's computational complexity and improving its inference performance.
+This tutorial introduces how to compress PaddleOCR models with PaddleSlim, the PaddlePaddle model compression library.
+[PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim) integrates model pruning, quantization (including quantization-aware training and post-training quantization), distillation, neural architecture search, and other model compression techniques that are widely used and leading in the industry. Feel free to follow and learn more if you are interested.
 
-This tutorial introduces how to prune PaddleOCR models with [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim).
 Before starting this tutorial, it is recommended to first learn:
 1. [How to train a PaddleOCR model](../../../doc/doc_ch/quickstart.md)
diff --git a/deploy/slim/prune/README_en.md b/deploy/slim/prune/README_en.md
index 51aefec8..7adbd86c 100644
--- a/deploy/slim/prune/README_en.md
+++ b/deploy/slim/prune/README_en.md
@@ -1,9 +1,10 @@
 ## Introduction
-Complicated models help to improve the performance of the model, but it also leads to some redundancy in the model. Model tailoring reduces this redundancy by removing the sub-models in the network model, so as to reduce model calculation complexity and improve model inference performance.
+Generally, a more complex model achieves better performance on a task, but it also introduces some redundancy. Model pruning is a technique that reduces this redundancy by removing sub-models from the neural network, so as to reduce computational complexity and improve inference performance.
 
-This tutorial will introduce how to use PaddleSlim to crop PaddleOCR model.
+This example uses the [pruning APIs](https://paddlepaddle.github.io/PaddleSlim/api/prune_api/) provided by PaddleSlim to compress the OCR model.
+[PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim) is an open-source library that integrates model pruning, quantization (including quantization-aware training and offline quantization), distillation, neural network architecture search, and many other model compression techniques that are commonly used and leading in the industry.
 
 It is recommended that you understand the following pages before reading this example:
 1. [PaddleOCR training methods](../../../doc/doc_ch/quickstart.md)
-- 
GitLab
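For readers unfamiliar with the pruning the patch's introduction refers to, here is a minimal NumPy sketch of the L1-norm filter-selection criterion commonly used by pruning libraries. This is a conceptual illustration only, not PaddleSlim's actual implementation; the function name, shapes, and ratio are illustrative.

```python
import numpy as np

def prune_filters(weights, ratio):
    """Drop the fraction `ratio` of conv filters with the smallest L1 norm.

    weights: array of shape (out_channels, in_channels, kh, kw).
    Returns the pruned weight tensor and the sorted indices of kept filters.
    """
    # L1 norm of each output filter, flattened over (in_channels, kh, kw)
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = weights.shape[0] - int(weights.shape[0] * ratio)
    # Keep the n_keep filters with the largest norms, in original order
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep], keep

w = np.random.randn(16, 3, 3, 3)          # a hypothetical 16-filter conv layer
pruned, kept = prune_filters(w, ratio=0.5)
print(pruned.shape)                       # (8, 3, 3, 3)
```

In a real pruning pass the corresponding input channels of the next layer must be pruned as well, and the network is typically fine-tuned afterwards to recover accuracy; PaddleSlim's pruning APIs handle that bookkeeping.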