Commit 0de72f5d authored by: LDOUBLEV

opt doc

Parent 6e87c904
......@@ -79,7 +79,9 @@ inference_lite_lib.android.armv8/
Paddle-Lite provides a variety of strategies to automatically optimize the original model, including quantization, sub-graph fusion, hybrid scheduling, and kernel selection. Using Paddle-Lite's opt tool, the inference model can be optimized automatically; the optimized model is more lightweight and runs faster.
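As an illustration, the snippet below sketches a typical opt invocation for converting a detection model, assuming the opt binary has already been built from the Paddle-Lite source and the inference model is a combined model; the directory and file names are placeholders for this example, not files shipped with the repository.

```
# A minimal sketch of running the opt tool; the paths below are placeholders --
# replace them with your own inference model files.
# Convert the detection inference model to a Paddle-Lite .nb model for ARM CPUs.
./opt \
    --model_file=./det_infer_model/inference.pdmodel \
    --param_file=./det_infer_model/inference.pdiparams \
    --optimize_out_type=naive_buffer \
    --optimize_out=./det_opt \
    --valid_targets=arm
# With naive_buffer output, the optimized model is written as ./det_opt.nb
```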
The following table provides a series of mobile models:
If you have already prepared a model file ending in `.nb`, you can skip this step.
The following table also provides a series of Chinese mobile models:
|Model version|Model description|Model size|Detection model|Text direction classifier model|Recognition model|Paddle-Lite version|
|-|-|-|-|-|-|-|
......
......@@ -56,13 +56,14 @@ inference_lite_lib.android.armv8/
```
## 4. Inference Model Optimization
Paddle Lite provides a variety of strategies to automatically optimize the original model, including quantization, sub-graph fusion, hybrid scheduling, kernel optimization and so on. To make the optimization process more convenient and easy to use, Paddle Lite provides the opt tool to automatically complete the optimization steps and output a lightweight, optimized executable model.
If you deploy the PaddleOCR 8.6M OCR model, you can download the optimized model directly.
If you have prepared the model file ending in `.nb`, you can skip this step.
The following table also provides a series of models that can be deployed on mobile phones to recognize Chinese.
You can directly download the optimized model.
|Version|Introduction|Model size|Detection model|Text Direction model|Recognition model|Paddle Lite branch|
|-|-|-|-|-|-|-|
......