Unverified · Commit 490c1f7f authored by ceci3, committed by GitHub

update readme (#1129)

* update readme

* update

* update

* remove cv2

* update

* update
Parent 80184c47
@@ -163,6 +163,12 @@ pip install paddleslim -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddleslim==2.3.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
```
Install the develop version:
```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git && cd PaddleSlim
python setup.py install
```
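As a quick sanity check after installing from source, the version string defined in paddleslim/version.py (the file added later in this commit, and the one setup.py imports) can be printed. A minimal sketch, assuming the install above completed without errors:

```python
# Minimal post-install check: import the version string that
# paddleslim/version.py defines and setup.py reads.
from paddleslim.version import slim_version

print(slim_version)  # "2.3.0" at the time of this commit
```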
### Quick Start
@@ -269,6 +275,9 @@ pip install paddleslim==2.3.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
### [FAQ](docs/zh_cn/FAQ/quantization_FAQ.md)
#### 1. Why doesn't the model get smaller after quantization-aware training or post-training quantization?
A: The parameters saved after quantization are in the int8 range but are still of float type. Paddle's default training/forward kernels have no INT8 implementations; only inference through Paddle Inference with TensorRT supports accelerated quantized execution. So that the quantized model can still be loaded by Paddle's training forward pass to verify accuracy, the weights are saved as Float32 by default, and the model size therefore does not change.
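To see this concretely, the sketch below loads a quantized inference model with Paddle's static-graph API and prints the parameter dtypes, which stay float32 on the training/forward side; the path prefix "./quant_model/model" is a placeholder for wherever the quantized model was exported:

```python
import paddle

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

# Placeholder prefix for model.pdmodel / model.pdiparams exported after quantization.
program, feed_names, fetch_targets = paddle.static.load_inference_model(
    "./quant_model/model", exe)

for param in program.global_block().all_parameters():
    # The weights remain float32 even though their values fall in the int8 range.
    print(param.name, param.dtype)
```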
## License
This project is released under the [Apache 2.0 license](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/LICENSE).
......
@@ -9,8 +9,8 @@ PaddleSlim introduces the brand-new Auto Compression Toolkit (ACT), which aims to use Source-Free
## Environment Setup
- - Install PaddlePaddle >= 2.2 (download and install from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
- - Install PaddleSlim >= 2.3, or an appropriate develop version
+ - Install PaddlePaddle >= 2.3 (download and install from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
+ - Install the PaddleSlim develop version
## Getting Started
......
@@ -31,8 +31,8 @@
## 3. Auto Compression Workflow
#### 3.1 Prepare the Environment
- - PaddlePaddle >= 2.2 (can be downloaded and installed from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
- - PaddleSlim >= 2.3, or an appropriate develop version
+ - PaddlePaddle >= 2.3 (can be downloaded and installed from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
+ - PaddleSlim develop version
- PaddleDet >= 2.4
Install paddlepaddle:
......
@@ -30,8 +30,8 @@
#### 3.1 Prepare the Environment
- python >= 3.6
- - PaddlePaddle >= 2.2 (can be downloaded and installed from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
- - PaddleSlim >= 2.3, or an appropriate develop version
+ - PaddlePaddle >= 2.3 (can be downloaded and installed from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
+ - PaddleSlim develop version
Install paddlepaddle:
```shell
......
@@ -42,8 +42,8 @@ PP-MiniLM is a 6-layer pre-trained small Chinese model; using PaddleNLP's ```from_pr
#### 3.1 Prepare the Environment
- python >= 3.6
- - PaddlePaddle >= 2.2 (can be downloaded and installed from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
- - PaddleSlim >= 2.3, or an appropriate develop version
+ - PaddlePaddle >= 2.3 (can be downloaded and installed from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
+ - PaddleSlim develop version
- PaddleNLP >= 2.3
Install paddlepaddle:
......
@@ -34,8 +34,8 @@
#### 3.1 Prepare the Environment
- - PaddlePaddle >= 2.2 (can be downloaded and installed from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
- - PaddleSlim >= 2.3, or an appropriate develop version
+ - PaddlePaddle >= 2.3 (can be downloaded and installed from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
+ - PaddleSlim develop version
- PaddleSeg >= 2.5
Install paddlepaddle:
......
@@ -13,7 +13,6 @@
# limitations under the License.
import os
import cv2
import math
import random
import numpy as np
......
## Quantization FAQ
1. Why doesn't the model get smaller after quantization-aware training or post-training quantization?
- 2. Why is there no speedup when a model produced by quantization-aware training or post-training quantization is loaded with fluid? How can it be accelerated?
+ 2. Why is there no speedup when a model produced by quantization-aware training or post-training quantization is loaded with paddle? How can it be accelerated?
3. How do I choose a suitable quantization configuration?
4. Post-training quantization fails with a 'KeyError: ' error
5. CUDNN or CUDA errors occur during post-training quantization or quantization-aware training
@@ -10,17 +10,17 @@
#### 1. Why doesn't the model get smaller after quantization-aware training or post-training quantization?
- A: The parameters saved after quantization are in the int8 range but are still of float type. This is because fluid has no int8 kernel; to make it possible to verify the quantized model's accuracy, fluid must still be able to load the model
+ A: The parameters saved after quantization are in the int8 range but are still of float type. Paddle's default training/forward kernels have no INT8 implementations; only inference through Paddle Inference with TensorRT supports accelerated quantized execution. So that the quantized model can still be loaded by Paddle's training forward pass to verify accuracy, the weights are saved as Float32 by default, and the model size therefore does not change
- #### 2. Why is there no speedup when a model produced by quantization-aware training or post-training quantization is loaded with fluid? How can it be accelerated?
+ #### 2. Why is there no speedup when a model produced by quantization-aware training or post-training quantization is loaded with paddle? How can it be accelerated?
- A: The parameters saved after quantization are in the int8 range but are still of float type. fluid itself cannot accelerate quantized models; a quantized model must be run with an inference library to get a speedup.
+ A: The parameters saved after quantization are in the Int8 range but are still of Float32 type. Paddle's default training/forward kernels cannot accelerate quantized models; a quantized model must be run with an inference library that supports Int8 computation to get a speedup.
- - If the quantized model is deployed on ARM, use [Paddle-Lite](https://paddle-lite.readthedocs.io/zh/latest/index.html).
+ - If the quantized model is deployed on ARM, use [Paddle Lite](https://paddle-lite.readthedocs.io/zh/latest/index.html).
- - Paddle-Lite converts and optimizes the quantized model; see [this guide](https://paddle-lite.readthedocs.io/zh/latest/index.html#sec-user-guides) for the conversion steps.
+ - Paddle Lite converts and optimizes the quantized model; see [this guide](https://paddle-lite.readthedocs.io/zh/latest/index.html#sec-user-guides) for the conversion steps.
- - After conversion, the model can be loaded and run for inference through the [Paddle-Lite API](https://paddle-lite.readthedocs.io/zh/latest/index.html), just like a non-quantized model.
+ - After conversion, the model can be loaded and run for inference through the [Paddle Lite API](https://paddle-lite.readthedocs.io/zh/latest/index.html), just like a non-quantized model.
- If the quantized model is deployed on GPU, use the [Paddle-TensorRT inference API](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/advanced_guide/performance_improving/inference_improving/paddle_tensorrt_infer.html).
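For the GPU path above, enabling INT8 execution through TensorRT with the Paddle Inference Python API looks roughly like the following sketch; the model/params file names, workspace size, and batch size are illustrative placeholders, not values prescribed by the FAQ:

```python
from paddle.inference import Config, PrecisionType, create_predictor

# Placeholder paths to an exported quantized inference model.
config = Config("model.pdmodel", "model.pdiparams")
config.enable_use_gpu(100, 0)  # 100 MB initial GPU memory pool, device id 0
# Run supported subgraphs with TensorRT in INT8 precision.
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Int8,
    use_static=False,
    use_calib_mode=False)
predictor = create_predictor(config)
```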
@@ -48,7 +48,7 @@ config->EnableTensorRtEngine(1 << 20 /* workspace_size*/,
| Platform | Supported weight quantization | Supported activation quantization | Ops that can be quantized |
| ---------------- | ------------------------------ | ------------------------------------- | ------------------------------------------------------------ |
- | ARM(Paddle-Lite) | channel_wise_abs_max, abs_max | moving_average_abs_max, range_abs_max | conv2d, depthwise_conv2d, mul |
+ | ARM(Paddle Lite) | channel_wise_abs_max, abs_max | moving_average_abs_max, range_abs_max | conv2d, depthwise_conv2d, mul |
| x86(MKL-DNN) | abs_max | moving_average_abs_max, range_abs_max | conv2d, depthwise_conv2d, mul, matmul |
| GPU(TensorRT) | channel_wise_abs_max | moving_average_abs_max, range_abs_max | mul, conv2d, pool2d, depthwise_conv2d, elementwise_add, leaky_relu |
......
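Reading one row of the table above as a concrete PaddleSlim quantization config (for ARM/Paddle Lite deployment, say) might look like the sketch below; the key names follow the quant_aware/quant_post config convention, but treat the exact spelling and defaults as assumptions to verify against the docs:

```python
# Hypothetical quantization config matching the ARM (Paddle Lite) row of the table.
quant_config = {
    'weight_quantize_type': 'channel_wise_abs_max',
    'activation_quantize_type': 'moving_average_abs_max',
    'weight_bits': 8,
    'activation_bits': 8,
    'quantize_op_types': ['conv2d', 'depthwise_conv2d', 'mul'],
}
```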
@@ -71,8 +71,8 @@ EXPERIENCE_STRATEGY_WITHOUT_LOSS = [
'prune_0.3_int8'
]
MAGIC_SPARSE_RATIO = 0.75
- ### TODO: 0.03 threshold maybe not suitable, need to check
+ ### TODO: 0.02 threshold maybe not suitable, need to check
- MAGIC_MAX_EMD_DISTANCE = 0.03
+ MAGIC_MAX_EMD_DISTANCE = 0.02
MAGIC_MIN_EMD_DISTANCE = 0.01
DEFAULT_TRANSFORMER_STRATEGY = 'prune_0.25_int8'
......
@@ -18,12 +18,11 @@ import sys
import numpy as np
import inspect
import shutil
- from collections import namedtuple, Iterable
+ from collections import namedtuple
from collections.abc import Iterable
import platform
import paddle
import paddle.distributed.fleet as fleet
if platform.system().lower() == 'linux':
    from ..quant import quant_post_hpo
from ..quant.quanter import convert, quant_post
from ..common.recover_program import recover_inference_program
from ..common import get_logger
@@ -35,6 +34,12 @@ from .auto_strategy import prepare_strategy, get_final_quant_config, create_stra
_logger = get_logger(__name__, level=logging.INFO)
try:
    if platform.system().lower() == 'linux':
        from ..quant.quant_post_hpo import quant_post_hpo
except Exception as e:
    _logger.warning(e)
class AutoCompression:
    def __init__(self,
@@ -113,7 +118,7 @@ class AutoCompression:
self.train_dataloader = train_dataloader
self.target_speedup = target_speedup
self.eval_function = eval_callback
- self.eval_dataloader = eval_dataloader
+ self.eval_dataloader = eval_dataloader if eval_dataloader is not None else train_dataloader
paddle.enable_static()
......
@@ -99,8 +99,8 @@ def _find_next_target_op(op, graph, target_op_idx, sc_path):
def is_shortcut(op, graph, sc_path, shortcut_start_op):
"""
- op  /```````````````````\ add
-     \____op1___op2__..._/
+ op  /```````````````````\\ add
+     \\____op1___op2__..._/
""" """
inps = op.all_inputs() inps = op.all_inputs()
pre_ops = graph.pre_ops(op) pre_ops = graph.pre_ops(op)
......
@@ -116,7 +116,7 @@ class SuperConv2D(fluid.dygraph.Conv2D):
of conv2d. If it is set to None or one attribute of ParamAttr, conv2d
will create ParamAttr as param_attr. If the Initializer of the param_attr
is not set, the parameter is initialized with :math:`Normal(0.0, std)`,
- and the :math:`std` is :math:`(\\frac{2.0 }{filter\_elem\_num})^{0.5}`. Default: None.
+ and the :math:`std` is :math:`(\\frac{2.0 }{filter\\_elem\\_num})^{0.5}`. Default: None.
bias_attr (ParamAttr or bool, optional): The attribute for the bias of conv2d.
If it is set to False, no bias will be added to the output units.
If it is set to None or one attribute of ParamAttr, conv2d
@@ -374,7 +374,7 @@ class SuperConv2DTranspose(fluid.dygraph.Conv2DTranspose):
`conv2dtranspose <http://www.matthewzeiler.com/wp-content/uploads/2017/07/cvpr2010.pdf>`_ .
For each input :math:`X`, the equation is:
.. math::
- Out = \sigma (W \\ast X + b)
+ Out = \\sigma (W \\ast X + b)
Where:
* :math:`X`: Input value, a ``Tensor`` with NCHW format.
* :math:`W`: Filter value, a ``Tensor`` with shape [MCHW] .
@@ -390,10 +390,10 @@ class SuperConv2DTranspose(fluid.dygraph.Conv2DTranspose):
Output shape: :math:`(N, C_{out}, H_{out}, W_{out})`
Where
.. math::
- H^\prime_{out} &= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\\\
- W^\prime_{out} &= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1 \\\\
- H_{out} &\in [ H^\prime_{out}, H^\prime_{out} + strides[0] ) \\\\
- W_{out} &\in [ W^\prime_{out}, W^\prime_{out} + strides[1] )
+ H^\\prime_{out} &= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\\\
+ W^\\prime_{out} &= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1 \\\\
+ H_{out} &\\in [ H^\\prime_{out}, H^\\prime_{out} + strides[0] ) \\\\
+ W_{out} &\\in [ W^\\prime_{out}, W^\\prime_{out} + strides[1] )
Parameters:
num_channels(int): The number of channels in the input image.
num_filters(int): The number of the filter. It is as same as the output
......
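As a quick arithmetic check of the SuperConv2DTranspose output-height formula shown above, here is the computation for a hypothetical configuration (the numbers are made up purely for illustration):

```python
# Hypothetical values: H_in=7, stride=2, padding=1, dilation=1, filter height H_f=3.
H_in, stride, padding, dilation, H_f = 7, 2, 1, 1, 3

# H'_out = (H_in - 1) * stride - 2 * padding + dilation * (H_f - 1) + 1
H_prime_out = (H_in - 1) * stride - 2 * padding + dilation * (H_f - 1) + 1
print(H_prime_out)  # 13; H_out then lies in [13, 13 + stride), i.e. [13, 15)
```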
@@ -14,7 +14,6 @@
"""quant post with hyper params search"""
import os
import cv2
import sys
import math
import time
......
#coding:utf-8
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" PaddleSlim version string """
__all__ = ["slim_version"]
slim_version = "2.3.0"
@@ -20,7 +20,8 @@ import platform
from setuptools import find_packages
from setuptools import setup
from paddleslim.version import slim_version
slim_version = "2.3.0"
def python_version():
......