Unverified commit b05085ed, authored by zhouzj, committed by GitHub

adjust seg's demo. (#1105)

Parent c3de12a4
@@ -82,15 +82,37 @@ tar -xzf ppseg_lite_portrait_398x224_with_softmax.tar.gz
#### 3.4 Auto compression and model export
The auto compression demo is launched with the run.py script, which uses the ```paddleslim.auto_compression.AutoCompression``` interface to compress the model automatically. First, configure the model path, dataset path, and the distillation, quantization, sparsity, and training parameters in the config file; once configured, the model can be compressed with unstructured sparsity plus distillation, or with quantization plus distillation.
When only the training parameters are set and the ``deploy_hardware`` field is passed in, a compression strategy is searched automatically for the target hardware. Using the Snapdragon 710 (SD710) as the deployment hardware, run auto compression as follows:
```shell
python run.py \
--model_dir='./ppseg_lite_portrait_398x224_with_softmax' \
--model_filename='model.pdmodel' \
--params_filename='model.pdiparams' \
--save_dir='./save_model' \
--config_path='configs/pp_humanseg_auto.yaml' \
--deploy_hardware='SD710'
```
- To configure the sparsity parameters yourself and run unstructured sparsity plus distillation training, see the [auto compression hyperparameter documentation](https://github.com/PaddlePaddle/PaddleSlim/blob/27dafe1c722476f1b16879f7045e9215b6f37559/demo/auto_compression/hyperparameter_tutorial.md) for the meaning of each parameter. The command is as follows:
```shell
python run.py \
--model_dir='./ppseg_lite_portrait_398x224_with_softmax' \
--model_filename='model.pdmodel' \
--params_filename='model.pdiparams' \
--save_dir='./save_model' \
--config_path='configs/pp_humanseg_sparse_dis.yaml'
```
- To configure the quantization parameters yourself and run quantization plus distillation training, see the [auto compression hyperparameter documentation](https://github.com/PaddlePaddle/PaddleSlim/blob/27dafe1c722476f1b16879f7045e9215b6f37559/demo/auto_compression/hyperparameter_tutorial.md) for the meaning of each parameter. The command is as follows:
```shell
python run.py \
--model_dir='./ppseg_lite_portrait_398x224_with_softmax' \
--model_filename='model.pdmodel' \
--params_filename='model.pdiparams' \
--save_dir='./save_model' \
--config_path='configs/pp_humanseg_quant_dis.yaml'
```
After compression, the compressed inference model is written to `save_dir` and can be deployed for prediction directly.
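To sanity-check the produced model, it can be loaded with the Paddle Inference Python API. The sketch below is illustrative only: the exact file names under `./save_model` and the input layout `(1, 3, 224, 398)` are assumptions for the 398x224 portrait model, so verify them against the actual compression output.

```python
import numpy as np
import paddle.inference as paddle_infer

# Assumed file names; check what the compression step actually writes to save_dir.
config = paddle_infer.Config('./save_model/model.pdmodel',
                             './save_model/model.pdiparams')
predictor = paddle_infer.create_predictor(config)

# Assumed NCHW layout for the 398x224 portrait model; adjust if the exported
# input shape differs.
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
input_handle.copy_from_cpu(np.random.rand(1, 3, 224, 398).astype('float32'))

predictor.run()
output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
print(output_handle.copy_to_cpu().shape)
```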
......
# configs/pp_humanseg_auto.yaml (referenced by --config_path in the auto-search command above)
Global:
  reader_config: configs/pp_humanseg_lite.yml

TrainConfig:
  epochs: 14
  eval_iter: 400
  learning_rate: 5.0e-03
  optim_args:
    weight_decay: 0.0005
  optimizer: SGD
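This config is parsed with the same helper that run.py uses; a small sketch of loading it, assuming the file sits at the relative path used in the command above:

```python
from paddleslim.auto_compression.config_helpers import load_config

# Same call that run.py makes with the value of --config_path.
compress_config, train_config = load_config('configs/pp_humanseg_auto.yaml')
print(compress_config['reader_config'])  # points at configs/pp_humanseg_lite.yml
print(train_config)                      # epochs, eval_iter, learning_rate, ...
```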
# configs/pp_humanseg_lite.yml (the reader_config referenced above)
batch_size: 128
train_dataset:
  type: Dataset
  dataset_root: data/humanseg
  train_path: data/humanseg/train.txt
  num_classes: 2
  transforms:
    - type: PaddingByAspectRatio
      aspect_ratio: 1.77777778
    - type: Resize
      target_size: [398, 224]
    - type: ResizeStepScaling
      scale_step_size: 0
    - type: RandomRotation
    - type: RandomPaddingCrop
      crop_size: [398, 224]
    - type: RandomHorizontalFlip
    - type: RandomDistort
    - type: RandomBlur
      prob: 0.3
    - type: Normalize
  mode: train
val_dataset:
  type: Dataset
  dataset_root: data/humanseg
  val_path: data/humanseg/val.txt
  num_classes: 2
  transforms:
    - type: PaddingByAspectRatio
      aspect_ratio: 1.77777778
    - type: Resize
      target_size: [398, 224]
    - type: Normalize
  mode: val
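run.py (shown at the end of this diff) no longer builds the datasets by hand; it passes this reader config to PaddleSeg's `Config` class and reads the datasets and batch size from it. A condensed sketch of that flow, assuming the file is saved as `configs/pp_humanseg_lite.yml`:

```python
import paddle
from paddleseg.cvlibs import Config
from paddleseg.utils import worker_init_fn

cfg = Config('configs/pp_humanseg_lite.yml')
train_dataset = cfg.train_dataset  # built from the train_dataset section above
eval_dataset = cfg.val_dataset     # built from the val_dataset section above

batch_sampler = paddle.io.DistributedBatchSampler(
    train_dataset, batch_size=cfg.batch_size, shuffle=True, drop_last=True)
train_loader = paddle.io.DataLoader(
    train_dataset,
    batch_sampler=batch_sampler,
    num_workers=2,
    return_list=True,
    worker_init_fn=worker_init_fn)
```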
# configs/pp_humanseg_quant_dis.yaml (quantization + distillation)
Global:
  reader_config: configs/pp_humanseg_lite.yml

Distillation:
  distill_lambda: 1.0
  distill_loss: l2_loss
  distill_node_pair:
  - teacher_batch_norm_47.tmp_2
  - batch_norm_47.tmp_2
  merge_feed: true
  teacher_model_dir: ./ppseg_lite_portrait_398x224_with_softmax
  teacher_model_filename: model.pdmodel
  teacher_params_filename: model.pdiparams

Quantization:
  activation_bits: 8
  is_full_quantize: false
......
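For intuition, `activation_bits: 8` means values are fake-quantized to an 8-bit range during quant-aware training. The sketch below shows the usual symmetric INT8 quantize/dequantize arithmetic; it is illustrative only, not PaddleSlim's exact implementation.

```python
import numpy as np

def fake_quant_int8(x):
    """Symmetric per-tensor fake quantization to the INT8 range [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127)
    return q * scale  # dequantized values, as used in quant-aware training

x = np.random.randn(4).astype('float32')
print(x)
print(fake_quant_int8(x))
```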
# configs/pp_humanseg_sparse_dis.yaml (unstructured sparsity + distillation)
Global:
  reader_config: configs/pp_humanseg_lite.yml

Distillation:
  distill_lambda: 1.0
  distill_loss: l2_loss
  distill_node_pair:
  - teacher_batch_norm_47.tmp_2
  - batch_norm_47.tmp_2
  merge_feed: true
  teacher_model_dir: ./ppseg_lite_portrait_398x224_with_softmax
  teacher_model_filename: model.pdmodel
  teacher_params_filename: model.pdiparams

UnstructurePrune:
  prune_strategy: gmp
  prune_mode: ratio
......
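In the sparse config, `prune_mode: ratio` with the GMP (gradual magnitude pruning) strategy means a target fraction of the lowest-magnitude weights is progressively zeroed out. A simplified sketch of one ratio-based pruning step, illustrative only and not PaddleSlim's implementation:

```python
import numpy as np

def magnitude_prune(weights, ratio):
    """Zero out the `ratio` fraction of weights with the smallest magnitude."""
    threshold = np.quantile(np.abs(weights), ratio)
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask

w = np.random.randn(8).astype('float32')
print(magnitude_prune(w, ratio=0.75))  # roughly 75% of the entries become zero
```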
@@ -3,17 +3,12 @@ import argparse
import random
import paddle
import numpy as np
from paddleseg.cvlibs import Config
from paddleseg.utils import worker_init_fn
from paddleslim.auto_compression.config_helpers import load_config
from paddleslim.auto_compression import AutoCompression
from paddleseg.core.infer import reverse_transform
from paddleseg.utils import metrics
def parse_args():
@@ -43,6 +38,11 @@ def parse_args():
        type=str,
        default=None,
        help="path of compression strategy config.")
    parser.add_argument(
        '--deploy_hardware',
        type=str,
        default=None,
        help="The hardware you want to deploy.")
    return parser.parse_args()
@@ -65,7 +65,6 @@ def eval_function(exe, compiled_test_program, test_feed_names, test_fetch_list):
    print("Start evaluating (total_samples: {}, total_iters: {})...".format(
        len(eval_dataset), total_iters))

    for iter, (image, label) in enumerate(loader):
        paddle.enable_static()
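The evaluation callback compares predicted label maps against the ground truth and reduces them to mean IoU. For clarity, here is a plain-numpy sketch of that metric; the real script uses `paddleseg.utils.metrics` instead.

```python
import numpy as np

def mean_iou(pred, label, num_classes):
    """Mean intersection-over-union over classes present in pred or label."""
    ious = []
    for c in range(num_classes):
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:
            intersect = np.logical_and(pred == c, label == c).sum()
            ious.append(intersect / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 2, size=(224, 398))
label = np.random.randint(0, 2, size=(224, 398))
print(mean_iou(pred, label, num_classes=2))
```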
@@ -149,34 +148,22 @@ if __name__ == '__main__':
    args = parse_args()
    compress_config, train_config = load_config(args.config_path)
    cfg = Config(compress_config['reader_config'])

    train_dataset = cfg.train_dataset
    eval_dataset = cfg.val_dataset

    batch_sampler = paddle.io.DistributedBatchSampler(
        train_dataset, batch_size=cfg.batch_size, shuffle=True, drop_last=True)
    train_loader = paddle.io.DataLoader(
        train_dataset,
        batch_sampler=batch_sampler,
        num_workers=2,
        return_list=True,
        worker_init_fn=worker_init_fn)
    train_dataloader = reader_wrapper(train_loader)

    # set auto_compression
    ac = AutoCompression(
        model_dir=args.model_dir,
        model_filename=args.model_filename,
@@ -185,6 +172,7 @@ if __name__ == '__main__':
        strategy_config=compress_config,
        train_config=train_config,
        train_dataloader=train_dataloader,
        eval_callback=eval_function,
        deploy_hardware=args.deploy_hardware)
    ac.compress()
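run.py wraps the DataLoader with a `reader_wrapper` helper before handing it to `AutoCompression`; the helper's body is not part of this diff. A typical shape for such a wrapper is sketched below, with the feed name `'x'` being an assumption that may differ from the actual script.

```python
import numpy as np

def reader_wrapper(reader):
    # AutoCompression expects an iterable yielding {feed_name: ndarray} dicts;
    # 'x' is a placeholder feed name and may not match the real model input.
    def gen():
        for image, label in reader:
            yield {'x': np.array(image, dtype='float32')}
    return gen
```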