Commit 28e094e0 authored by LittleMoon, committed by cuicheng01

add docs, config, model for DSNet

Parent 830852e7
# DSNet
---
## Contents
- [1. Model Introduction](#1)
  - [1.1 Overview](#1.1)
  - [1.2 Model Details](#1.2)
    - [1.2.1 Intra-scale Propagation Module](#1.2.1)
    - [1.2.2 Inter-scale Alignment Module](#1.2.2)
  - [1.3 Experimental Results](#1.3)
- [2. Quick Start](#2)
- [3. Model Training, Evaluation and Prediction](#3)
- [4. Model Inference and Deployment](#4)
  - [4.1 Preparing the Inference Model](#4.1)
  - [4.2 Inference with the Python Prediction Engine](#4.2)
  - [4.3 Inference with the C++ Prediction Engine](#4.3)
  - [4.4 Service Deployment](#4.4)
  - [4.5 On-Device Deployment](#4.5)
  - [4.6 Paddle2ONNX Model Conversion and Prediction](#4.6)
- [5. Citation](#5)
<a name="1"></a>
## 1. Model Introduction
<a name="1.1"></a>
### 1.1 Overview
Transformers, with their strong global representation capacity, have achieved competitive results on vision tasks. However, these Transformer-based models fail to account for the fine-grained local information in the input image. Several existing works, such as ConTNet, CrossViT, CvT and PVT, attempt to introduce convolution into the Transformer in different ways so as to capture both local features and global image information. However, these methods either perform convolution and attention sequentially, or simply replace the linear projections in attention with convolutional projections; moreover, the convolution, which focuses on local patterns, and the attention mechanism, which focuses on global patterns, may conflict during training, preventing their strengths from being combined.

This paper proposes a general Dual-Stream Network (DSNet) that fully exploits the representational power of both local- and global-pattern features for image classification. DSNet computes fine-grained local features and integrated global features simultaneously in parallel and fuses the two effectively. Specifically, we propose an intra-scale propagation module to process the local and global features at two different resolutions within each DS-Block, and an inter-scale alignment module to perform information interaction across the features at the two scales and fuse the local and global features. The proposed DSNet achieves 2.4% higher top-1 accuracy on ImageNet-1k than DeiT-Small, and reaches state-of-the-art performance compared with other Vision Transformers and ResNets. For object detection and instance segmentation on MSCOCO 2017, a model with DSNet-Small as the backbone outperforms its ResNet-50 counterpart by 6.4% and 5.5% mAP respectively and surpasses the previous SOTA, demonstrating its potential as a general-purpose backbone for vision tasks.

Paper: https://arxiv.org/abs/2105.14734v4
<a name="1.2"></a>
### 1.2 Model Details
The overall network architecture is shown in the figure below.
<div align="center">
<img width="700" alt="DSNet" src="https://user-images.githubusercontent.com/71830213/207497958-f9802c03-3eec-4ba5-812f-c6a9158856c1.png">
</div>

Inspired by the stage-wise design of networks such as ResNet, DSNet is likewise divided into 4 stages, with downsampling factors of 4, 8, 16 and 32 relative to the input image. Each stage stacks several DS-Blocks to generate and combine dual-scale (i.e., convolutional and attention-based) feature representations. Our key idea is to represent local features at a relatively high resolution to preserve detail, and global features at a lower resolution ($\frac{1}{32}$ of the image size)[^1] to preserve global patterns. Concretely, within each DS-Block we split the input feature map into two parts along the channel dimension: one part is used to extract the local features ${f}_ {l}$, the other to summarize the global features ${f}_ {g}$. The size of ${f}_ {g}$ is kept fixed across all DS-Blocks of a stage. The intra-scale propagation module and the inter-scale alignment module inside the DS-Block are described next.
<a name="1.2.1"></a>
#### 1.2.1 Intra-scale Propagation Module
For the high-resolution ${f}_ {l}$, we apply a ${3} \times {3}$ depth-wise convolution to extract local features, yielding the feature map ${f}_ {L}$:
$$
{f}_ {L} = \sum^{M, N}_ {m, n} W \left( m, n \right) \odot {f}_ {l} \left( i+m, j+n \right),
$$
where $W ( m, n ), ( m, n ) \in \{ -1, 0, 1 \}$ denotes the convolution kernel and $\odot$ denotes element-wise multiplication. For the low-resolution ${f}_ {g}$, we first flatten it into a sequence of length ${l}_ {g}$, so that each vector in the sequence can be treated as a visual token, and then apply self-attention to obtain the feature map
$$
{f}_ {G} = \text{softmax} \left( \frac{{f}_ {Q} {f}_ {K}^{T}}{\sqrt{d}} \right) {f}_ {V},
$$
where ${f}_ {Q} = {W}_ {Q} {f}_ {g}, {f}_ {K} = {W}_ {K} {f}_ {g}, {f}_ {V} = {W}_ {V} {f}_ {g}$. This dual-stream architecture decouples the fine-grained and global features into two paths, largely removing the conflict between them during training and bringing out the best of both local and global features. A sketch of the two streams is given below.
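To make this concrete, the following is a minimal PaddlePaddle sketch of the intra-scale propagation step. It is illustrative only, not the implementation added by this commit in `model_zoo/dsnet.py`; the class name, channel widths and head count are assumptions.

```python
import paddle.nn as nn
import paddle.nn.functional as F


class IntraScalePropagation(nn.Layer):
    """Sketch: 3x3 depth-wise conv on the high-resolution local stream,
    multi-head self-attention on the low-resolution global stream."""

    def __init__(self, dim_l=64, dim_g=64, num_heads=2):
        super().__init__()
        # depth-wise conv: groups == channels, one 3x3 kernel per channel
        self.dw_conv = nn.Conv2D(dim_l, dim_l, 3, padding=1, groups=dim_l)
        self.qkv = nn.Linear(dim_g, dim_g * 3)  # W_Q, W_K, W_V in one matrix
        self.num_heads = num_heads
        self.head_dim = dim_g // num_heads

    def forward(self, f_l, f_g):
        # f_l: [B, C_l, H, W] local stream; f_g: [B, C_g, h, w] global stream
        f_L = self.dw_conv(f_l)
        B, C, h, w = f_g.shape
        tokens = f_g.flatten(2).transpose([0, 2, 1])  # [B, h*w, C] visual tokens
        q, k, v = self.qkv(tokens).chunk(3, axis=-1)

        def heads(x):  # [B, h*w, C] -> [B, num_heads, h*w, head_dim]
            return x.reshape([B, -1, self.num_heads, self.head_dim]).transpose([0, 2, 1, 3])

        q, k, v = heads(q), heads(k), heads(v)
        attn = F.softmax(q @ k.transpose([0, 1, 3, 2]) / self.head_dim ** 0.5, axis=-1)
        f_G = (attn @ v).transpose([0, 2, 1, 3]).reshape([B, h * w, C])
        return f_L, f_G  # [B, C_l, H, W] feature map and [B, h*w, C_g] tokens
```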
<a name="1.2.2"></a>
#### 1.2.2 Inter-scale Alignment Module
Proper fusion of the dual-scale representations is crucial to the success of DSNet, since they capture two different views of an image. To this end, we propose a novel inter-scale alignment module based on co-attention. It captures the mutual correlations between all local-global token pairs and propagates information bidirectionally in a learnable, dynamic fashion, prompting the local features to adaptively explore their relationship with the global information and thereby become more representative and informative, and vice versa. Concretely, for the ${f}_ {L}, {f}_ {G}$ computed by the intra-scale propagation module, we compute their queries, keys and values as:
$$
{Q}_ {L} = {f}_ {L} {W}_ {Q}^{l}, {K}_ {L} = {f}_ {L} {W}_ {K}^{l}, {V}_ {L} = {f}_ {L} {W}_ {V}^{l},\\
{Q}_ {G} = {f}_ {G} {W}_ {Q}^{g}, {K}_ {G} = {f}_ {G} {W}_ {K}^{g}, {V}_ {G} = {f}_ {G} {W}_ {V}^{g},
$$
The attention weights from global features to local features and from local features to global features are then:
$$
{W}_ { G \rightarrow L} = \text{softmax} \left( \frac{ {Q}_ {L} {K}_ {G}^{T} }{ \sqrt{d} } \right), {W}_ { L \rightarrow G } = \text{softmax} \left( \frac{ {Q}_ {G} {K}_ {L}^{T} }{ \sqrt{d} } \right)
$$
yielding the mixed features:
$$
{h}_ {L} = {W}_ { G \rightarrow L } {V}_ {G}, {h}_ {G} = {W}_ { L \rightarrow G } {V}_ {L},
$$
This bidirectional information flow identifies cross-scale relationships between local and global tokens, through which the dual-scale features become highly aligned and mutually coupled. Afterwards, we upsample the low-resolution representation ${h}_ {G}$, concatenate it with the high-resolution ${h}_ {L}$, and apply a ${1} \times {1}$ convolution to achieve channel-wise dual-scale information fusion, as sketched below.
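Under the same caveats (an illustrative single-head sketch with assumed names and dimensions, not the code added by this commit), the inter-scale alignment and fusion could look as follows:

```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F


class InterScaleAlignment(nn.Layer):
    """Sketch: bidirectional co-attention between the local and global
    streams, then upsample + concat + 1x1 conv for channel-wise fusion."""

    def __init__(self, dim=64):
        super().__init__()
        self.q_l, self.k_l, self.v_l = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.q_g, self.k_g, self.v_g = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.fuse = nn.Conv2D(2 * dim, 2 * dim, 1)  # 1x1 conv fusion
        self.scale = dim ** -0.5

    def forward(self, f_L, f_G, global_hw):
        # f_L: [B, C, H, W] local feature map; f_G: [B, h*w, C] global tokens
        B, C, H, W = f_L.shape
        h, w = global_hw
        t_L = f_L.flatten(2).transpose([0, 2, 1])  # local tokens [B, H*W, C]
        Q_L, K_L, V_L = self.q_l(t_L), self.k_l(t_L), self.v_l(t_L)
        Q_G, K_G, V_G = self.q_g(f_G), self.k_g(f_G), self.v_g(f_G)
        # co-attention: each stream queries the other stream's keys/values
        w_g2l = F.softmax(Q_L @ K_G.transpose([0, 2, 1]) * self.scale, axis=-1)
        w_l2g = F.softmax(Q_G @ K_L.transpose([0, 2, 1]) * self.scale, axis=-1)
        h_L = (w_g2l @ V_G).transpose([0, 2, 1]).reshape([B, C, H, W])
        h_G = (w_l2g @ V_L).transpose([0, 2, 1]).reshape([B, C, h, w])
        # upsample the global mixture, concatenate, fuse channel-wise
        h_G = F.interpolate(h_G, size=[H, W], mode='bilinear')
        return self.fuse(paddle.concat([h_L, h_G], axis=1))  # [B, 2C, H, W]
```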
<a name="1.3"></a>
### 1.3 Experimental Results
| Models | Top1 | Top5 | Reference<br>top1 | Reference<br>top5 | FLOPs<br>(G) | Params<br>(M) |
| :---------: | :---: | :---: | :---------------: | :---------------: | :----------: | :-----------: |
| DSNet-tiny | 0.792 | 0.948 | 0.790 | - | 1.8 | 10.5 |
| DSNet-small | 0.814 | 0.954 | 0.823 | - | 3.5 | 23.0 |
| DSNet-base | 0.818 | 0.952 | 0.831 | - | 8.4 | 49.3 |
<a name="2"></a>
## 2. Quick Start
Once paddlepaddle and paddleclas are installed, you can quickly run prediction on images; refer to [ResNet50 Quick Start](./ResNet.md#2-模型快速体验) for details.
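For instance, a prediction through the paddleclas Python package might look like the sketch below; whether your installed paddleclas release registers `DSNet_tiny` as a built-in `model_name` is an assumption.

```python
import paddleclas

# load a classification model by name; "DSNet_tiny" is assumed to be registered
model = paddleclas.PaddleClas(model_name="DSNet_tiny")
# predict() takes an image path and returns a generator of batch results
result = model.predict(input_data="docs/images/inference_deployment/whl_demo.jpg")
print(next(result))
```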
<a name="3"></a>
## 3. Model Training, Evaluation and Prediction
This section covers setting up the training environment, preparing the ImageNet data, and training, evaluating and predicting with this model on ImageNet. Training configurations for the model are provided under `ppcls/configs/ImageNet/DSNet/`; for how to launch training, refer to [ResNet50 Model Training, Evaluation and Prediction](./ResNet.md#3-模型训练评估和预测).
**Note:** The DSNet series is configured for 8 GPUs by default, so 8 GPUs should be specified for training, e.g. `python3 -m paddle.distributed.launch --gpus="0,1,2,3,4,5,6,7" tools/train.py -c xxx.yaml`. When training with 4 GPUs instead, the default learning rate should be halved, and some loss of accuracy is possible.
<a name="4"></a>
## 4. Model Inference and Deployment
<a name="4.1"></a>
### 4.1 Preparing the Inference Model
Paddle Inference is PaddlePaddle's native inference library, targeting server-side and cloud deployment with high-performance inference. Compared with predicting directly from the pretrained model, Paddle Inference can accelerate prediction with MKLDNN, CUDNN and TensorRT for better inference performance. For more about the Paddle Inference engine, see the [Paddle Inference tutorial](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/infer/inference/inference_cn.html).
For how to obtain the inference model, refer to [ResNet50 Inference Model Preparation](./ResNet.md#41-推理模型准备).
<a name="4.2"></a>
### 4.2 Inference with the Python Prediction Engine
PaddleClas provides an example of inference with the Python prediction engine; refer to [ResNet50 Inference with the Python Prediction Engine](./ResNet.md#42-基于-python-预测引擎推理).
<a name="4.3"></a>
### 4.3 Inference with the C++ Prediction Engine
PaddleClas provides an example of inference with the C++ prediction engine; refer to [Server-side C++ Prediction](../../deployment/image_classification/cpp/linux.md) to complete the deployment. On Windows, refer to the [Visual Studio 2019 Community CMake Compilation Guide](../../deployment/image_classification/cpp/windows.md) to build the prediction library and run model prediction.
<a name="4.4"></a>
### 4.4 Service Deployment
Paddle Serving provides a high-performance, flexible and easy-to-use industrial-grade online inference service. It supports multiple protocols including RESTful, gRPC and bRPC, and offers inference solutions for a variety of heterogeneous hardware and operating systems. For more about Paddle Serving, see the [Paddle Serving repository](https://github.com/PaddlePaddle/Serving).
PaddleClas provides an example of deploying models as a service with Paddle Serving; refer to [Model Service Deployment](../../deployment/image_classification/paddle_serving.md) to complete the deployment.
<a name="4.5"></a>
### 4.5 On-Device Deployment
Paddle Lite is a high-performance, lightweight, flexible and easily extensible deep learning inference framework, designed to support multiple hardware platforms including mobile, embedded and server-side devices. For more about Paddle Lite, see the [Paddle Lite repository](https://github.com/PaddlePaddle/Paddle-Lite).
PaddleClas provides an example of on-device deployment with Paddle Lite; refer to [On-Device Deployment](../../deployment/image_classification/paddle_lite.md) to complete the deployment.
<a name="4.6"></a>
### 4.6 Paddle2ONNX Model Conversion and Prediction
Paddle2ONNX converts PaddlePaddle models to the ONNX format. Through ONNX, Paddle models can be deployed to a variety of inference engines, including TensorRT/OpenVINO/MNN/TNN/NCNN as well as other engines or hardware that support the open ONNX format. For more about Paddle2ONNX, see the [Paddle2ONNX repository](https://github.com/PaddlePaddle/Paddle2ONNX).
PaddleClas provides an example of converting an inference model to ONNX and running prediction with it via Paddle2ONNX; refer to [Paddle2ONNX Model Conversion and Prediction](../../deployment/image_classification/paddle2onnx.md) to complete the deployment.
<a name="5"></a>
## 5. Citation
If you use DSNet in your research, please cite:
```
@article{mao2021dual,
title={Dual-stream network for visual recognition},
author={Mao, Mingyuan and Zhang, Renrui and Zheng, Honghui and Ma, Teli and Peng, Yan and Ding, Errui and Zhang, Baochang and Han, Shumin and others},
journal={Advances in Neural Information Processing Systems},
volume={34},
pages={25346--25358},
year={2021}
}
```
[^1]: If the formulas do not render properly, open [this link](https://chrome.google.com/webstore/detail/mathjax-plugin-for-github/ioemnmodlmafdkllaclgeombjnmnbima/related) in Google Chrome to install the MathJax Plugin for GitHub, then reopen and refresh this page in Chrome.
@@ -51,6 +51,7 @@
- [TNT Series](#TNT)
- [NextViT Series](#NextViT)
- [UniFormer Series](#UniFormer)
- [DSNet Series](#DSNet)
- [4.2 Lightweight Models](#Transformer_lite)
- [MobileViT Series](#MobileViT)
- [5. References](#reference)
@@ -762,6 +763,17 @@ Accuracy and speed metrics of the DeiT (Data-efficient Image Transformers) series models
| UniFormer_base | 0.8376 | 0.9672 | - | - | - | 7.77 | 49.78 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/UniFormer_base_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/UniFormer_base_infer.tar) |
| UniFormer_base_ls | 0.8398 | 0.9675 | - | - | - | 7.77 | 49.78 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/UniFormer_base_ls_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/UniFormer_base_ls_infer.tar) |
<a name="DSNet"></a>
## DSNet Series <sup>[[49](#ref49)]</sup>

The accuracy and speed metrics of the DSNet series models are shown in the table below. For more details, see the [DSNet series model documentation](DSNet.md).

| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | time(ms)<br/>bs=8 | FLOPs(G) | Params(M) | Pretrained model download link | Inference model download link |
| ----------- | --------- | --------- | ---------------- | ---------------- | ----------------- | -------- | --------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| DSNet_tiny | 0.7919 | 0.9476 | - | - | - | 1.8 | 10.5 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DSNet_tiny_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/DSNet_tiny_infer.tar) |
| DSNet_small | 0.8137 | 0.9544 | - | - | - | 3.5 | 23.0 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DSNet_small_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/DSNet_small_infer.tar) |
| DSNet_base | 0.8175 | 0.9522 | - | - | - | 8.4 | 49.3 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DSNet_base_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/DSNet_base_infer.tar) |
<a name="Transformer_lite"></a>
@@ -879,3 +891,5 @@ TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE.
<a name="ref47">[47]</a>Jiashi Li, Xin Xia, Wei Li, Huixia Li, Xing Wang, Xuefeng Xiao, Rui Wang, Min Zheng, Xin Pan. Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios.
<a name="ref48">[48]</a>Kunchang Li, Yali Wang, Junhao Zhang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, Yu Qiao. UniFormer: Unifying Convolution and Self-attention for Visual Recognition
<a name="ref49">[49]</a>Mingyuan Mao, Renrui Zhang, Honghui Zheng, Peng Gao, Teli Ma, Yan Peng, Errui Ding, Baochang Zhang, Shumin Han. Dual-stream Network for Visual Recognition.
@@ -35,6 +35,7 @@ from .model_zoo.se_resnet_vd import SE_ResNet18_vd, SE_ResNet34_vd, SE_ResNet50_
from .model_zoo.se_resnext_vd import SE_ResNeXt50_vd_32x4d, SE_ResNeXt50_vd_32x4d, SENet154_vd
from .model_zoo.se_resnext import SE_ResNeXt50_32x4d, SE_ResNeXt101_32x4d, SE_ResNeXt152_64x4d
from .model_zoo.dpn import DPN68, DPN92, DPN98, DPN107, DPN131
from .model_zoo.dsnet import DSNet_tiny_patch16_224, DSNet_small_patch16_224, DSNet_base_patch16_224
from .model_zoo.densenet import DenseNet121, DenseNet161, DenseNet169, DenseNet201, DenseNet264
from .model_zoo.efficientnet import EfficientNetB0, EfficientNetB1, EfficientNetB2, EfficientNetB3, EfficientNetB4, EfficientNetB5, EfficientNetB6, EfficientNetB7, EfficientNetB0_small
from .model_zoo.resnest import ResNeSt50_fast_1s1x64d, ResNeSt50, ResNeSt101, ResNeSt200, ResNeSt269
This diff is collapsed.
# global configs
Global:
checkpoints: null
pretrained_model: null
output_dir: ./output/
device: gpu
save_interval: 1
eval_during_train: True
eval_interval: 1
epochs: 300
print_batch_step: 10
use_visualdl: False
# used for static mode and model export
image_shape: [3, 224, 224]
save_inference_dir: ./inference
# training model under @to_static
to_static: False
# model architecture
Arch:
name: DSNet_base_patch16_224
class_num: 1000
# loss function config for training/eval process
Loss:
Train:
- CELoss:
weight: 1.0
epsilon: 0.1
Eval:
- CELoss:
weight: 1.0
Optimizer:
name: AdamW
beta1: 0.9
beta2: 0.999
epsilon: 1e-8
weight_decay: 0.05
no_weight_decay_name: norm cls_token pos_embed dist_token
one_dim_param_no_weight_decay: True
lr:
name: Cosine
learning_rate: 1e-3
eta_min: 1e-5
warmup_epoch: 5
warmup_start_lr: 1e-6
# data loader for train and eval
DataLoader:
Train:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/train_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
interpolation: bicubic
backend: pil
- RandFlipImage:
flip_code: 1
- TimmAutoAugment:
config_str: rand-m9-mstd0.5-inc1
interpolation: bicubic
img_size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- RandomErasing:
EPSILON: 0.25
sl: 0.02
sh: 1.0/3.0
r1: 0.3
attempt: 10
use_log_aspect: True
mode: pixel
batch_transform_ops:
- OpSampler:
MixupOperator:
alpha: 0.8
prob: 0.5
CutmixOperator:
alpha: 1.0
prob: 0.5
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: True
loader:
num_workers: 4
use_shared_memory: True
Eval:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/val_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: False
loader:
num_workers: 4
use_shared_memory: True
Infer:
infer_imgs: docs/images/inference_deployment/whl_demo.jpg
batch_size: 10
transforms:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- ToCHWImage:
PostProcess:
name: Topk
topk: 5
class_id_map_file: ppcls/utils/imagenet1k_label_list.txt
Metric:
Eval:
- TopkAcc:
topk: [1, 5]
# global configs
Global:
checkpoints: null
pretrained_model: null
output_dir: ./output/
device: gpu
save_interval: 1
eval_during_train: True
eval_interval: 1
epochs: 300
print_batch_step: 10
use_visualdl: False
# used for static mode and model export
image_shape: [3, 224, 224]
save_inference_dir: ./inference
# training model under @to_static
to_static: False
# model architecture
Arch:
name: DSNet_small_patch16_224
class_num: 1000
# loss function config for training/eval process
Loss:
Train:
- CELoss:
weight: 1.0
epsilon: 0.1
Eval:
- CELoss:
weight: 1.0
Optimizer:
name: AdamW
beta1: 0.9
beta2: 0.999
epsilon: 1e-8
weight_decay: 0.05
no_weight_decay_name: norm cls_token pos_embed dist_token
one_dim_param_no_weight_decay: True
lr:
name: Cosine
learning_rate: 1e-3
eta_min: 1e-5
warmup_epoch: 5
warmup_start_lr: 1e-6
# data loader for train and eval
DataLoader:
Train:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/train_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
interpolation: bicubic
backend: pil
- RandFlipImage:
flip_code: 1
- TimmAutoAugment:
config_str: rand-m9-mstd0.5-inc1
interpolation: bicubic
img_size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- RandomErasing:
EPSILON: 0.25
sl: 0.02
sh: 1.0/3.0
r1: 0.3
attempt: 10
use_log_aspect: True
mode: pixel
batch_transform_ops:
- OpSampler:
MixupOperator:
alpha: 0.8
prob: 0.5
CutmixOperator:
alpha: 1.0
prob: 0.5
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: True
loader:
num_workers: 4
use_shared_memory: True
Eval:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/val_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: False
loader:
num_workers: 4
use_shared_memory: True
Infer:
infer_imgs: docs/images/inference_deployment/whl_demo.jpg
batch_size: 10
transforms:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- ToCHWImage:
PostProcess:
name: Topk
topk: 5
class_id_map_file: ppcls/utils/imagenet1k_label_list.txt
Metric:
Eval:
- TopkAcc:
topk: [1, 5]
# global configs
Global:
checkpoints: null
pretrained_model: null
output_dir: ./output/
device: gpu
save_interval: 1
eval_during_train: True
eval_interval: 1
epochs: 300
print_batch_step: 10
use_visualdl: False
# used for static mode and model export
image_shape: [3, 224, 224]
save_inference_dir: ./inference
# training model under @to_static
to_static: False
# model architecture
Arch:
name: DSNet_tiny_patch16_224
class_num: 1000
# loss function config for training/eval process
Loss:
Train:
- CELoss:
weight: 1.0
epsilon: 0.1
Eval:
- CELoss:
weight: 1.0
Optimizer:
name: AdamW
beta1: 0.9
beta2: 0.999
epsilon: 1e-8
weight_decay: 0.05
no_weight_decay_name: norm cls_token pos_embed dist_token
one_dim_param_no_weight_decay: True
lr:
name: Cosine
learning_rate: 1e-3
eta_min: 1e-5
warmup_epoch: 5
warmup_start_lr: 1e-6
# data loader for train and eval
DataLoader:
Train:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/train_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
interpolation: bicubic
backend: pil
- RandFlipImage:
flip_code: 1
- TimmAutoAugment:
config_str: rand-m9-mstd0.5-inc1
interpolation: bicubic
img_size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- RandomErasing:
EPSILON: 0.25
sl: 0.02
sh: 1.0/3.0
r1: 0.3
attempt: 10
use_log_aspect: True
mode: pixel
batch_transform_ops:
- OpSampler:
MixupOperator:
alpha: 0.8
prob: 0.5
CutmixOperator:
alpha: 1.0
prob: 0.5
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: True
loader:
num_workers: 8
use_shared_memory: True
Eval:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/val_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: False
loader:
num_workers: 4
use_shared_memory: True
Infer:
infer_imgs: docs/images/inference_deployment/whl_demo.jpg
batch_size: 10
transforms:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- ToCHWImage:
PostProcess:
name: Topk
topk: 5
class_id_map_file: ppcls/utils/imagenet1k_label_list.txt
Metric:
Eval:
- TopkAcc:
topk: [1, 5]
===========================train_params===========================
model_name:DSNet_base_patch16_224
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
-o Global.auto_cast:null
-o Global.epochs:lite_train_lite_infer=2|whole_train_whole_infer=120
-o Global.output_dir:./output/
-o DataLoader.Train.sampler.batch_size:8
-o Global.pretrained_model:null
train_model_name:latest
train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
norm_train:tools/train.py -c ppcls/configs/ImageNet/DSNet/DSNet_base_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2 -o Arch.pretrained=False
pact_train:null
fpgm_train:null
distill_train:null
to_static_train:-o Global.to_static=True
null:null
##
===========================eval_params===========================
eval:tools/eval.py -c ppcls/configs/ImageNet/DSNet/DSNet_base_patch16_224.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
norm_export:tools/export_model.py -c ppcls/configs/ImageNet/DSNet/DSNet_base_patch16_224.yaml
quant_export:null
fpgm_export:null
distill_export:null
kl_quant:null
export2:null
pretrained_model_url:null
infer_model:../inference/
infer_export:True
infer_quant:False
inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=248
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:False
-o Global.cpu_num_threads:1
-o Global.batch_size:1
-o Global.use_tensorrt:False
-o Global.use_fp16:False
-o Global.inference_model_dir:../inference
-o Global.infer_imgs:../dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
-o Global.save_log_path:null
-o Global.benchmark:False
null:null
null:null
===========================train_benchmark_params==========================
batch_size:128
fp_items:fp32
epoch:1
--profiler_options:batch_range=[10,20];state=GPU;tracer_option=Default;profile_path=model.profile
flags:FLAGS_eager_delete_tensor_gb=0.0;FLAGS_fraction_of_gpu_memory_to_use=0.98;FLAGS_conv_workspace_size_limit=4096
===========================infer_benchmark_params==========================
random_infer_input:[{float32,[3,224,224]}]
===========================train_params===========================
model_name:DSNet_small_patch16_224
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
-o Global.auto_cast:null
-o Global.epochs:lite_train_lite_infer=2|whole_train_whole_infer=120
-o Global.output_dir:./output/
-o DataLoader.Train.sampler.batch_size:8
-o Global.pretrained_model:null
train_model_name:latest
train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
norm_train:tools/train.py -c ppcls/configs/ImageNet/DSNet/DSNet_small_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2 -o Arch.pretrained=False
pact_train:null
fpgm_train:null
distill_train:null
to_static_train:-o Global.to_static=True
null:null
##
===========================eval_params===========================
eval:tools/eval.py -c ppcls/configs/ImageNet/DSNet/DSNet_small_patch16_224.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
norm_export:tools/export_model.py -c ppcls/configs/ImageNet/DSNet/DSNet_small_patch16_224.yaml
quant_export:null
fpgm_export:null
distill_export:null
kl_quant:null
export2:null
pretrained_model_url:null
infer_model:../inference/
infer_export:True
infer_quant:False
inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=248
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:False
-o Global.cpu_num_threads:1
-o Global.batch_size:1
-o Global.use_tensorrt:False
-o Global.use_fp16:False
-o Global.inference_model_dir:../inference
-o Global.infer_imgs:../dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
-o Global.save_log_path:null
-o Global.benchmark:False
null:null
null:null
===========================train_benchmark_params==========================
batch_size:128
fp_items:fp32
epoch:1
--profiler_options:batch_range=[10,20];state=GPU;tracer_option=Default;profile_path=model.profile
flags:FLAGS_eager_delete_tensor_gb=0.0;FLAGS_fraction_of_gpu_memory_to_use=0.98;FLAGS_conv_workspace_size_limit=4096
===========================infer_benchmark_params==========================
random_infer_input:[{float32,[3,224,224]}]
===========================train_params===========================
model_name:DSNet_tiny_patch16_224
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
-o Global.auto_cast:null
-o Global.epochs:lite_train_lite_infer=2|whole_train_whole_infer=120
-o Global.output_dir:./output/
-o DataLoader.Train.sampler.batch_size:8
-o Global.pretrained_model:null
train_model_name:latest
train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
norm_train:tools/train.py -c ppcls/configs/ImageNet/DSNet/DSNet_tiny_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2 -o Arch.pretrained=False
pact_train:null
fpgm_train:null
distill_train:null
to_static_train:-o Global.to_static=True
null:null
##
===========================eval_params===========================
eval:tools/eval.py -c ppcls/configs/ImageNet/DSNet/DSNet_tiny_patch16_224.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
norm_export:tools/export_model.py -c ppcls/configs/ImageNet/DSNet/DSNet_tiny_patch16_224.yaml
quant_export:null
fpgm_export:null
distill_export:null
kl_quant:null
export2:null
pretrained_model_url:null
infer_model:../inference/
infer_export:True
infer_quant:False
inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=248
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:False
-o Global.cpu_num_threads:1
-o Global.batch_size:1
-o Global.use_tensorrt:False
-o Global.use_fp16:False
-o Global.inference_model_dir:../inference
-o Global.infer_imgs:../dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
-o Global.save_log_path:null
-o Global.benchmark:False
null:null
null:null
===========================train_benchmark_params==========================
batch_size:128
fp_items:fp32
epoch:1
--profiler_options:batch_range=[10,20];state=GPU;tracer_option=Default;profile_path=model.profile
flags:FLAGS_eager_delete_tensor_gb=0.0;FLAGS_fraction_of_gpu_memory_to_use=0.98;FLAGS_conv_workspace_size_limit=4096
===========================infer_benchmark_params==========================
random_infer_input:[{float32,[3,224,224]}]