diff --git a/docs/zh_CN/models/ImageNet1k/MobileViTv3.md b/docs/zh_CN/models/ImageNet1k/MobileViTV3.md
similarity index 79%
rename from docs/zh_CN/models/ImageNet1k/MobileViTv3.md
rename to docs/zh_CN/models/ImageNet1k/MobileViTV3.md
index 179a35dcc20b012b0a1d543010f7a488e7772e16..604af7e07d494be1920a2792dea82966df59ec97 100644
--- a/docs/zh_CN/models/ImageNet1k/MobileViTv3.md
+++ b/docs/zh_CN/models/ImageNet1k/MobileViTV3.md
@@ -1,4 +1,4 @@
-# MobileviTv3
+# MobileViTV3
-----
## Contents
@@ -24,8 +24,8 @@
### 1.1 Model Introduction
-MobileViTv3 is a lightweight model that combines CNNs and ViTs for mobile vision tasks. Its MobileViTv3 block resolves the scaling issues of MobileViTv1 and simplifies the learning task, yielding the MobileViTv3-XXS, XS, and S models, which outperform MobileViTv1 on the ImageNet-1k, ADE20K, COCO, and PascalVOC2012 datasets.
-Adding the proposed fusion block to MobileViTv2 creates the MobileViTv3-0.5, 0.75, and 1.0 models, which achieve better accuracy than MobileViTv2 on the ImageNet-1k, ADE20K, COCO, and PascalVOC2012 datasets. [Paper link](https://arxiv.org/abs/2209.15159).
+MobileViTV3 is a lightweight model that combines CNNs and ViTs for mobile vision tasks. Its MobileViTV3 block resolves the scaling issues of MobileViTV1 and simplifies the learning task, yielding the MobileViTV3-XXS, XS, and S models, which outperform MobileViTV1 on the ImageNet-1k, ADE20K, COCO, and PascalVOC2012 datasets.
+Adding the proposed fusion block to MobileViTV2 creates the MobileViTV3_x0_5, MobileViTV3_x0_75, and MobileViTV3_x1_0 models, which achieve better accuracy than MobileViTV2 on the ImageNet-1k, ADE20K, COCO, and PascalVOC2012 datasets. [Paper link](https://arxiv.org/abs/2209.15159).
@@ -33,15 +33,15 @@ MobileViTv3 is a lightweight model that combines CNNs and ViTs for mobile vision
| Models | Top1 | Top5 | Reference<br>top1 | Reference<br>top5 | FLOPs<br>(M) | Params<br>(M) |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
-| MobileViTv3_XXS | 0.7087 | 0.8976 | 0.7098 | - | 289.02 | 1.25 |
-| MobileViTv3_XS | 0.7663 | 0.9332 | 0.7671 | - | 926.98 | 2.49 |
-| MobileViTv3_S | 0.7928 | 0.9454 | 0.7930 | - | 1841.39 | 5.76 |
-| MobileViTv3_XXS_L2 | 0.7028 | 0.8942 | 0.7023 | - | 256.97 | 1.15 |
-| MobileViTv3_XS_L2 | 0.7607 | 0.9300 | 0.7610 | - | 852.82 | 2.26 |
-| MobileViTv3_S_L2 | 0.7907 | 0.9440 | 0.7906 | - | 1651.96 | 5.17 |
-| MobileViTv3_x0_5 | 0.7200 | 0.9083 | 0.7233 | - | 481.33 | 1.43 |
-| MobileViTv3_x0_75 | 0.7626 | 0.9308 | 0.7655 | - | 1064.48 | 3.00 |
-| MobileViTv3_x1_0 | 0.7838 | 0.9421 | 0.7864 | - | 1875.96 | 5.14 |
+| MobileViTV3_XXS | 0.7087 | 0.8976 | 0.7098 | - | 289.02 | 1.25 |
+| MobileViTV3_XS | 0.7663 | 0.9332 | 0.7671 | - | 926.98 | 2.49 |
+| MobileViTV3_S | 0.7928 | 0.9454 | 0.7930 | - | 1841.39 | 5.76 |
+| MobileViTV3_XXS_L2 | 0.7028 | 0.8942 | 0.7023 | - | 256.97 | 1.15 |
+| MobileViTV3_XS_L2 | 0.7607 | 0.9300 | 0.7610 | - | 852.82 | 2.26 |
+| MobileViTV3_S_L2 | 0.7907 | 0.9440 | 0.7906 | - | 1651.96 | 5.17 |
+| MobileViTV3_x0_5 | 0.7200 | 0.9083 | 0.7233 | - | 481.33 | 1.43 |
+| MobileViTV3_x0_75 | 0.7626 | 0.9308 | 0.7655 | - | 1064.48 | 3.00 |
+| MobileViTV3_x1_0 | 0.7838 | 0.9421 | 0.7864 | - | 1875.96 | 5.14 |
**Note:** The pretrained weights provided by PaddleClas for this series of models are all converted from the officially released weights.
@@ -55,7 +55,7 @@ MobileViTv3 is a lightweight model that combines CNNs and ViTs for mobile vision
## 3. Model Training, Evaluation, and Prediction
-This section covers setting up the training environment, preparing the ImageNet data, and training, evaluating, and predicting with this model on ImageNet. The training configurations for this model are provided in `ppcls/configs/ImageNet/MobileViTv3/`; for how to launch training, refer to [ResNet50 model training, evaluation, and prediction](./ResNet.md#3-模型训练评估和预测).
+This section covers setting up the training environment, preparing the ImageNet data, and training, evaluating, and predicting with this model on ImageNet. The training configurations for this model are provided in `ppcls/configs/ImageNet/MobileViTV3/`; for how to launch training, refer to [ResNet50 model training, evaluation, and prediction](./ResNet.md#3-模型训练评估和预测).
**Note:** The MobileViT series models use 8 GPUs by default, so 8 GPUs need to be specified at training time, e.g. `python3 -m paddle.distributed.launch --gpus="0,1,2,3,4,5,6,7" tools/train.py -c xxx.yaml`. If training with 4 GPUs instead, the default learning rate should be halved, and accuracy may drop slightly; see the sketch below.
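+
+For example, a minimal sketch of a 4-GPU launch with the base learning rate halved via the `-o` override (this assumes the default `Optimizer.lr.learning_rate` of 0.002 used in the MobileViTV3_XXS_L2 config; check the `Optimizer.lr` section of the target config before scaling):
+
+```bash
+python3 -m paddle.distributed.launch --gpus="0,1,2,3" tools/train.py \
+    -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS_L2.yaml \
+    -o Optimizer.lr.learning_rate=0.001
+```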
diff --git a/ppcls/arch/backbone/__init__.py b/ppcls/arch/backbone/__init__.py
index 3248541aedad642b170b26695c06f42582f90c70..4acb57beb832e2b295986fad007b5957283405eb 100644
--- a/ppcls/arch/backbone/__init__.py
+++ b/ppcls/arch/backbone/__init__.py
@@ -79,8 +79,8 @@ from .model_zoo.cvt import CvT_13_224, CvT_13_384, CvT_21_224, CvT_21_384, CvT_W
from .model_zoo.micronet import MicroNet_M0, MicroNet_M1, MicroNet_M2, MicroNet_M3
from .model_zoo.mobilenext import MobileNeXt_x0_35, MobileNeXt_x0_5, MobileNeXt_x0_75, MobileNeXt_x1_0, MobileNeXt_x1_4
from .model_zoo.mobilevit_v2 import MobileViTV2_x0_5, MobileViTV2_x0_75, MobileViTV2_x1_0, MobileViTV2_x1_25, MobileViTV2_x1_5, MobileViTV2_x1_75, MobileViTV2_x2_0
-from .model_zoo.mobilevit_v3 import MobileViTv3_XXS, MobileViTv3_XS, MobileViTv3_S, MobileViTv3_XXS_L2, MobileViTv3_XS_L2, MobileViTv3_S_L2, MobileViTv3_x0_5, MobileViTv3_x0_75, MobileViTv3_x1_0
from .model_zoo.tinynet import TinyNet_A, TinyNet_B, TinyNet_C, TinyNet_D, TinyNet_E
+from .model_zoo.mobilevit_v3 import MobileViTV3_XXS, MobileViTV3_XS, MobileViTV3_S, MobileViTV3_XXS_L2, MobileViTV3_XS_L2, MobileViTV3_S_L2, MobileViTV3_x0_5, MobileViTV3_x0_75, MobileViTV3_x1_0
from .variant_models.resnet_variant import ResNet50_last_stage_stride1
from .variant_models.resnet_variant import ResNet50_adaptive_max_pool2d
diff --git a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S.yaml b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S.yaml
similarity index 99%
rename from ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S.yaml
rename to ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S.yaml
index 788278e753a497543e98b7434063d72757fb0d5f..08c81bd7c31b28ac5f776933f7e0573911afb803 100644
--- a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S.yaml
+++ b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S.yaml
@@ -28,7 +28,7 @@ EMA:
# model architecture
Arch:
- name: MobileViTv3_S
+ name: MobileViTV3_S
class_num: 1000
dropout: 0.1
diff --git a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S_L2.yaml b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S_L2.yaml
similarity index 99%
rename from ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S_L2.yaml
rename to ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S_L2.yaml
index da83341f841796e488df73e18dfe09793506cfe9..544f4e903bdc1da2cb49ee9efd8182e2d5d787a2 100644
--- a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S_L2.yaml
+++ b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S_L2.yaml
@@ -28,7 +28,7 @@ EMA:
# model architecture
Arch:
- name: MobileViTv3_S_L2
+ name: MobileViTV3_S_L2
class_num: 1000
dropout: 0.1
diff --git a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS.yaml b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS.yaml
similarity index 99%
rename from ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS.yaml
rename to ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS.yaml
index ceb775d16afb6aaa4ff915f03d2887989a0cc84c..21b30619a8ac510cc0413cea79b445f011b9ad5f 100644
--- a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS.yaml
+++ b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS.yaml
@@ -28,7 +28,7 @@ EMA:
# model architecture
Arch:
- name: MobileViTv3_XS
+ name: MobileViTV3_XS
class_num: 1000
dropout: 0.1
diff --git a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS_L2.yaml b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS_L2.yaml
similarity index 99%
rename from ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS_L2.yaml
rename to ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS_L2.yaml
index 70f4bc324de19accbeed22cdcf956847697da921..44cd42da734dc11a18cabbafeccba5fada015c1a 100644
--- a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS_L2.yaml
+++ b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS_L2.yaml
@@ -28,7 +28,7 @@ EMA:
# model architecture
Arch:
- name: MobileViTv3_XS_L2
+ name: MobileViTV3_XS_L2
class_num: 1000
dropout: 0.1
diff --git a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS.yaml b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS.yaml
similarity index 99%
rename from ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS.yaml
rename to ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS.yaml
index 271b2bbcd4b4ddf4fbec82e80307e5db15624254..bbf1c7ace450afd46980d835fb3f139ba52a6a73 100644
--- a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS.yaml
+++ b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS.yaml
@@ -28,7 +28,7 @@ EMA:
# model architecture
Arch:
- name: MobileViTv3_XXS
+ name: MobileViTV3_XXS
class_num: 1000
dropout: 0.05
diff --git a/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS_L2.yaml b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS_L2.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..840afc5d59bb0b4c3b47ace81e10d28d731e7dc1
--- /dev/null
+++ b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS_L2.yaml
@@ -0,0 +1,150 @@
+# global configs
+Global:
+ checkpoints: null
+ pretrained_model: null
+ output_dir: ./output/
+ device: gpu
+ save_interval: 1
+ eval_during_train: True
+ eval_interval: 1
+ epochs: 300
+ print_batch_step: 10
+ use_visualdl: False
+ # used for static mode and model export
+ image_shape: [3, 256, 256]
+ save_inference_dir: ./inference
+ use_dali: False
+
+# mixed precision training
+AMP:
+ scale_loss: 65536
+ use_dynamic_loss_scaling: True
+ # O1: mixed fp16
+ level: O1
+
+# model ema
+EMA:
+ decay: 0.9995
+
+# model architecture
+Arch:
+ name: MobileViTV3_XXS_L2
+ class_num: 1000
+ dropout: 0.1
+
+# loss function config for training/eval process
+Loss:
+ Train:
+ - CELoss:
+ weight: 1.0
+ epsilon: 0.1
+ Eval:
+ - CELoss:
+ weight: 1.0
+
+Optimizer:
+ name: AdamW
+ beta1: 0.9
+ beta2: 0.999
+ epsilon: 1e-8
+ weight_decay: 0.01
+ lr:
+ name: Cosine
+ learning_rate: 0.002 # for total batch size 384
+ eta_min: 0.0002
+ warmup_epoch: 1 # 3000 iterations
+ warmup_start_lr: 0.0002
+
+# data loader for train and eval
+DataLoader:
+ Train:
+ dataset:
+ name: MultiScaleDataset
+ image_root: ./dataset/ILSVRC2012/
+ cls_label_path: ./dataset/ILSVRC2012/train_list.txt
+ transform_ops:
+ - DecodeImage:
+ to_rgb: True
+ channel_first: False
+ - RandCropImage:
+ size: 256
+ interpolation: bilinear
+ use_log_aspect: True
+ - RandFlipImage:
+ flip_code: 1
+ - NormalizeImage:
+ scale: 1.0/255.0
+ mean: [0.0, 0.0, 0.0]
+ std: [1.0, 1.0, 1.0]
+ order: ''
+ # support to specify width and height respectively:
+ # scales: [(256,256) (160,160), (192,192), (224,224) (288,288) (320,320)]
+ sampler:
+ name: MultiScaleSampler
+ scales: [256, 160, 192, 224, 288, 320]
+ # first_bs: batch size for the first image resolution in the scales list
+      # divided_factor: to ensure the width and height dimensions can be divided by the downsampling multiple
+ first_bs: 48
+ divided_factor: 32
+ is_training: True
+ loader:
+ num_workers: 4
+ use_shared_memory: True
+ Eval:
+ dataset:
+ name: ImageNetDataset
+ image_root: ./dataset/ILSVRC2012/
+ cls_label_path: ./dataset/ILSVRC2012/val_list.txt
+ transform_ops:
+ - DecodeImage:
+ to_rgb: True
+ channel_first: False
+ - ResizeImage:
+ resize_short: 288
+ interpolation: bilinear
+ - CropImage:
+ size: 256
+ - NormalizeImage:
+ scale: 1.0/255.0
+ mean: [0.0, 0.0, 0.0]
+ std: [1.0, 1.0, 1.0]
+ order: ''
+ sampler:
+ name: DistributedBatchSampler
+ batch_size: 48
+ drop_last: False
+ shuffle: False
+ loader:
+ num_workers: 4
+ use_shared_memory: True
+
+Infer:
+ infer_imgs: docs/images/inference_deployment/whl_demo.jpg
+ batch_size: 10
+ transforms:
+ - DecodeImage:
+ to_rgb: True
+ channel_first: False
+ - ResizeImage:
+ resize_short: 288
+ interpolation: bilinear
+ - CropImage:
+ size: 256
+ - NormalizeImage:
+ scale: 1.0/255.0
+ mean: [0.0, 0.0, 0.0]
+ std: [1.0, 1.0, 1.0]
+ order: ''
+ - ToCHWImage:
+ PostProcess:
+ name: Topk
+ topk: 5
+ class_id_map_file: ppcls/utils/imagenet1k_label_list.txt
+
+Metric:
+ Train:
+ - TopkAcc:
+ topk: [1, 5]
+ Eval:
+ - TopkAcc:
+ topk: [1, 5]
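+
+# Usage sketch (assuming the standard PaddleClas entry points used elsewhere in this repo):
+#   train (8 GPUs): python3 -m paddle.distributed.launch --gpus="0,1,2,3,4,5,6,7" tools/train.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS_L2.yaml
+#   evaluate:       python3 tools/eval.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS_L2.yaml -o Global.pretrained_model=<path to trained weights>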
diff --git a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_5.yaml b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_5.yaml
similarity index 99%
rename from ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_5.yaml
rename to ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_5.yaml
index fbdc5652fb5d43999b6e26d2a2f8d29dc754e85b..99544b39cbbcebe5edb94533647e865af150c8e2 100644
--- a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_5.yaml
+++ b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_5.yaml
@@ -28,7 +28,7 @@ EMA:
# model architecture
Arch:
- name: MobileViTv3_x0_5
+ name: MobileViTV3_x0_5
class_num: 1000
classifier_dropout: 0.
diff --git a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_75.yaml b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_75.yaml
similarity index 99%
rename from ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_75.yaml
rename to ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_75.yaml
index 167b1059247585db240af3f1a5a2b3cf5dd17292..405e8f6a21ab291770e5776493e84ba2bd3e1b74 100644
--- a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_75.yaml
+++ b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_75.yaml
@@ -28,7 +28,7 @@ EMA:
# model architecture
Arch:
- name: MobileViTv3_x0_75
+ name: MobileViTV3_x0_75
class_num: 1000
classifier_dropout: 0.
diff --git a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x1_0.yaml b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x1_0.yaml
similarity index 99%
rename from ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x1_0.yaml
rename to ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x1_0.yaml
index e4c4cc9e694e03ac04fd70488b75948d50db9fa7..a1b101e4cc86c1a3b9017edc23fd48c503d0ca16 100644
--- a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x1_0.yaml
+++ b/ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x1_0.yaml
@@ -28,7 +28,7 @@ EMA:
# model architecture
Arch:
- name: MobileViTv3_x1_0
+ name: MobileViTV3_x1_0
class_num: 1000
classifier_dropout: 0.
diff --git a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS_L2.yaml b/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS_L2.yaml
deleted file mode 100644
index d01b13646ae24cf59f5c4c4f1880d5c7db1bb2fb..0000000000000000000000000000000000000000
--- a/ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS_L2.yaml
+++ /dev/null
@@ -1,150 +0,0 @@
-# global configs
-Global:
- checkpoints: null
- pretrained_model: null
- output_dir: ./output/
- device: gpu
- save_interval: 1
- eval_during_train: True
- eval_interval: 1
- epochs: 300
- print_batch_step: 10
- use_visualdl: False
- # used for static mode and model export
- image_shape: [3, 256, 256]
- save_inference_dir: ./inference
- use_dali: False
-
-# mixed precision training
-AMP:
- scale_loss: 65536
- use_dynamic_loss_scaling: True
- # O1: mixed fp16
- level: O1
-
-# model ema
-EMA:
- decay: 0.9995
-
-# model architecture
-Arch:
- name: MobileViTv3_XXS_L2
- class_num: 1000
- dropout: 0.1
-
-# loss function config for traing/eval process
-Loss:
- Train:
- - CELoss:
- weight: 1.0
- epsilon: 0.1
- Eval:
- - CELoss:
- weight: 1.0
-
-Optimizer:
- name: AdamW
- beta1: 0.9
- beta2: 0.999
- epsilon: 1e-8
- weight_decay: 0.01
- lr:
- name: Cosine
- learning_rate: 0.002 # for total batch size 384
- eta_min: 0.0002
- warmup_epoch: 1 # 3000 iterations
- warmup_start_lr: 0.0002
-
-# data loader for train and eval
-DataLoader:
- Train:
- dataset:
- name: MultiScaleDataset
- image_root: ./dataset/ILSVRC2012/
- cls_label_path: ./dataset/ILSVRC2012/train_list.txt
- transform_ops:
- - DecodeImage:
- to_rgb: True
- channel_first: False
- - RandCropImage:
- size: 256
- interpolation: bilinear
- use_log_aspect: True
- - RandFlipImage:
- flip_code: 1
- - NormalizeImage:
- scale: 1.0/255.0
- mean: [0.0, 0.0, 0.0]
- std: [1.0, 1.0, 1.0]
- order: ''
- # support to specify width and height respectively:
- # scales: [(256,256) (160,160), (192,192), (224,224) (288,288) (320,320)]
- sampler:
- name: MultiScaleSampler
- scales: [256, 160, 192, 224, 288, 320]
- # first_bs: batch size for the first image resolution in the scales list
- # divide_factor: to ensure the width and height dimensions can be devided by downsampling multiple
- first_bs: 48
- divided_factor: 32
- is_training: True
- loader:
- num_workers: 4
- use_shared_memory: True
- Eval:
- dataset:
- name: ImageNetDataset
- image_root: ./dataset/ILSVRC2012/
- cls_label_path: ./dataset/ILSVRC2012/val_list.txt
- transform_ops:
- - DecodeImage:
- to_rgb: True
- channel_first: False
- - ResizeImage:
- resize_short: 288
- interpolation: bilinear
- - CropImage:
- size: 256
- - NormalizeImage:
- scale: 1.0/255.0
- mean: [0.0, 0.0, 0.0]
- std: [1.0, 1.0, 1.0]
- order: ''
- sampler:
- name: DistributedBatchSampler
- batch_size: 48
- drop_last: False
- shuffle: False
- loader:
- num_workers: 4
- use_shared_memory: True
-
-Infer:
- infer_imgs: docs/images/inference_deployment/whl_demo.jpg
- batch_size: 10
- transforms:
- - DecodeImage:
- to_rgb: True
- channel_first: False
- - ResizeImage:
- resize_short: 288
- interpolation: bilinear
- - CropImage:
- size: 256
- - NormalizeImage:
- scale: 1.0/255.0
- mean: [0.0, 0.0, 0.0]
- std: [1.0, 1.0, 1.0]
- order: ''
- - ToCHWImage:
- PostProcess:
- name: Topk
- topk: 5
- class_id_map_file: ppcls/utils/imagenet1k_label_list.txt
-
-Metric:
- Train:
- - TopkAcc:
- topk: [1, 5]
- Eval:
- - TopkAcc:
- topk: [1, 5]
diff --git a/test_tipc/configs/MobileViTv3/MobileViTv3_S_L2_train_infer_python.txt b/test_tipc/configs/MobileViTv3/MobileViTV3_S_L2_train_infer_python.txt
similarity index 89%
rename from test_tipc/configs/MobileViTv3/MobileViTv3_S_L2_train_infer_python.txt
rename to test_tipc/configs/MobileViTv3/MobileViTV3_S_L2_train_infer_python.txt
index fb11e8706f16cc5da96e281b41b0789390afcea7..c2b31a0ae660412f2b151d9840cfe3d276d73b19 100644
--- a/test_tipc/configs/MobileViTv3/MobileViTv3_S_L2_train_infer_python.txt
+++ b/test_tipc/configs/MobileViTv3/MobileViTV3_S_L2_train_infer_python.txt
@@ -1,5 +1,5 @@
===========================train_params===========================
-model_name:MobileViTv3_S_L2
+model_name:MobileViTV3_S_L2
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S_L2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
+norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S_L2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
pact_train:null
fpgm_train:null
distill_train:null
@@ -21,13 +21,13 @@ null:null
null:null
##
===========================eval_params===========================
-eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S_L2.yaml
+eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S_L2.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
-norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S_L2.yaml
+norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S_L2.yaml
quant_export:null
fpgm_export:null
distill_export:null
diff --git a/test_tipc/configs/MobileViTv3/MobileViTv3_S_train_infer_python.txt b/test_tipc/configs/MobileViTv3/MobileViTV3_S_train_infer_python.txt
similarity index 89%
rename from test_tipc/configs/MobileViTv3/MobileViTv3_S_train_infer_python.txt
rename to test_tipc/configs/MobileViTv3/MobileViTV3_S_train_infer_python.txt
index 4268bb76a0e5bc2748145b13d2bdb168abda0b0c..9d15dc2c80deadc16bd7b3a34f0fdace4cecfbfd 100644
--- a/test_tipc/configs/MobileViTv3/MobileViTv3_S_train_infer_python.txt
+++ b/test_tipc/configs/MobileViTv3/MobileViTV3_S_train_infer_python.txt
@@ -1,5 +1,5 @@
===========================train_params===========================
-model_name:MobileViTv3_S
+model_name:MobileViTV3_S
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
+norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
pact_train:null
fpgm_train:null
distill_train:null
@@ -21,13 +21,13 @@ null:null
null:null
##
===========================eval_params===========================
-eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S.yaml
+eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
-norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_S.yaml
+norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_S.yaml
quant_export:null
fpgm_export:null
distill_export:null
diff --git a/test_tipc/configs/MobileViTv3/MobileViTv3_XS_L2_train_infer_python.txt b/test_tipc/configs/MobileViTv3/MobileViTV3_XS_L2_train_infer_python.txt
similarity index 89%
rename from test_tipc/configs/MobileViTv3/MobileViTv3_XS_L2_train_infer_python.txt
rename to test_tipc/configs/MobileViTv3/MobileViTV3_XS_L2_train_infer_python.txt
index c5b219476e05ed76f6f6bf393b5c59ceafa3b1be..0ebb5890cbbe972807619a2ef8a997bba8ffc6aa 100644
--- a/test_tipc/configs/MobileViTv3/MobileViTv3_XS_L2_train_infer_python.txt
+++ b/test_tipc/configs/MobileViTv3/MobileViTV3_XS_L2_train_infer_python.txt
@@ -1,5 +1,5 @@
===========================train_params===========================
-model_name:MobileViTv3_XS_L2
+model_name:MobileViTV3_XS_L2
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS_L2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
+norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS_L2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
pact_train:null
fpgm_train:null
distill_train:null
@@ -21,13 +21,13 @@ null:null
null:null
##
===========================eval_params===========================
-eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS_L2.yaml
+eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS_L2.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
-norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS_L2.yaml
+norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS_L2.yaml
quant_export:null
fpgm_export:null
distill_export:null
diff --git a/test_tipc/configs/MobileViTv3/MobileViTv3_XS_train_infer_python.txt b/test_tipc/configs/MobileViTv3/MobileViTV3_XS_train_infer_python.txt
similarity index 89%
rename from test_tipc/configs/MobileViTv3/MobileViTv3_XS_train_infer_python.txt
rename to test_tipc/configs/MobileViTv3/MobileViTV3_XS_train_infer_python.txt
index 78756c6694e92b5a7b0d53049ce3661f60207b66..fc0373741d60937a7bbb715c4b25315f2571c1c4 100644
--- a/test_tipc/configs/MobileViTv3/MobileViTv3_XS_train_infer_python.txt
+++ b/test_tipc/configs/MobileViTv3/MobileViTV3_XS_train_infer_python.txt
@@ -1,5 +1,5 @@
===========================train_params===========================
-model_name:MobileViTv3_XS
+model_name:MobileViTV3_XS
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
+norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
pact_train:null
fpgm_train:null
distill_train:null
@@ -21,13 +21,13 @@ null:null
null:null
##
===========================eval_params===========================
-eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS.yaml
+eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
-norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XS.yaml
+norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XS.yaml
quant_export:null
fpgm_export:null
distill_export:null
diff --git a/test_tipc/configs/MobileViTv3/MobileViTv3_XXS_L2_train_infer_python.txt b/test_tipc/configs/MobileViTv3/MobileViTV3_XXS_L2_train_infer_python.txt
similarity index 89%
rename from test_tipc/configs/MobileViTv3/MobileViTv3_XXS_L2_train_infer_python.txt
rename to test_tipc/configs/MobileViTv3/MobileViTV3_XXS_L2_train_infer_python.txt
index d38a431ef95a0f410f74d43f8309ac7e1e5c4c50..62a40aa454b03aa1bfa5f20bf6364b5f8c84ab9c 100644
--- a/test_tipc/configs/MobileViTv3/MobileViTv3_XXS_L2_train_infer_python.txt
+++ b/test_tipc/configs/MobileViTv3/MobileViTV3_XXS_L2_train_infer_python.txt
@@ -1,5 +1,5 @@
===========================train_params===========================
-model_name:MobileViTv3_XXS_L2
+model_name:MobileViTV3_XXS_L2
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS_L2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
+norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS_L2.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
pact_train:null
fpgm_train:null
distill_train:null
@@ -21,13 +21,13 @@ null:null
null:null
##
===========================eval_params===========================
-eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS_L2.yaml
+eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS_L2.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
-norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS_L2.yaml
+norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS_L2.yaml
quant_export:null
fpgm_export:null
distill_export:null
diff --git a/test_tipc/configs/MobileViTv3/MobileViTv3_XXS_train_infer_python.txt b/test_tipc/configs/MobileViTv3/MobileViTV3_XXS_train_infer_python.txt
similarity index 89%
rename from test_tipc/configs/MobileViTv3/MobileViTv3_XXS_train_infer_python.txt
rename to test_tipc/configs/MobileViTv3/MobileViTV3_XXS_train_infer_python.txt
index 40b809634b63f248dcb614ac664a670e8daebba1..b9d3d99ab8c565db5120892272dc83d7b5acf349 100644
--- a/test_tipc/configs/MobileViTv3/MobileViTv3_XXS_train_infer_python.txt
+++ b/test_tipc/configs/MobileViTv3/MobileViTV3_XXS_train_infer_python.txt
@@ -1,5 +1,5 @@
===========================train_params===========================
-model_name:MobileViTv3_XXS
+model_name:MobileViTV3_XXS
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
+norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
pact_train:null
fpgm_train:null
distill_train:null
@@ -21,13 +21,13 @@ null:null
null:null
##
===========================eval_params===========================
-eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS.yaml
+eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
-norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_XXS.yaml
+norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_XXS.yaml
quant_export:null
fpgm_export:null
distill_export:null
diff --git a/test_tipc/configs/MobileViTv3/MobileViTv3_x0_5_train_infer_python.txt b/test_tipc/configs/MobileViTv3/MobileViTV3_x0_5_train_infer_python.txt
similarity index 89%
rename from test_tipc/configs/MobileViTv3/MobileViTv3_x0_5_train_infer_python.txt
rename to test_tipc/configs/MobileViTv3/MobileViTV3_x0_5_train_infer_python.txt
index 5b738387078f8dca330ef901b4812bf14cccc224..8013de52c832885536fa9c4cc6cea653d7165b29 100644
--- a/test_tipc/configs/MobileViTv3/MobileViTv3_x0_5_train_infer_python.txt
+++ b/test_tipc/configs/MobileViTv3/MobileViTV3_x0_5_train_infer_python.txt
@@ -1,5 +1,5 @@
===========================train_params===========================
-model_name:MobileViTv3_x0_5
+model_name:MobileViTV3_x0_5
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
+norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_5.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
pact_train:null
fpgm_train:null
distill_train:null
@@ -21,13 +21,13 @@ null:null
null:null
##
===========================eval_params===========================
-eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_5.yaml
+eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_5.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
-norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_5.yaml
+norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_5.yaml
quant_export:null
fpgm_export:null
distill_export:null
diff --git a/test_tipc/configs/MobileViTv3/MobileViTv3_x0_75_train_infer_python.txt b/test_tipc/configs/MobileViTv3/MobileViTV3_x0_75_train_infer_python.txt
similarity index 89%
rename from test_tipc/configs/MobileViTv3/MobileViTv3_x0_75_train_infer_python.txt
rename to test_tipc/configs/MobileViTv3/MobileViTV3_x0_75_train_infer_python.txt
index 51a4dbe6c0e4324d4b4cef3eaedbd1619ae1e99b..2645a161f62ba9d418593d53fc79439772f6f1c0 100644
--- a/test_tipc/configs/MobileViTv3/MobileViTv3_x0_75_train_infer_python.txt
+++ b/test_tipc/configs/MobileViTv3/MobileViTV3_x0_75_train_infer_python.txt
@@ -1,5 +1,5 @@
===========================train_params===========================
-model_name:MobileViTv3_x0_75
+model_name:MobileViTV3_x0_75
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
+norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_75.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
pact_train:null
fpgm_train:null
distill_train:null
@@ -21,13 +21,13 @@ null:null
null:null
##
===========================eval_params===========================
-eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_75.yaml
+eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_75.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
-norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x0_75.yaml
+norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x0_75.yaml
quant_export:null
fpgm_export:null
distill_export:null
diff --git a/test_tipc/configs/MobileViTv3/MobileViTv3_x1_0_train_infer_python.txt b/test_tipc/configs/MobileViTv3/MobileViTV3_x1_0_train_infer_python.txt
similarity index 89%
rename from test_tipc/configs/MobileViTv3/MobileViTv3_x1_0_train_infer_python.txt
rename to test_tipc/configs/MobileViTv3/MobileViTV3_x1_0_train_infer_python.txt
index 950c374b68a6972cda10756ce478f2c3b771e1d7..19403ab0dd3bba83474522ee03f8cde5a34454ac 100644
--- a/test_tipc/configs/MobileViTv3/MobileViTv3_x1_0_train_infer_python.txt
+++ b/test_tipc/configs/MobileViTv3/MobileViTV3_x1_0_train_infer_python.txt
@@ -1,5 +1,5 @@
===========================train_params===========================
-model_name:MobileViTv3_x1_0
+model_name:MobileViTV3_x1_0
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
@@ -13,7 +13,7 @@ train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
-norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
+norm_train:tools/train.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x1_0.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2
pact_train:null
fpgm_train:null
distill_train:null
@@ -21,13 +21,13 @@ null:null
null:null
##
===========================eval_params===========================
-eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x1_0.yaml
+eval:tools/eval.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x1_0.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
-norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTv3/MobileViTv3_x1_0.yaml
+norm_export:tools/export_model.py -c ppcls/configs/ImageNet/MobileViTV3/MobileViTV3_x1_0.yaml
quant_export:null
fpgm_export:null
distill_export:null