diff --git a/deploy/auto_compression/README.md b/deploy/slim/auto_compression/README.md
similarity index 97%
rename from deploy/auto_compression/README.md
rename to deploy/slim/auto_compression/README.md
index e31224270e1a0207d6fea34fcfe41de6997b205e..5f02fd94805d6ba75034c2019144c3bf15c41abe 100644
--- a/deploy/auto_compression/README.md
+++ b/deploy/slim/auto_compression/README.md
@@ -82,7 +82,7 @@ python run.py --save_dir='./save_quant_mobilev3/' --config_path='./configs/mbv3_
 
 **Multi-GPU launch**
 
-Image classification training tasks usually involve large amounts of training data. Taking ImageNet as an example, the ImageNet22k dataset contains 14 million images; training on a single GPU is very time-consuming, while distributed training achieves a nearly linear speedup.
+Image classification training tasks usually involve large amounts of training data. Taking ImageNet-1k as an example, training on a single GPU is very time-consuming, while distributed training achieves a nearly linear speedup.
 
 ```shell
 export CUDA_VISIBLE_DEVICES=0,1,2,3
@@ -95,7 +95,7 @@ python -m paddle.distributed.launch run.py --save_dir='./save_quant_mobilev3/' -
 
 When loading a trained model for quantization-aware training, the `learning rate` can generally be 10x smaller than the `learning rate` used for the original training.
 
-## 4. Configuration Files
+## 4. Introduction to the Configuration Files
 The main configurations for auto-compression are:
 - Compression strategy configuration, e.g. quantization (Quantization), knowledge distillation (Distillation), structured sparsity (ChannelPrune), ASP semi-structured sparsity (ASPPrune), and unstructured sparsity (UnstructurePrune).
 - Training hyperparameter configuration (TrainConfig): mainly sets the learning rate, number of training epochs (epochs), the optimizer, and so on.
diff --git a/deploy/auto_compression/mbv3_qat_dis.yaml b/deploy/slim/auto_compression/mbv3_qat_dis.yaml
similarity index 97%
rename from deploy/auto_compression/mbv3_qat_dis.yaml
rename to deploy/slim/auto_compression/mbv3_qat_dis.yaml
index 5bdce4333158e4f8a3e345ff95fe8e3aeaae5698..c8b940fba7c8dc545befae6cfbb98a9e4c08cb08 100644
--- a/deploy/auto_compression/mbv3_qat_dis.yaml
+++ b/deploy/slim/auto_compression/mbv3_qat_dis.yaml
@@ -27,12 +27,12 @@ Quantization:
 
 TrainConfig:
   epochs: 2
-  eval_iter: 500
+  eval_iter: 5000
   learning_rate: 0.001
   optimizer_builder:
     optimizer:
       type: Momentum
-      weight_decay: 0.000005
+      weight_decay: 0.00005
 
 origin_metric: 0.7532
diff --git a/deploy/auto_compression/run.py b/deploy/slim/auto_compression/run.py
similarity index 100%
rename from deploy/auto_compression/run.py
rename to deploy/slim/auto_compression/run.py
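For reference, after this patch is applied the `TrainConfig` block of `deploy/slim/auto_compression/mbv3_qat_dis.yaml` would read roughly as below. Only the keys and values visible in the diff hunk are grounded; the exact nesting depth of `learning_rate` and the fields of `optimizer_builder` are assumptions based on the diff's context lines:

```yaml
TrainConfig:
  epochs: 2
  eval_iter: 5000          # was 500: evaluate less often, since each eval pass is costly
  learning_rate: 0.001     # README suggests ~10x smaller than the original training LR
  optimizer_builder:
    optimizer:
      type: Momentum
      weight_decay: 0.00005   # was 0.000005 (10x stronger regularization)

origin_metric: 0.7532      # accuracy of the uncompressed model, used as the baseline
```

The two value changes are independent: raising `eval_iter` only affects how frequently validation runs during quantization-aware training, while the larger `weight_decay` changes the regularization strength of the Momentum optimizer.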