Unverified commit 0c495767, authored by zhouzj, committed by GitHub

Update quantization accuracy of mobilenetV3 (#1302)

* update mbv3.

* update mbv3.

* update mbv3.

* fix comments
Parent 31cfaf7b
@@ -42,7 +42,9 @@
| GhostNet_x1_0 | Baseline | 74.02 | 2.93 | - | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/GhostNet_x1_0_infer.tar) |
| GhostNet_x1_0 | Quantization + Distillation | 72.62 | 1.03 | - | [Config](./config/GhostNet_x1_0/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/GhostNet_x1_0_QAT.tar) |
| MobileNetV3_large_x1_0 | Baseline | 75.32 | - | 16.62 | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV3_large_x1_0_infer.tar) |
-| MobileNetV3_large_x1_0 | Quantization + Distillation | 70.93 | - | 9.85 | [Config](./config/MobileNetV3_large_x1_0/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/MobileNetV3_large_x1_0_QAT.tar) |
+| MobileNetV3_large_x1_0 | Quantization + Distillation | 74.41 | - | 9.85 | [Config](./config/MobileNetV3_large_x1_0/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/MobileNetV3_large_x1_0_QAT.tar) |
+| MobileNetV3_large_x1_0_ssld | Baseline | 78.96 | - | 16.62 | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV3_large_x1_0_ssld_infer.tar) |
+| MobileNetV3_large_x1_0_ssld | Quantization + Distillation | 77.17 | - | 9.85 | [Config](./config/MobileNetV3_large_x1_0/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/MobileNetV3_large_x1_0_ssld_QAT.tar) |
- ARM CPU test environment: `SDM865(4xA77+4xA55)`
- Nvidia GPU test environment:
./config/MobileNetV3_large_x1_0/qat_dis.yaml:
Global:
  input_name: inputs
-  model_dir: MobileNetV3_large_x1_0_infer
+  model_dir: MobileNetV3_large_x1_0_ssld_infer
  model_filename: inference.pdmodel
  params_filename: inference.pdiparams
-  batch_size: 32
+  batch_size: 128
  data_dir: ./ILSVRC2012
Distillation:
  alpha: 1.0
-  loss: l2
-  node:
-  - softmax_0.tmp_0
+  loss: soft_label
Quantization:
  use_pact: true
  activation_bits: 8
@@ -22,15 +20,14 @@ Quantization:
  quantize_op_types:
  - conv2d
  - depthwise_conv2d
+  - matmul
  weight_bits: 8
TrainConfig:
-  epochs: 1
-  eval_iter: 500
-  learning_rate:
-    type: CosineAnnealingDecay
-    learning_rate: 0.015
+  epochs: 2
+  eval_iter: 5000
+  learning_rate: 0.001
  optimizer_builder:
    optimizer:
      type: Momentum
    weight_decay: 0.00002
-  origin_metric: 0.7532
+  origin_metric: 0.7896
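
Below is a minimal sketch of how a `qat_dis.yaml` config like the one diffed above is typically consumed by PaddleSlim's auto-compression entry point. It assumes the `AutoCompression` Python API; exact argument names vary across PaddleSlim versions, and the random dataloader is a hypothetical stand-in for the ILSVRC2012 training data referenced by `data_dir`.

```python
# Sketch only: argument names follow the PaddleSlim 2.x AutoCompression API as far
# as known here; check the installed version. The dataloader is a hypothetical
# stand-in for the ILSVRC2012 training set named by `data_dir` in the config.
import numpy as np
import paddle
from paddleslim.auto_compression import AutoCompression


class FakeImageNet(paddle.io.Dataset):
    """Random 3x224x224 images standing in for ILSVRC2012 training samples."""

    def __len__(self):
        return 1024

    def __getitem__(self, idx):
        image = np.random.rand(3, 224, 224).astype("float32")
        label = np.array([0], dtype="int64")
        return image, label


# Matches Global.batch_size in the updated config.
train_loader = paddle.io.DataLoader(FakeImageNet(), batch_size=128, shuffle=True)

ac = AutoCompression(
    model_dir="MobileNetV3_large_x1_0_ssld_infer",          # Global.model_dir
    model_filename="inference.pdmodel",                      # Global.model_filename
    params_filename="inference.pdiparams",                   # Global.params_filename
    save_dir="./MobileNetV3_large_x1_0_ssld_QAT",
    config="./config/MobileNetV3_large_x1_0/qat_dis.yaml",   # the file diffed above
    train_dataloader=train_loader,
    eval_callback=None,  # plug in an ImageNet-val accuracy function to track origin_metric
)
ac.compress()
```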