Unverified commit 079694db authored by Chang Xu, committed by GitHub

Update Acc in ACT imagenet (#1416)

Parent 76539798
@@ -32,17 +32,17 @@
| SqueezeNet1_0 | Baseline | 59.60 | - | 35.98 | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/SqueezeNet1_0_infer.tar) |
| SqueezeNet1_0 | Quantization + Distillation | 59.45 | - | 16.96 | [Config](./configs/SqueezeNet1_0/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/SqueezeNet1_0_QAT.tar) |
| PPLCNetV2_base | Baseline | 76.86 | - | 36.50 | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNetV2_base_infer.tar) |
| PPLCNetV2_base | Quantization + Distillation | 76.39 | - | 15.79 | [Config](./configs/PPLCNetV2_base/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/PPLCNetV2_base_QAT.tar) |
| PPHGNet_tiny | Baseline | 79.59 | 2.82 | - | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPHGNet_tiny_infer.tar) |
| PPHGNet_tiny | Quantization + Distillation | 79.24 | 0.98 | - | [Config](./configs/PPHGNet_tiny/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/PPHGNet_tiny_QAT.tar) |
| InceptionV3 | Baseline | 79.14 | 4.79 | - | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/InceptionV3_infer.tar) |
| InceptionV3 | Quantization + Distillation | 78.32 | 1.47 | - | [Config](./configs/InceptionV3/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/InceptionV3_QAT.tar) |
| EfficientNetB0 | Baseline | 77.02 | 1.95 | - | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/EfficientNetB0_infer.tar) |
| EfficientNetB0 | Quantization + Distillation | 75.27 | 1.44 | - | [Config](./configs/EfficientNetB0/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/EfficientNetB0_QAT.tar) |
| GhostNet_x1_0 | Baseline | 74.02 | 2.93 | - | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/GhostNet_x1_0_infer.tar) |
| GhostNet_x1_0 | Quantization + Distillation | 72.62 | 1.03 | - | [Config](./configs/GhostNet_x1_0/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/GhostNet_x1_0_QAT.tar) |
| MobileNetV3_large_x1_0 | Baseline | 75.32 | - | 16.62 | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV3_large_x1_0_infer.tar) |
| MobileNetV3_large_x1_0 | Quantization + Distillation | 74.04 | - | 9.85 | [Config](./configs/MobileNetV3_large_x1_0/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/MobileNetV3_large_x1_0_QAT.tar) |
| MobileNetV3_large_x1_0_ssld | Baseline | 78.96 | - | 16.62 | - | [Model](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV3_large_x1_0_ssld_infer.tar) |
| MobileNetV3_large_x1_0_ssld | Quantization + Distillation | 77.17 | - | 9.85 | [Config](./configs/MobileNetV3_large_x1_0/qat_dis.yaml) | [Model](https://paddle-slim-models.bj.bcebos.com/act/MobileNetV3_large_x1_0_ssld_QAT.tar) |
......
@@ -11,10 +11,12 @@ Distillation:
  loss: l2
  node:
  - softmax_1.tmp_0
Quantization:
  use_pact: true
  activation_bits: 8
  is_full_quantize: false
  onnx_format: True
  activation_quantize_type: moving_average_abs_max
  weight_quantize_type: channel_wise_abs_max
  not_quant_pattern:
@@ -23,6 +25,7 @@ Quantization:
  - conv2d
  - depthwise_conv2d
  weight_bits: 8
TrainConfig:
  epochs: 1
  eval_iter: 500
......
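The only functional change in each config above is the added `onnx_format: True` line, which asks PaddleSlim to export the quantized model in an ONNX-compatible quantization format. As a minimal sketch of how such a `Quantization` section might be sanity-checked before a run (the field names mirror the YAML above, but the validation rules and the `validate_quantization` helper are illustrative assumptions, not PaddleSlim's actual schema):

```python
# Illustrative config check, NOT PaddleSlim's real schema validator.
# The keys mirror the Quantization section of the qat_dis.yaml files above.

def validate_quantization(cfg: dict) -> list:
    """Return a list of problems found in a Quantization config dict."""
    problems = []
    if cfg.get("activation_bits") not in (4, 8):
        problems.append("activation_bits should be 4 or 8")
    if cfg.get("weight_bits") not in (4, 8):
        problems.append("weight_bits should be 4 or 8")
    if not isinstance(cfg.get("onnx_format"), bool):
        problems.append("onnx_format should be a boolean")
    if cfg.get("weight_quantize_type") not in ("abs_max", "channel_wise_abs_max"):
        problems.append("unexpected weight_quantize_type")
    return problems

# Dict form of the config fragment above, including the newly added flag.
quant_cfg = {
    "use_pact": True,
    "activation_bits": 8,
    "is_full_quantize": False,
    "onnx_format": True,  # the flag this commit adds to every config
    "activation_quantize_type": "moving_average_abs_max",
    "weight_quantize_type": "channel_wise_abs_max",
    "quantize_op_types": ["conv2d", "depthwise_conv2d"],
    "weight_bits": 8,
}

print(validate_quantization(quant_cfg))  # → []
```

In an actual ACT run these YAML files are passed to the toolkit as-is; a check like this only catches obvious typos (e.g. a misspelled quantize type) before training starts.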
@@ -9,10 +9,12 @@ Global:
Distillation:
  alpha: 1.0
  loss: soft_label
Quantization:
  use_pact: true
  activation_bits: 8
  is_full_quantize: false
  onnx_format: True
  activation_quantize_type: moving_average_abs_max
  weight_quantize_type: channel_wise_abs_max
  not_quant_pattern:
@@ -22,6 +24,7 @@ Quantization:
  - depthwise_conv2d
  - matmul
  weight_bits: 8
TrainConfig:
  epochs: 2
  eval_iter: 5000
......
@@ -11,10 +11,12 @@ Distillation:
  loss: l2
  node:
  - softmax_1.tmp_0
Quantization:
  use_pact: true
  activation_bits: 8
  is_full_quantize: false
  onnx_format: True
  activation_quantize_type: moving_average_abs_max
  weight_quantize_type: channel_wise_abs_max
  not_quant_pattern:
@@ -23,6 +25,7 @@ Quantization:
  - conv2d
  - depthwise_conv2d
  weight_bits: 8
TrainConfig:
  epochs: 1
  eval_iter: 500
......
@@ -11,10 +11,12 @@ Distillation:
  loss: l2
  node:
  - softmax_1.tmp_0
Quantization:
  use_pact: true
  activation_bits: 8
  is_full_quantize: false
  onnx_format: True
  activation_quantize_type: moving_average_abs_max
  weight_quantize_type: channel_wise_abs_max
  not_quant_pattern:
@@ -23,6 +25,7 @@ Quantization:
  - conv2d
  - depthwise_conv2d
  weight_bits: 8
TrainConfig:
  epochs: 1
  eval_iter: 500
......
@@ -11,10 +11,12 @@ Distillation:
  loss: l2
  node:
  - softmax_0.tmp_0
Quantization:
  use_pact: true
  activation_bits: 8
  is_full_quantize: false
  onnx_format: True
  activation_quantize_type: moving_average_abs_max
  weight_quantize_type: channel_wise_abs_max
  not_quant_pattern:
@@ -23,6 +25,7 @@ Quantization:
  - conv2d
  - depthwise_conv2d
  weight_bits: 8
TrainConfig:
  epochs: 1
  eval_iter: 500
......