Unverified commit b2594951, authored by acosta123, committed by GitHub

Update doc/fluid/advanced_usage/paddle_slim/paddle_slim_en.md

Co-Authored-By: NHao Wang <31058429+haowang101779990@users.noreply.github.com>
Parent 43be8fef
@@ -40,7 +40,7 @@ Refer to [demos](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/
### Optimized Effects
- For the MobileNetV1 model, which has little redundant information, the convolution kernel pruning strategy can still reduce the model size while keeping the accuracy loss as small as possible.
-- The distillation strategy can increase the accuracy of the origin model dramatically.
+- The distillation strategy can increase the accuracy of the original model dramatically.
- Combining the quantization strategy with the distillation strategy can reduce the size and increase the accuracy of the model at the same time.
Refer to [Performance Data and ModelZoo](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/docs/model_zoo.md) for more details.
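PaddleSlim implements the distillation strategy internally; as a rough, framework-agnostic illustration of the underlying idea (not PaddleSlim's actual API), a soft-target distillation loss — the KL divergence between the temperature-softened teacher and student distributions — can be sketched as follows. The function names and the temperature value are illustrative assumptions:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions.

    The T^2 factor keeps gradient magnitudes comparable across temperatures,
    following the standard soft-target distillation formulation.
    """
    p = softmax(teacher_logits, temperature)   # teacher "soft targets"
    q = softmax(student_logits, temperature)   # student predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (temperature ** 2) * kl.mean()

# Identical logits yield (near) zero loss; diverging logits yield a positive loss.
teacher = np.array([[2.0, 1.0, 0.1]])
print(distillation_loss(teacher, teacher))                      # ≈ 0.0
print(distillation_loss(np.array([[0.1, 1.0, 2.0]]), teacher))  # > 0
```

In practice this soft-target loss is combined with the ordinary cross-entropy on the true labels, so the student learns both from the data and from the teacher's output distribution.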