diff --git a/doc/fluid/advanced_usage/paddle_slim/paddle_slim_en.md b/doc/fluid/advanced_usage/paddle_slim/paddle_slim_en.md
index 4600bad90085924b6ca10a75c4d44e83ae4fc1ef..11e72a091203ead44a7a86f7b26ee74141351cdd 100644
--- a/doc/fluid/advanced_usage/paddle_slim/paddle_slim_en.md
+++ b/doc/fluid/advanced_usage/paddle_slim/paddle_slim_en.md
@@ -40,7 +40,7 @@ Refer to [demos](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/
 ### Optimized Effects
 
 - For MobileNetV1 model with less redundant information, the convolution core pruning strategy can still reduce the size of the model and maintain as little accuracy loss as possible.
-- The distillation strategy can increase the accuracy of the origin model dramatically.
+- The distillation strategy can increase the accuracy of the original model dramatically.
 - Combination of the quantization strategy and the distillation strategy can reduce the size ande increase the accuracy of the model at the same time.
 
 Refer to [Performance Data and ModelZoo](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/docs/model_zoo.md) for more details