Unverified commit 6389fb0d authored by Evezerest, committed by GitHub

Merge pull request #6361 from mohamadmansourX/patch-9

Update README_en.md
@@ -73,4 +73,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/ch_ppocr_v2.0/ch_
The quantized model parameters exported in the above steps are still stored as FP32, but their values are confined to the int8 numerical range.
The exported model can be converted with the `opt` tool of PaddleLite.
-For quantitative model deployment, please refer to [Mobile terminal model deployment](../../lite/readme_en.md)
+For quantitative model deployment, please refer to [Mobile terminal model deployment](../../lite/readme.md)
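As a rough sketch (not part of this diff), the exported quantized inference model could be converted for mobile deployment with PaddleLite's `opt` tool along the following lines; the model directory and output name below are illustrative placeholders, not paths taken from this repository.

```bash
# Sketch only: convert an exported quantized inference model with PaddleLite's opt tool.
# ./output/quant_inference_model and ./output/quant_det are hypothetical placeholder paths.
pip install paddlelite            # provides the paddle_lite_opt command

paddle_lite_opt \
    --model_file=./output/quant_inference_model/inference.pdmodel \
    --param_file=./output/quant_inference_model/inference.pdiparams \
    --optimize_out=./output/quant_det \
    --optimize_out_type=naive_buffer \
    --valid_targets=arm
```

The resulting `.nb` file is what the PaddleLite runtime on the mobile side loads; see the mobile deployment guide linked above for the full pipeline.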