Unverified commit 2e0a1e23 authored by Double_V, committed by GitHub

Merge pull request #1350 from YukSing12/develop

Fix path in the quantization doc
@@ -58,4 +58,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
### 5. Deploy the quantized model
The parameters of the quantized model exported in the above steps are still FP32 precision, but their numerical range is int8, and the exported model can be converted with PaddleLite's opt model conversion tool.
-For quantized model deployment, refer to [Mobile model deployment](../lite/readme.md)
+For quantized model deployment, refer to [Mobile model deployment](../../lite/readme.md)
@@ -65,4 +65,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
The precision of the quantized model parameters exported in the above steps is still FP32, but their numerical range is int8.
The exported model can be converted with PaddleLite's `opt` tool.
-For quantized model deployment, please refer to [Mobile model deployment](../lite/readme_en.md)
+For quantized model deployment, please refer to [Mobile model deployment](../../lite/readme_en.md)
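
For reference, a conversion along these lines might look as follows. This is a minimal sketch, not the repository's documented command: the model and parameter file paths are hypothetical placeholders, and the `paddle_lite_opt` flags should be checked against the PaddleLite version in use.

```bash
# Install PaddleLite; the pip package ships the paddle_lite_opt CLI
# (assumption: a PaddleLite release whose wheel includes the opt tool).
pip install paddlelite

# Convert the quantized inference model exported by export_model.py into a
# PaddleLite model. The input and output paths below are placeholders.
paddle_lite_opt \
    --model_file=./output/quant_inference_model/model \
    --param_file=./output/quant_inference_model/params \
    --optimize_out_type=naive_buffer \
    --optimize_out=./output/quant_model \
    --valid_targets=arm
```

The resulting naive-buffer (`.nb`) model file is then deployed following the mobile deployment guide linked above.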