From 8f5f35e6434f70c5ea39eb24cce11c7e1cd0f255 Mon Sep 17 00:00:00 2001
From: YukSing12
Date: Tue, 8 Dec 2020 16:03:50 +0800
Subject: [PATCH] Fix path

---
 deploy/slim/quantization/README.md    | 2 +-
 deploy/slim/quantization/README_en.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/deploy/slim/quantization/README.md b/deploy/slim/quantization/README.md
index e1844367..64ce40bc 100755
--- a/deploy/slim/quantization/README.md
+++ b/deploy/slim/quantization/README.md
@@ -58,4 +58,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
 ### 5. Quantized model deployment
 
 The quantized model exported by the steps above still stores its parameters in FP32 precision, but their numerical range is int8; the exported model can be converted with PaddleLite's opt model conversion tool.
-For quantized model deployment, refer to [mobile-side model deployment](../lite/readme.md)
+For quantized model deployment, refer to [mobile-side model deployment](../../lite/readme.md)
diff --git a/deploy/slim/quantization/README_en.md b/deploy/slim/quantization/README_en.md
index 4f28a542..07fff0f5 100755
--- a/deploy/slim/quantization/README_en.md
+++ b/deploy/slim/quantization/README_en.md
@@ -65,4 +65,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
 
 The precision of the quantized model parameters exported by the steps above is still FP32, but their numerical range is int8. The exported model can be converted with the `opt` tool of PaddleLite.
 
-For quantized model deployment, please refer to [mobile-side model deployment](../lite/readme_en.md)
+For quantized model deployment, please refer to [mobile-side model deployment](../../lite/readme_en.md)
--
GitLab
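
For context beyond the path fix itself: the deployment step both READMEs link to runs the exported quantized model through Paddle-Lite's `opt` tool before it can be used on mobile. A minimal sketch of that conversion follows; the model directory and file names are hypothetical (they depend on the Paddle version and on where step 4's export_model.py wrote its output), so adjust the paths to match your export.

  # Install Paddle-Lite, which ships the paddle_lite_opt converter.
  pip install paddlelite

  # Convert the quantized inference model into a Paddle-Lite .nb file.
  # The --model_file/--param_file paths are illustrative; point them at
  # the model exported in step 4 (file names vary across Paddle versions).
  paddle_lite_opt \
      --model_file=./quant_inference_model/inference.pdmodel \
      --param_file=./quant_inference_model/inference.pdiparams \
      --optimize_out=./det_db_quant_opt \
      --optimize_out_type=naive_buffer \
      --valid_targets=arm

The resulting det_db_quant_opt.nb file is the kind of artifact the linked mobile deployment guide (deploy/lite/readme.md) then works with.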