diff --git a/deploy/slim/quantization/README.md b/deploy/slim/quantization/README.md
index e1844367a02a45b681cdec48893ebcf9e59b1e17..64ce40bc1e66612bdec9263cff73a80ea7f7e26f 100755
--- a/deploy/slim/quantization/README.md
+++ b/deploy/slim/quantization/README.md
@@ -58,4 +58,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
 ### 5. Deploy the quantized model
 
 The quantized model exported by the steps above still stores its parameters in FP32 precision, but their numerical range is int8; the exported model can be converted with PaddleLite's opt model conversion tool.
-For quantized model deployment, please refer to [Mobile model deployment](../lite/readme.md)
+For quantized model deployment, please refer to [Mobile model deployment](../../lite/readme.md)
diff --git a/deploy/slim/quantization/README_en.md b/deploy/slim/quantization/README_en.md
index 4f28a54245b08a0156a74d3316f9532bf40423fc..07fff0f52c87c5991d4c9309b8bb3d178926c59b 100755
--- a/deploy/slim/quantization/README_en.md
+++ b/deploy/slim/quantization/README_en.md
@@ -65,4 +65,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
 The quantized model exported by the above steps still stores its parameters in FP32 precision, but the numerical range of the parameters is int8.
 The exported model can be converted through the `opt` tool of PaddleLite.
 
-For quantized model deployment, please refer to [Mobile model deployment](../lite/readme_en.md)
+For quantized model deployment, please refer to [Mobile model deployment](../../lite/readme_en.md)
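Both hunks direct readers to PaddleLite's `opt` tool for the final conversion step before mobile deployment. A minimal sketch of that conversion is shown below; the install method, model directory, and file names are assumptions for illustration (they vary with the Paddle version used for export), not values taken from the diff.

```bash
# Sketch: convert the exported quantized inference model to a PaddleLite
# .nb model with the opt tool. Recent paddlelite pip releases ship the
# paddle_lite_opt CLI; paths and file names below are assumed examples.
pip install paddlelite

paddle_lite_opt \
    --model_file=./output/quant_inference_model/inference.pdmodel \
    --param_file=./output/quant_inference_model/inference.pdiparams \
    --optimize_out_type=naive_buffer \
    --optimize_out=./output/det_db_quant_opt \
    --valid_targets=arm
```

The command writes `./output/det_db_quant_opt.nb`, which is the artifact consumed by the mobile deployment guide that the corrected links (`../../lite/readme.md` / `../../lite/readme_en.md`) point to.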