diff --git a/deploy/slim/quantization/README.md b/deploy/slim/quantization/README.md
index 7f1ff7ae22e78cded28f1689d66a5e41dd8950a2..d401d3ba0c8ba209994c43b72a7dbf240fe9dd3d 100644
--- a/deploy/slim/quantization/README.md
+++ b/deploy/slim/quantization/README.md
@@ -54,4 +54,7 @@ python deploy/slim/quantization/export_model.py -c configs/det/ch_PP-OCRv3/ch_PP
 ### 5. Deploy the quantized model
 The quantized model exported in the steps above still stores its parameters as FP32, but their numerical range is int8; the exported model can be converted with PaddleLite's opt model conversion tool.
-For quantized model deployment, refer to [Mobile model deployment](../../lite/readme.md)
+
+For deploying the quantized model on mobile, refer to [Mobile model deployment](../../lite/readme.md)
+
+Note: the parameters of a quantization-aware-trained model are still float32, so converting it to an inference model brings no speedup over the non-quantized model, because quantize/dequantize operators are inserted between the layers of the quantized graph. To deploy the quantized model with acceleration, it is recommended to use TensorRT and set precision to INT8 to speed up inference.
 
diff --git a/doc/doc_en/whl_en.md b/doc/doc_en/whl_en.md
index 5628dc3f453ff521194159b04df3419d2f182e55..5283391e5ef8b35eb56f0355fd70049f40a4ae04 100644
--- a/doc/doc_en/whl_en.md
+++ b/doc/doc_en/whl_en.md
@@ -335,7 +335,7 @@ ocr = PaddleOCR(use_angle_cls=True, lang="ch") # need to run only once to downlo
 img_path = 'PaddleOCR/doc/imgs/11.jpg'
 img = cv2.imread(img_path)
 # img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY), If your own training model supports grayscale images, you can uncomment this line
-result = ocr.ocr(img_path, cls=True)
+result = ocr.ocr(img, cls=True)
 for idx in range(len(result)):
     res = result[idx]
     for line in res:
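
A minimal sketch of the TensorRT INT8 setup recommended in the note added above, using the Paddle Inference Python API. The model file names and the TensorRT engine parameters are illustrative placeholders, not values prescribed by this PR:

```python
# Sketch: load an exported quantized inference model and enable the
# TensorRT backend with INT8 precision (file names below are placeholders).
from paddle.inference import Config, PrecisionType, create_predictor

config = Config("inference.pdmodel", "inference.pdiparams")
config.enable_use_gpu(500, 0)            # 500 MB initial GPU memory pool, GPU id 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,              # 1 GB TensorRT workspace
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Int8,   # run the quantized subgraphs in INT8
    use_static=False,
    use_calib_mode=False)                # QAT models carry scale info, so no offline calibration
predictor = create_predictor(config)
```

PaddleOCR's own prediction scripts under tools/infer/ expose similar switches as command-line flags (--use_tensorrt and --precision), which is usually the simpler route for benchmarking the quantized model.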