diff --git a/doc/doc_ch/whl.md b/doc/doc_ch/whl.md
index 1328f8c5361272d8987b8aa466daf1b966730dad..9df1791d200b031b73de2dbe83fc5bd59acc434f 100644
--- a/doc/doc_ch/whl.md
+++ b/doc/doc_ch/whl.md
@@ -134,6 +134,37 @@ paddleocr --image_dir PaddleOCR/doc/imgs_words/ch/word_1.jpg --det false
 ['韩国小馆', 0.9907421]
 ```
 
+## Using a custom model
+When the built-in models cannot meet your needs, you need to use your own trained models.
+First, follow the first section of [inference.md](./inference.md) to convert your detection and recognition models into inference models, then use them as follows.
+
+### Use by code
+```python
+from paddleocr import PaddleOCR, draw_ocr
+# The detection and recognition model directories must contain the model and params files
+ocr = PaddleOCR(det_model_dir='your_det_model_dir', rec_model_dir='your_rec_model_dir')
+img_path = 'PaddleOCR/doc/imgs/11.jpg'
+result = ocr.ocr(img_path)
+for line in result:
+    print(line)
+
+# Show the results
+from PIL import Image
+image = Image.open(img_path).convert('RGB')
+boxes = [line[0] for line in result]
+txts = [line[1][0] for line in result]
+scores = [line[1][1] for line in result]
+im_show = draw_ocr(image, boxes, txts, scores, font_path='/path/to/PaddleOCR/doc/simfang.ttf')
+im_show = Image.fromarray(im_show)
+im_show.save('result.jpg')
+```
+
+### Use by command line
+
+```bash
+paddleocr --image_dir PaddleOCR/doc/imgs/11.jpg --det_model_dir your_det_model_dir --rec_model_dir your_rec_model_dir
+```
+
 ## Parameter description
 
 | Field | Description | Default value |
diff --git a/doc/doc_en/whl.md b/doc/doc_en/whl_en.md
similarity index 91%
rename from doc/doc_en/whl.md
rename to doc/doc_en/whl_en.md
index 2edf203742f48cb7f8d1095a363d0ebd24ae717c..3e12a9b5fadd5f522509a3c2a1a522d9cef92e1d 100644
--- a/doc/doc_en/whl.md
+++ b/doc/doc_en/whl_en.md
@@ -138,6 +138,38 @@ Output will be a list, each item contains text and recognition confidence
 ['PAIN', 0.990372]
 ```
 
+## Using a custom model
+When the built-in models cannot meet your needs, you need to use your own trained models.
+First, refer to the first section of [inference_en.md](./inference_en.md) to convert your detection and recognition models into inference models, and then use them as follows.
+
+### Use by code
+
+```python
+from paddleocr import PaddleOCR, draw_ocr
+# The detection and recognition model directories must contain the model and params files
+ocr = PaddleOCR(det_model_dir='your_det_model_dir', rec_model_dir='your_rec_model_dir')
+img_path = 'PaddleOCR/doc/imgs_en/img_12.jpg'
+result = ocr.ocr(img_path)
+for line in result:
+    print(line)
+
+# draw result
+from PIL import Image
+image = Image.open(img_path).convert('RGB')
+boxes = [line[0] for line in result]
+txts = [line[1][0] for line in result]
+scores = [line[1][1] for line in result]
+im_show = draw_ocr(image, boxes, txts, scores, font_path='/path/to/PaddleOCR/doc/simfang.ttf')
+im_show = Image.fromarray(im_show)
+im_show.save('result.jpg')
+```
+
+### Use by command line
+
+```bash
+paddleocr --image_dir PaddleOCR/doc/imgs/11.jpg --det_model_dir your_det_model_dir --rec_model_dir your_rec_model_dir
+```
+
 ## Parameter Description
 
 | Parameter | Description | Default value |
diff --git a/setup.py b/setup.py
index 284091397dbb5aef15d2a16bf7d9337ed9d44980..7141f170f3afa2be5217faff66a2aeb12dbefcbe 100644
--- a/setup.py
+++ b/setup.py
@@ -21,7 +21,7 @@ with open('requirments.txt', encoding="utf-8-sig") as f:
 
 
 def readme():
-    with open('doc/doc_en/whl.md', encoding="utf-8-sig") as f:
+    with open('doc/doc_en/whl_en.md', encoding="utf-8-sig") as f:
         README = f.read()
         return README
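Both hunks above note that a custom `det_model_dir`/`rec_model_dir` must contain the `model` and `params` files produced by the inference-model export step. That precondition can be sketched as a small self-check before constructing `PaddleOCR`; the helper `has_inference_model` below is hypothetical and not part of this patch, just an illustration of the directory layout the docs describe.

```python
import os
import tempfile

def has_inference_model(model_dir):
    # Hypothetical helper: a directory qualifies as an exported inference
    # model only if it contains both the 'model' and 'params' files
    # required by the documentation above.
    return all(os.path.isfile(os.path.join(model_dir, name))
               for name in ('model', 'params'))

# Demonstrate on a throwaway directory.
d = tempfile.mkdtemp()
print(has_inference_model(d))   # False: the directory is empty
for name in ('model', 'params'):
    open(os.path.join(d, name), 'w').close()
print(has_inference_model(d))   # True: both required files now exist
```

Running such a check up front gives a clearer error than letting `PaddleOCR(det_model_dir=...)` fail while loading a directory that was never converted.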