Unverified commit afbe7ad0, authored by littletomatodonkey, committed by GitHub

fix visualization (#924)

* fix visualization

* fix doc of en

* add note for rebuild lib
Parent 1292e016
Global:
  infer_imgs: "./recognition_demo_data_v1.0/test_logo"
  det_inference_model_dir: "./models/ppyolov2_r50vd_dcn_mainbody_v1.0_infer/"
  rec_inference_model_dir: "./models/logo_rec_ResNet50_Logo3K_v1.0_infer/"
  rec_nms_thresold: 0.05
......
@@ -26,9 +26,9 @@ def draw_bbox_results(image,
    if isinstance(image, np.ndarray):
        image = Image.fromarray(image)
    draw = ImageDraw.Draw(image)
    font_size = 18
    font = ImageFont.truetype(font_path, font_size, encoding="utf-8")

    color = (0, 102, 255)

    for result in results:
@@ -38,15 +38,14 @@ def draw_bbox_results(image,
        xmin, ymin, xmax, ymax = result["bbox"]
        text = "{}, {:.2f}".format(result["rec_docs"], result["rec_scores"])
        th = font_size
        tw = int(len(result["rec_docs"]) * font_size) + 60
        start_y = max(0, ymin - th)

        # draw a filled label background above the bbox, then the label text in white
        draw.rectangle(
            [(xmin + 1, start_y), (xmin + tw + 1, start_y + th)], fill=color)

        draw.text((xmin + 1, start_y), text, fill=(255, 255, 255), font=font)

        # draw the detection bbox in red
        draw.rectangle(
            [(xmin, ymin), (xmax, ymax)], outline=(255, 0, 0), width=2)
......
@@ -34,32 +34,43 @@ If the image category already exists in the image index database, then you can t
The detection model, the recognition inference models for the 4 directions (Logo, Cartoon Face, Vehicle, Product), and the corresponding configuration files can be downloaded from the links below; the demo/test data download link is given in the note after the table.

| Models Introduction | Recommended Scenarios | Inference Model | Predict Config File | Config File to Build Index Database |
| ------------ | ------------- | -------- | ------- | -------- |
| Generic mainbody detection model | General Scenarios | [Model Download Link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/ppyolov2_r50vd_dcn_mainbody_v1.0_infer.tar) | - | - |
| Logo Recognition Model | Logo Scenario | [Model Download Link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/logo_rec_ResNet50_Logo3K_v1.0_infer.tar) | [inference_logo.yaml](../../../deploy/configs/inference_logo.yaml) | [build_logo.yaml](../../../deploy/configs/build_logo.yaml) |
| Cartoon Face Recognition Model | Cartoon Face Scenario | [Model Download Link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/cartoon_rec_ResNet50_iCartoon_v1.0_infer.tar) | [inference_cartoon.yaml](../../../deploy/configs/inference_cartoon.yaml) | [build_cartoon.yaml](../../../deploy/configs/build_cartoon.yaml) |
| Vehicle Subclassification Model | Vehicle Scenario | [Model Download Link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/vehicle_cls_ResNet50_CompCars_v1.0_infer.tar) | [inference_vehicle.yaml](../../../deploy/configs/inference_vehicle.yaml) | [build_vehicle.yaml](../../../deploy/configs/build_vehicle.yaml) |
| Product Recognition Model | Product Scenario | [Model Download Link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/product_ResNet50_vd_Inshop_v1.0_infer.tar) | [inference_inshop.yaml](../../../deploy/configs/) | [build_inshop.yaml](../../../deploy/configs/build_inshop.yaml) |
Demo data in this tutorial can be downloaded here: [download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/recognition_demo_data_v1.0.tar).

**Attention**

1. If you do not have wget installed on Windows, you can download the model by copying the link into your browser and unzipping it in the appropriate folder; for Linux or macOS users, you can right-click and copy the download link to download it via the `wget` command.
2. If you want to install `wget` on macOS, you can run the following command.

```shell
# install homebrew
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)";
# install wget
brew install wget
```

3. If you want to install `wget` on Windows, you can refer to [this link](https://www.cnblogs.com/jeshy/p/10518062.html). If you want to install `tar` on Windows, you can refer to [this link](https://www.cnblogs.com/chooperman/p/14190107.html).

* You can download and unzip the data and models by following the commands below.

```shell
mkdir models
cd models
# Download and unzip the inference model
wget {Models download link} && tar -xf {Name of the tar archive}
cd ..
# Download the demo data and unzip
wget {Data download link} && tar -xf {Name of the tar archive}
```
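For example, the placeholders above can be filled in for the Logo direction with the model links from the table and the shared demo data link from the note; this is only a sketch, and all four directions reuse the same mainbody detector and the same demo data archive.

```shell
mkdir models
cd models
# generic mainbody detection inference model (shared by all four directions)
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/ppyolov2_r50vd_dcn_mainbody_v1.0_infer.tar && tar -xf ppyolov2_r50vd_dcn_mainbody_v1.0_infer.tar
# logo recognition inference model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/logo_rec_ResNet50_Logo3K_v1.0_infer.tar && tar -xf logo_rec_ResNet50_Logo3K_v1.0_infer.tar
cd ..
# shared demo data for all four directions
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/recognition_demo_data_v1.0.tar && tar -xf recognition_demo_data_v1.0.tar
```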
@@ -75,27 +86,28 @@ cd models
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/ppyolov2_r50vd_dcn_mainbody_v1.0_infer.tar && tar -xf ppyolov2_r50vd_dcn_mainbody_v1.0_infer.tar
# Download and unpack the inference model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/product_ResNet50_vd_aliproduct_v1.0_infer.tar && tar -xf product_ResNet50_vd_aliproduct_v1.0_infer.tar
cd ..

# Download the demo data and unzip it
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/recognition_demo_data_v1.0.tar && tar -xf recognition_demo_data_v1.0.tar
```
Once unpacked, the `recognition_demo_data_v1.0` folder should have the following file structure.

```
├── recognition_demo_data_v1.0
│   ├── gallery_cartoon
│   ├── gallery_logo
│   ├── gallery_product
│   ├── gallery_vehicle
│   ├── test_cartoon
│   ├── test_logo
│   ├── test_product
│   └── test_vehicle
├── ...
```
Here, the original images used to build the index database are in the `gallery_xxx` folders, and the test images are in the `test_xxx` folders. You can check the specific folders for more details.
The `models` folder should have the following file structure.

@@ -119,33 +131,39 @@ Take the product recognition demo as an example to show the recognition and retr

<a name="recognition_of_single_image"></a>
#### 2.2.1 Single Image Recognition

Run the following command to perform recognition and retrieval on the image `./recognition_demo_data_v1.0/test_product/daoxiangcunjinzhubing_6.jpg`.
```shell
# use the following command to predict using GPU
python3.7 python/predict_system.py -c configs/inference_product.yaml
# use the following command to predict using CPU
python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.use_gpu=False
```
**Note:** The program library used to build the index is compiled on our machine. If an error occurs because of your environment, you can refer to the [vector search tutorial](../../../deploy/vector_search/README.md) to rebuild the library.
The image to be retrieved is shown below.

<div align="center">
<img src="../../images/recognition/product_demo/query/daoxiangcunjinzhubing_6.jpg" width = "400" />
</div>

The final output is shown below.

```
[{'bbox': [287, 129, 497, 326], 'rec_docs': '稻香村金猪饼', 'rec_scores': 0.8309420943260193}, {'bbox': [99, 242, 313, 426], 'rec_docs': '稻香村金猪饼', 'rec_scores': 0.7245652079582214}]
```
where `bbox` indicates the location of the detected object, `rec_docs` indicates the label in the index database that is most similar to the detected object, and `rec_scores` indicates the corresponding confidence.
The detection result is also saved in the folder `output`. For this image, the visualization result is as follows.
<div align="center">
<img src="../../images/recognition/product_demo/result/daoxiangcunjinzhubing_6.jpg" width = "400" />
</div>
@@ -155,34 +173,49 @@ The detection result is also saved in the folder `output`, which is shown as fol
If you want to predict all the images in a folder, you can directly modify the `Global.infer_imgs` field in the configuration file, or modify the corresponding configuration through the `-o` parameter, as shown below.
```shell
# use the following command to predict using GPU; you can append `-o Global.use_gpu=False` to predict using CPU
python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.infer_imgs="./recognition_demo_data_v1.0/test_product/"
```
The results on the screen are shown as follows.
```
...
[{'bbox': [37, 29, 123, 89], 'rec_docs': '香奈儿包', 'rec_scores': 0.6163763999938965}, {'bbox': [153, 96, 235, 175], 'rec_docs': '香奈儿包', 'rec_scores': 0.5279821157455444}]
[{'bbox': [735, 562, 1133, 851], 'rec_docs': '香奈儿包', 'rec_scores': 0.5588355660438538}]
[{'bbox': [124, 50, 230, 129], 'rec_docs': '香奈儿包', 'rec_scores': 0.6980369687080383}]
[{'bbox': [0, 0, 275, 183], 'rec_docs': '香奈儿包', 'rec_scores': 0.5818190574645996}]
[{'bbox': [400, 1179, 905, 1537], 'rec_docs': '香奈儿包', 'rec_scores': 0.9814301133155823}]
[{'bbox': [544, 4, 1482, 932], 'rec_docs': '香奈儿包', 'rec_scores': 0.5143815279006958}]
[{'bbox': [29, 42, 194, 183], 'rec_docs': '香奈儿包', 'rec_scores': 0.9543638229370117}]
...
```
All the visualization results are also saved in the folder `output`.
Furthermore, the recognition inference model path can be changed by modifying the `Global.rec_inference_model_dir` field, and the path of the index database can be changed by modifying the `IndexProcess.index_path` field.
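For illustration, here is a sketch of overriding both fields from the command line; the model directory and index path below are assumptions based on the archives unpacked earlier in this tutorial, not values taken from the shipped config file.

```shell
# override the recognition model directory and the index path without editing the yaml file
python3.7 python/predict_system.py -c configs/inference_product.yaml \
    -o Global.rec_inference_model_dir="./models/product_ResNet50_vd_aliproduct_v1.0_infer" \
    -o IndexProcess.index_path="./recognition_demo_data_v1.0/gallery_product/index"
```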
<a name="unkonw_category_image_recognition_experience"></a> <a name="unkonw_category_image_recognition_experience"></a>
## 3. Recognize Images of Unknown Category ## 3. Recognize Images of Unknown Category
To recognize the image `./dataset/product_demo_data_v1.0/query/anmuxi.jpg`, run the command as follows: To recognize the image `./recognition_demo_data_v1.0/test_product/anmuxi.jpg`, run the command as follows:
```shell ```shell
python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.infer_imgs="./dataset/product_demo_data_v1.0/query/anmuxi.jpg" python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.infer_imgs="./recognition_demo_data_v1.0/test_product/anmuxi.jpg"
``` ```
The image to be retrieved is shown below. The image to be retrieved is shown below.
<div align="center"> <div align="center">
<img src="../../images/recognition/product_demo/anmuxi.jpg" width = "400" /> <img src="../../images/recognition/product_demo/query/anmuxi.jpg" width = "400" />
</div> </div>
The output is as follows: The output is empty.
```
[{'bbox': [243, 80, 523, 522], 'rec_docs': ['娃哈哈AD钙奶', '旺仔牛奶', '娃哈哈AD钙奶', '农夫山泉矿泉水', '红牛'], 'rec_scores': array([548.33282471, 411.85687256, 408.39770508, 400.89404297, 360.41540527])}]
```
Since the corresponding index database does not contain this category, the recognition result is empty or incorrect. In this case, we can recognize images of unknown categories by building a new index database.
When the index database cannot cover the scenes we actually need to recognize, i.e. when predicting images of unknown categories, we need to add similar images of the corresponding categories to the index database, thus completing the recognition of images of unknown categories without retraining.
@@ -192,14 +225,14 @@ When the index database cannot cover the scenes we actually recognise, i.e. when
First, you need to copy the images which are similar to the image to be retrieved into the original image folder of the index database. The command is as follows.

```shell
cp -r ../docs/images/recognition/product_demo/gallery/anmuxi ./recognition_demo_data_v1.0/gallery_product/gallery/
```
Then you need to create a new label file which records the image path and label information. Use the following command to create a new file based on the original one.

```shell
# copy the file
cp recognition_demo_data_v1.0/gallery_product/data_file.txt recognition_demo_data_v1.0/gallery_product/data_file_update.txt
```

Then add some new lines into the new label file, which is shown as follows.
@@ -213,7 +246,7 @@ gallery/anmuxi/005.jpg 安慕希酸奶
gallery/anmuxi/006.jpg 安慕希酸奶
```
Each line can be split into two fields. The first field denotes the relative image path, and the second field denotes its label. The `delimiter` is `tab` here.
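As a sketch (assuming the copied `anmuxi` gallery images are named `001.jpg` through `006.jpg`, as in the snippet above), the new entries could be appended with a literal tab between the two fields:

```shell
# append the new gallery images to the updated label file; the two fields are tab-separated
for i in 001 002 003 004 005 006; do
    printf 'gallery/anmuxi/%s.jpg\t安慕希酸奶\n' "$i" >> ./recognition_demo_data_v1.0/gallery_product/data_file_update.txt
done
```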
<a name="build_a_new_index_library"></a> <a name="build_a_new_index_library"></a>
...@@ -222,25 +255,30 @@ Each line can be splited into two fields. The first field denotes the relative i ...@@ -222,25 +255,30 @@ Each line can be splited into two fields. The first field denotes the relative i
Use the following command to build the index to accelerate the retrieval process after recognition. Use the following command to build the index to accelerate the retrieval process after recognition.
```shell
python3.7 python/build_gallery.py -c configs/build_product.yaml -o IndexProcess.data_file="./recognition_demo_data_v1.0/gallery_product/data_file_update.txt" -o IndexProcess.index_path="./recognition_demo_data_v1.0/gallery_product/index_update"
```
Finally, the new index information is stored in the folder `./recognition_demo_data_v1.0/gallery_product/index_update`. Use this new index database for the retrieval below.
<a name="Image_differentiation_based_on_the_new_index_library"></a> <a name="Image_differentiation_based_on_the_new_index_library"></a>
### 3.2 Recognize the Unknown Category Images ### 3.2 Recognize the Unknown Category Images
To recognize the image `./dataset/product_demo_data_v1.0/query/anmuxi.jpg`, run the command as follows. To recognize the image `./recognition_demo_data_v1.0/test_product/anmuxi.jpg`, run the command as follows.
```shell ```shell
python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.infer_imgs="./dataset/product_demo_data_v1.0/query/anmuxi.jpg" -o IndexProcess.index_path="./dataset/product_demo_data_v1.0/index_update" # using the following command to predict using GPU, you can append `-o Global.use_gpu=False` to predict using CPU.
python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.infer_imgs="./recognition_demo_data_v1.0/test_product/anmuxi.jpg" -o IndexProcess.index_path="./recognition_demo_data_v1.0/gallery_product/index_update"
``` ```
The output is as follows:

```
[{'bbox': [243, 80, 523, 522], 'rec_docs': '安慕希酸奶', 'rec_scores': 0.5570770502090454}]
```
The final recognition result is `安慕希酸奶`, which is correct. The visualization result is as follows.
<div align="center">
<img src="../../images/recognition/product_demo/result/anmuxi.jpg" width = "400" />
</div>
@@ -157,7 +157,7 @@ python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.u

where bbox indicates the location of the detected object, rec_docs indicates the category in the index database that is most similar to the detection box, and rec_scores indicates the corresponding confidence.

The visualized detection result is also saved under the `output` folder. For this image, the visualized recognition result is shown below.

<div align="center">
<img src="../../images/recognition/product_demo/result/daoxiangcunjinzhubing_6.jpg" width = "400" />
@@ -275,7 +275,7 @@ python3.7 python/predict_system.py -c configs/inference_product.yaml -o Global.i

[{'bbox': [243, 80, 523, 522], 'rec_docs': '安慕希酸奶', 'rec_scores': 0.5570770502090454}]
```

The final recognition result is `安慕希酸奶`, which is correct. The visualized recognition result is shown below.

<div align="center">
<img src="../../images/recognition/product_demo/result/anmuxi.jpg" width = "400" />
......