# Quick Start of Recognition

This document contains two parts: a quick start for the PP-ShiTu Android demo and a quick start for the PP-ShiTu PC demo.

If the image category already exists in the image index database, you can refer directly to the [Image Recognition Experience](#22-image-recognition-experience) section to complete the recognition process; if you want to recognize images of unknown categories, i.e. categories that were not previously in the index database, you can refer to the [Unknown Category Image Recognition Experience](#23-image-of-unknown-categories-recognition-experience) section to build the index and complete recognition.

## Catalogue

- [1. PP-ShiTu android demo for quick start](#1-pp-shitu-android-demo-for-quick-start)
  - [1.1 Install PP-ShiTu android demo](#11-install-pp-shitu-android-demo)
  - [1.2 Feature Experience](#12-feature-experience)
    - [1.2.1 Image Retrieval](#121-image-retrieval)
    - [1.2.2 Update Index](#122-update-index)
    - [1.2.3 Save Index](#123-save-index)
    - [1.2.4 Initialize Index](#124-initialize-index)
    - [1.2.5 Preview Index](#125-preview-index)
  - [1.3 Feature Details](#13-feature-details)
    - [1.3.1 Image Retrieval](#131-image-retrieval)
    - [1.3.2 Update Index](#132-update-index)
    - [1.3.3 Save Index](#133-save-index)
    - [1.3.4 Initialize Index](#134-initialize-index)
    - [1.3.5 Preview Index](#135-preview-index)
- [2. PP-ShiTu PC demo for quick start](#2-pp-shitu-pc-demo-for-quick-start)
  - [2.1 Environment configuration](#21-environment-configuration)
  - [2.2 Image recognition experience](#22-image-recognition-experience)
    - [2.2.1 Download and unzip the inference model and demo data](#221-download-and-unzip-the-inference-model-and-demo-data)
    - [2.2.2 Drink recognition and retrieval](#222-drink-recognition-and-retrieval)
      - [2.2.2.1 single image recognition](#2221-single-image-recognition)
      - [2.2.2.2 Folder-based batch recognition](#2222-folder-based-batch-recognition)
  - [2.3 Image of Unknown categories recognition experience](#23-image-of-unknown-categories-recognition-experience)
    - [2.3.1 Prepare new data and labels](#231-prepare-new-data-and-labels)
    - [2.3.2 Create a new index database](#232-create-a-new-index-database)
    - [2.3.3 Image recognition based on the new index database](#233-image-recognition-based-on-the-new-index-database)
  - [2.4 List of server recognition models](#24-list-of-server-recognition-models)

<a name="PP-ShiTu android quick start"></a>

## 1. PP-ShiTu android demo for quick start

<a name="install"></a>

### 1.1 Install PP-ShiTu android demo

You can download and install the APP by scanning the QR code or [click the link](https://paddle-imagenet-models-name.bj.bcebos.com/demos/PP-ShiTu.apk)

<div align=center><img src="../../images/quick_start/android_demo/PPShiTu_qrcode.png" height="400" width="400"/></div>

<a name="Feature Experience"></a>

### 1.2 Feature Experience
At present, the PP-ShiTu Android demo has basic features such as image retrieval, updating the index database, saving the index database, initializing the index database, and previewing the index database. The following sections introduce how to experience these features.

#### 1.2.1 Image Retrieval
Click the "photo recognition" button <img src="../../images/quick_start/android_demo/paizhaoshibie_100.png" width="25" height="25"/> or the "file recognition" button <img src="../../images/quick_start/android_demo/bendishibie_100.png" width="25" height="25"/> below, then take a photo or select an image. After a few seconds, the main object in the image will be marked, and the predicted class and the inference time will be shown below the image.

Take the following image as an example:

<img src="../../images/recognition/drink_data_demo/test_images/nongfu_spring.jpeg" width="400" height="600"/>

The retrieval results obtained are visualized as follows:

<img src="../../images/quick_start/android_demo/android_nongfu_spring.JPG" width="400" height="800"/>

#### 1.2.2 Update Index
Click the "photo upload" button <img src="../../images/quick_start/android_demo/paizhaoshangchuan_100.png" width="25" height="25"/> or the "file upload" button <img src="../../images/quick_start/android_demo/bendishangchuan_100.png" width="25" height="25"/> above, then take a photo or select an image and enter the class name of the uploaded image (such as `keyboard`). After clicking the "OK" button, the feature vector and class name of the image will be added to the index database.

#### 1.2.3 Save Index
Click the "save index" button above <img src="../../images/quick_start/android_demo/baocunxiugai_100.png" width="25" height="25"/>, you can save the current index database as `latest`.

#### 1.2.4 Initialize Index
Click the "initialize index" button above <img src="../../images/quick_start/android_demo/reset_100.png" width="25" height="25"/> to initialize the current index database back to `original`.

#### 1.2.5 Preview Index
Click the "class preview" button <img src="../../images/quick_start/android_demo/leibiechaxun_100.png" width="25" height="25"/> to view the classes in the current index database in a pop-up window.

<a name="Feature introduction"></a>

### 1.3 Feature Details

#### 1.3.1 Image Retrieval
After the image to be retrieved is selected, mainbody detection is first performed by the detection model to obtain the bounding box of the object in the image. The image is then cropped to this box and fed into the feature extraction model to obtain the corresponding feature vector, which is used to query the index database; the final retrieval results are returned and displayed.
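The retrieval pipeline described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the PaddleClas API: `detect_mainbody` and `extract_feature` are hypothetical placeholders for the real detection and feature-extraction models, and the brute-force cosine search stands in for the index used by the demo.

```python
import numpy as np

def detect_mainbody(image):
    # stand-in detector: pretend the whole image is the detected object
    h, w = image.shape[:2]
    return (0, 0, w, h)  # (x1, y1, x2, y2)

def extract_feature(crop):
    # stand-in extractor: a pooled, L2-normalized feature vector
    feat = crop.mean(axis=(0, 1)).astype("float32")
    return feat / (np.linalg.norm(feat) + 1e-12)

def search(feature, gallery_feats, gallery_labels, top_k=1):
    # brute-force cosine similarity; the real index accelerates exactly this step
    scores = gallery_feats @ feature
    order = np.argsort(-scores)[:top_k]
    return [(gallery_labels[i], float(scores[i])) for i in order]

image = np.ones((32, 32, 3), dtype="float32")
x1, y1, x2, y2 = detect_mainbody(image)
query = extract_feature(image[y1:y2, x1:x2])
gallery_feats = np.stack([query, np.array([1.0, 0.0, 0.0], dtype="float32")])
gallery_labels = ["milk", "cola"]
results = search(query, gallery_feats, gallery_labels)
```

With normalized features, the returned score is the cosine similarity; the same crop should match its own gallery entry with a score near 1.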

#### 1.3.2 Update Index
After the image to be stored is selected, mainbody detection is first performed by the detection model to obtain the bounding box of the object in the image. The image is then cropped to this box and fed into the feature extraction model to obtain the corresponding feature vector, which is added to the index database together with its label.

#### 1.3.3 Save Index
Saves the current index database under the name `latest` and automatically switches to it. The saving logic is similar to "Save As" in common software: if the current index database is already `latest`, it is simply overwritten; otherwise the app switches to `latest` after saving.

#### 1.3.4 Initialize Index
When the index database is initialized, the search index is automatically switched back to `original.index` and `original.txt`, and `latest.index` and `latest.txt` are automatically deleted (if they exist).

#### 1.3.5 Preview Index
One can preview it according to the instructions in [Function Experience - Preview Index](#125-preview-index).


## 2. PP-ShiTu PC demo for quick start

<a name="Environment Configuration"></a>

### 2.1 Environment configuration

* Installation: Please refer to the document [Environment Preparation](../installation/install_paddleclas_en.md) to configure the PaddleClas operating environment.

* Go to the `deploy` directory. All the commands and scripts in this section need to be run in the `deploy` directory; you can enter it with the following command.

  ```shell
  cd deploy
  ```

<a name="Image Recognition Experience"></a>

### 2.2 Image recognition experience

The lightweight general object detection model, the lightweight general recognition model, and the corresponding configuration file are available in the following table.

<a name="Lightweight General object Detection Model and Lightweight General Recognition Model"></a>

| Model Introduction                         | Recommended Scenarios | Inference Model    | Prediction Profile                                                       |
| ------------------------------------------ | --------------------- | ------------------ | ------------------------------------------------------------------------ |
| Lightweight General Mainbody Detection Model | General Scene | [tar format download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar) \| [zip format download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.zip) | - |
| Lightweight General Recognition Model      | General Scene         | [tar format download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/PP-ShiTuV2/general_PPLCNetV2_base_pretrained_v1.0_infer.tar) \| [zip format download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/PP-ShiTuV2/general_PPLCNetV2_base_pretrained_v1.0_infer.zip) | [inference_general.yaml](../../../deploy/configs/inference_general.yaml) |

Note: Since some decompression software has problems decompressing the above `tar` files, users who do not work on the command line are recommended to download the `zip` files and decompress them; for `tar` files, use the command `tar -xf xxx.tar` to extract them.

The demo data used in this chapter can be downloaded here: [drink_dataset_v2.0.tar (drink data)](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v2.0.tar).

The following takes **drink_dataset_v2.0.tar** as an example to introduce the PP-ShiTu quick start process on the PC.

<!-- TODO -->
<!-- Users can also download and decompress the data of other scenarios to experience: [22 scenarios data download](../../zh_CN/introduction/ppshitu_application_scenarios.md#22-下载解压场景库数据). -->

If you want to experience the server-side object detection model and the recognition models for each scenario, you can refer to [2.4 List of server recognition models](#24-list-of-server-recognition-models).

**Notice**

- If `wget` is not installed in the Windows environment, you can install the `wget` and `tar` commands by following the steps below, or copy the link into a browser to download the model, then decompress it and place it in the corresponding directory.
- If `wget` is not installed in the macOS environment, you can run the following commands to install it.
    ```shell
    # install homebrew
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    # install wget
    brew install wget
    ```
- If you want to install `wget` in the Windows environment, you can refer to: [link](https://www.cnblogs.com/jeshy/p/10518062.html); if you want to install the `tar` command in the Windows environment, you can refer to: [link](https://www.cnblogs.com/chooperman/p/14190107.html).

<a name="2.2.1"></a>

#### 2.2.1 Download and unzip the inference model and demo data

Download the demo dataset and the lightweight mainbody detection and recognition models with the following commands.

```shell
mkdir models
cd models
# Download the mainbody detection inference model and unzip it
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar && tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
# Download the feature extraction inference model and unzip it
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/PP-ShiTuV2/general_PPLCNetV2_base_pretrained_v1.0_infer.tar && tar -xf general_PPLCNetV2_base_pretrained_v1.0_infer.tar

cd ../
# Download demo data and unzip it
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v2.0.tar && tar -xf drink_dataset_v2.0.tar
```

After decompression, the `drink_dataset_v2.0/` folder is structured as follows:

```log
├── drink_dataset_v2.0/
│   ├── gallery/
│   ├── index/
│   ├── index_all/
│   └── test_images/
├── ...
```

The `gallery` folder stores the original images used to build the index database, the `index` folder is the index database built from those images, and the `test_images` folder stores the query images.

The `models` folder should be structured as follows:

```log
├── general_PPLCNetV2_base_pretrained_v1.0_infer
│   ├── inference.pdiparams
│   ├── inference.pdiparams.info
│   └── inference.pdmodel
├── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer
│   ├── inference.pdiparams
│   ├── inference.pdiparams.info
│   └── inference.pdmodel
```

**Notice**

If the general feature extraction model is changed, the index of the demo data must be rebuilt as follows:

```shell
python3.7 python/build_gallery.py \
    -c configs/inference_general.yaml \
    -o Global.rec_inference_model_dir=./models/general_PPLCNetV2_base_pretrained_v1.0_infer
```

<a name="Drink Recognition and Retrieval"></a>

#### 2.2.2 Drink recognition and retrieval

Take the drink recognition demo as an example to show the recognition and retrieval process.

Note that this section uses `faiss` as the retrieval tool; install it with the following command:

```shell
python3.7 -m pip install faiss-cpu==1.7.1post2
```

If `faiss` cannot be imported, try reinstalling it; this is especially common for Windows users.

<a name="single image recognition"></a>

##### 2.2.2.1 single image recognition

Run the following command to recognize the image `./drink_dataset_v2.0/test_images/100.jpeg`.

The image to be retrieved is shown below.

![](../../images/recognition/drink_data_demo/test_images/100.jpeg)

```shell
# Use the script below to make predictions using the GPU
python3.7 python/predict_system.py -c configs/inference_general.yaml

# Use the following script to make predictions using the CPU
python3.7 python/predict_system.py -c configs/inference_general.yaml -o Global.use_gpu=False
```

The final output is as follows.

```log
[{'bbox': [437, 71, 660, 728], 'rec_docs': '元气森林', 'rec_scores': 0.7740249}, {'bbox': [221, 72, 449, 701], 'rec_docs': '元气森林', 'rec_scores': 0.6950992}, {'bbox': [794, 104, 979, 652], 'rec_docs': '元气森林', 'rec_scores': 0.6305153}]
```

Where `bbox` represents the location of the detected object, `rec_docs` represents the most similar category to the detection box in the index database, and `rec_scores` represents the corresponding similarity.
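Since the output is a plain Python list of dicts, it is easy to post-process. Below is a minimal helper (an illustration, not part of PaddleClas) that keeps only detections above a similarity threshold:

```python
def filter_results(results, score_threshold=0.5):
    # keep only detections whose retrieval similarity passes the threshold
    return [r for r in results if r["rec_scores"] >= score_threshold]

# the list printed by predict_system.py for the image above
results = [
    {"bbox": [437, 71, 660, 728], "rec_docs": "元气森林", "rec_scores": 0.7740249},
    {"bbox": [221, 72, 449, 701], "rec_docs": "元气森林", "rec_scores": 0.6950992},
    {"bbox": [794, 104, 979, 652], "rec_docs": "元气森林", "rec_scores": 0.6305153},
]
confident = filter_results(results, score_threshold=0.69)
```

With a threshold of 0.69, only the first two detections are kept. The pipeline itself already applies a similar threshold internally via the `IndexProcess.score_thres` field in the configuration file.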

The visualization results of the recognition are saved in the `output` folder by default. For this image, the visualization of the recognition results is shown below.

![](../../images/recognition/drink_data_demo/output/100.jpeg)

<a name="Folder-based batch recognition"></a>

##### 2.2.2.2 Folder-based batch recognition

If you want to predict multiple images in a folder, you can modify the `Global.infer_imgs` field in the configuration file, or override the corresponding configuration with the `-o` parameter as below.

```shell
# Use the following command to predict with the GPU; to predict with the CPU, append -o Global.use_gpu=False to the command
python3.7 python/predict_system.py -c configs/inference_general.yaml -o Global.infer_imgs="./drink_dataset_v2.0/test_images/"
```

The recognition results of all images in the folder will be output in the terminal, as shown below.

```log
...
[{'bbox': [0, 0, 600, 600], 'rec_docs': '红牛-强化型', 'rec_scores': 0.74081033}]
Inference: 120.39852142333984 ms per batch image
[{'bbox': [0, 0, 514, 436], 'rec_docs': '康师傅矿物质水', 'rec_scores': 0.6918598}]
Inference: 32.045602798461914 ms per batch image
[{'bbox': [138, 40, 573, 1198], 'rec_docs': '乐虎功能饮料', 'rec_scores': 0.68214047}]
Inference: 113.41428756713867 ms per batch image
[{'bbox': [328, 7, 467, 272], 'rec_docs': '脉动', 'rec_scores': 0.60406065}]
Inference: 122.04337120056152 ms per batch image
[{'bbox': [242, 82, 498, 726], 'rec_docs': '味全_每日C', 'rec_scores': 0.5428652}]
Inference: 37.95266151428223 ms per batch image
[{'bbox': [437, 71, 660, 728], 'rec_docs': '元气森林', 'rec_scores': 0.7740249}, {'bbox': [221, 72, 449, 701], 'rec_docs': '元气森林', 'rec_scores': 0.6950992}, {'bbox': [794, 104, 979, 652], 'rec_docs': '元气森林', 'rec_scores': 0.6305153}]
...
```

Visualizations of recognition results for all images are also saved in the `output` folder.

Furthermore, you can change the path of the recognition inference model by modifying the `Global.rec_inference_model_dir` field, and change the path of the index database by modifying the `IndexProcess.index_dir` field.
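A field override such as `-o IndexProcess.index_dir=...` amounts to a dotted-key update into the nested YAML configuration. Below is a rough sketch of the idea (the real PaddleClas config parser additionally handles type parsing and validation; this is not its actual code):

```python
def override(config, dotted_key, value):
    # walk the nested dict down to the parent of the final key, then set it
    keys = dotted_key.split(".")
    node = config
    for k in keys[:-1]:
        node = node.setdefault(k, {})
    node[keys[-1]] = value
    return config

# a tiny stand-in for the parsed inference_general.yaml
cfg = {
    "Global": {"use_gpu": True, "infer_imgs": "./images/demo.jpeg"},
    "IndexProcess": {"index_dir": "./drink_dataset_v2.0/index"},
}
override(cfg, "IndexProcess.index_dir", "./drink_dataset_v2.0/index_all")
override(cfg, "Global.use_gpu", False)
```

Only the overridden keys change; everything else in the configuration file keeps its original value.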

<a name="Image of Unknown categories recognition experience"></a>

### 2.3 Image of Unknown categories recognition experience

Now we try to recognize an unseen image, `./drink_dataset_v2.0/test_images/mosilian.jpeg`.

The image to be retrieved is shown below.

![](../../images/recognition/drink_data_demo/test_images/mosilian.jpeg)

Execute the following recognition command:

```shell
# Use the following command to predict with the GPU; to predict with the CPU, append -o Global.use_gpu=False to the command
python3.7 python/predict_system.py -c configs/inference_general.yaml -o Global.infer_imgs="./drink_dataset_v2.0/test_images/mosilian.jpeg"
```

The output result is empty.

Since the default index database does not contain information about this unknown category, the recognition result here is wrong. In this case, we can recognize images of unknown classes by building a new index database.

When the images in the index database do not cover the scene we actually need to recognize, i.e. when recognizing an image of an unknown category, we need to add at least one similar image of that category to the index database. This process does not require retraining the model. Taking `mosilian.jpeg` as an example, just follow the steps below to build a new index database.

<a name="Prepare new data and labels"></a>

#### 2.3.1 Prepare new data and labels

First, copy the image(s) belonging to the unknown category (excluding the query image) into the original image folder of the index database. Here, all the image data has already been placed in the folder `drink_dataset_v2.0/gallery/`.

Then we need to edit the text file that records the image paths and label information. The updated label file has already been placed at `drink_dataset_v2.0/gallery/drink_label_all.txt`. Compared with the original `drink_dataset_v2.0/gallery/drink_label.txt` label file, index images of the 光明 (Bright) and 三元 (Sanyuan) milk series have been added.

In each line of the text file, the first field is the relative path of the image and the second field is the label of the image, separated by a tab (`\t`) character. Note: some editors automatically convert tabs into spaces, which will cause a file parsing error.
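A small checker can catch the tab-versus-space problem early. This is an illustrative helper, not a PaddleClas tool, and the sample paths are made up:

```python
def parse_label_file(lines):
    # each non-empty line must be "<relative image path>\t<label>"
    records = []
    for n, line in enumerate(lines, start=1):
        line = line.rstrip("\n")
        if not line:
            continue
        parts = line.split("\t")
        if len(parts) != 2:
            raise ValueError(f"line {n}: expected exactly one tab separator: {line!r}")
        records.append((parts[0], parts[1]))
    return records

sample = [
    "gallery/milk/001.jpg\t光明_莫斯利安",
    "gallery/cola/001.jpg\t可口可乐",
]
records = parse_label_file(sample)
```

A line whose fields are separated by a space instead of a tab raises a `ValueError`, which is exactly the failure mode described above.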

<a name="Create a new index database"></a>

#### 2.3.2 Create a new index database

Build a new index database `index_all` with the following scripts.

```shell
python3.7 python/build_gallery.py -c configs/inference_general.yaml -o IndexProcess.data_file="./drink_dataset_v2.0/gallery/drink_label_all.txt" -o IndexProcess.index_dir="./drink_dataset_v2.0/index_all"
```

The new index database is saved in the folder `./drink_dataset_v2.0/index_all`. For details about the `yaml` configuration, please refer to the [Vector Search Documentation](../image_recognition_pipeline/vector_search_en.md).

<a name="Image recognition based on the new index database"></a>

#### 2.3.3 Image recognition based on the new index database

To re-recognize the `mosilian.jpeg` image using the new index database, run the following scripts.

```shell
# Run the following command to predict with the GPU; to predict with the CPU, append -o Global.use_gpu=False to the command
python3.7 python/predict_system.py -c configs/inference_general.yaml -o Global.infer_imgs="./drink_dataset_v2.0/test_images/mosilian.jpeg" -o IndexProcess.index_dir="./drink_dataset_v2.0/index_all"
```

The output is as follows.

```log
[{'bbox': [290, 297, 564, 919], 'rec_docs': '光明_莫斯利安', 'rec_scores': 0.59137374}]
```

The final recognition result is `光明_莫斯利安` (Bright Mosilian milk), which is correct; the visualization of the recognition result is shown below.

![](../../images/recognition/drink_data_demo/output/mosilian.jpeg)


<a name="5"></a>

### 2.4 List of server recognition models

At present, we recommend using the models in [Lightweight General Object Detection Model and Lightweight General Recognition Model](#22-image-recognition-experience) for better test results. However, if you want to try the server-side general recognition model, the server-side general object detection model, and the other scenario-specific recognition models, the test data download paths and the corresponding configuration files are listed below.

| Model Introduction                | Recommended Scenarios | Inference Model                                                                                                                                     | Prediction Profile                                                       |
| --------------------------------- | --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------ |
| General Mainbody Detection Model  | General Scene         | [Model download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/ppyolov2_r50vd_dcn_mainbody_v1.0_infer.tar)    | -                                                                        |
| Logo Recognition Model            | Logo Scene            | [Model download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/logo_rec_ResNet50_Logo3K_v1.0_infer.tar)       | [inference_logo.yaml](../../../deploy/configs/inference_logo.yaml)       |
| Anime Character Recognition Model | Anime Character Scene | [Model download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/cartoon_rec_ResNet50_iCartoon_v1.0_infer.tar)  | [inference_cartoon.yaml](../../../deploy/configs/inference_cartoon.yaml) |
| Vehicle Fine-Grained Classification Model | Vehicle Scene | [Model download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/vehicle_cls_ResNet50_CompCars_v1.0_infer.tar)  | [inference_vehicle.yaml](../../../deploy/configs/inference_vehicle.yaml) |
| Product Recognition Model         | Product Scene         | [Model download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/product_ResNet50_vd_aliproduct_v1.0_infer.tar) | [inference_product.yaml](../../../deploy/configs/inference_product.yaml) |
| Vehicle ReID Model                | Vehicle ReID Scene    | [Model download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/vehicle_reid_ResNet50_VERIWild_v1.0_infer.tar) | [inference_vehicle.yaml](../../../deploy/configs/inference_vehicle.yaml) |

The above models can be downloaded to the `deploy/models` folder with the following commands, for use in recognition tasks:
```shell
cd ./deploy
mkdir -p models

cd ./models
# Download the generic object detection model for server and unzip it
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/ppyolov2_r50vd_dcn_mainbody_v1.0_infer.tar && tar -xf ppyolov2_r50vd_dcn_mainbody_v1.0_infer.tar
# Download the generic recognition model and unzip it
wget {recognition model download link} && tar -xf {name of the downloaded archive}
```

Then use the following commands to download the test data for the other recognition scenarios:

```shell
# Go back to the deploy directory
cd ..
# Download test data and unzip
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/recognition_demo_data_en_v1.1.tar && tar -xf recognition_demo_data_en_v1.1.tar
```

After decompression, the `recognition_demo_data_v1.1` folder should have the following file structure:

```log
├── recognition_demo_data_v1.1
│   ├── gallery_cartoon
│   ├── gallery_logo
│   ├── gallery_product
│   ├── gallery_vehicle
│   ├── test_cartoon
│   ├── test_logo
│   ├── test_product
│   └── test_vehicle
├── ...
```

After downloading the model and test data according to the above steps, you can re-build the index database and test the relevant recognition model.

* For more introduction to mainbody detection, please refer to the [mainbody detection tutorial](../image_recognition_pipeline/mainbody_detection_en.md); for feature extraction, please refer to the [feature extraction tutorial](../image_recognition_pipeline/feature_extraction_en.md); for vector search, please refer to the [vector search tutorial](../image_recognition_pipeline/vector_search_en.md).