After the PC and mobile phone are successfully connected, use the following command to start the model evaluation.
```
sh deploy/lite/benchmark/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
```
Where `./benchmark_bin_v8` is the path of the benchmark binary file, `./inference` is the directory containing all the models to be evaluated, `result_armv8.txt` is the result file, and the final parameter `true` means that the models will be optimized before evaluation. The resulting `result_armv8.txt` will be saved in the current folder. The specific performances are as follows.
Among them, the `--model` parameter specifies the model name, the `--pretrained_model` parameter specifies the path of the model weights (the path does not need to include the file suffix), and `--output_path` specifies the storage path of the converted model.
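For reference, a minimal export invocation might look like the sketch below. The script path `tools/export_model.py`, the model name, and the checkpoint path are illustrative placeholders; substitute the values from your own training run:

```bash
# Hypothetical example: export a trained MobileNetV3 model for inference.
# Adjust --model and --pretrained_model to match your own training output.
python tools/export_model.py \
    --model=MobileNetV3_large_x1_0 \
    --pretrained_model=./output/MobileNetV3_large_x1_0/best_model/ppcls \
    --output_path=./inference
```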
**Note**:
1. If `--output_path=./inference`, then three files will be generated in the folder `inference`: `inference.pdiparams`, `inference.pdmodel` and `inference.pdiparams.info` (see the quick check after this note).
2. In the file `export_model.py:line53`, the `shape` parameter is the shape of the model input image; the default is `224*224`. Please modify it according to the actual situation, as shown below:
```python
50 # Please modify the 'shape' according to actual needs
51 @to_static(input_spec=[
52     paddle.static.InputSpec(
53         shape=[None, 3, 224, 224], dtype='float32')
54 ])
```
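As a quick sanity check (a sketch assuming `--output_path=./inference` as in the note above), you can list the export directory and confirm the three generated files:

```bash
# Should print the three files named in the note:
# inference.pdmodel, inference.pdiparams, inference.pdiparams.info
ls ./inference
```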
The above command will generate the model structure file (`inference.pdmodel`) and the model weight file (`inference.pdiparams`), and then the inference engine can be used for inference:
```bash
python tools/infer/predict.py \
    --image_file <image path> \
    --model_file "./inference/inference.pdmodel" \
    --params_file "./inference/inference.pdiparams" \
    --use_gpu=True \
    --use_tensorrt=False
```
Among them:
+ `image_file`: The path of the image file to be predicted, such as `./test.jpeg`;
+ `model_file`: The path of the model file, such as `./MobileNetV3_large_x1_0/inference.pdmodel`;
+ `params_file`: The path of the weight file, such as `./MobileNetV3_large_x1_0/inference.pdiparams`;
+ `use_tensorrt`: Whether to use TensorRT; the default value is `True`;
+ `use_gpu`: Whether to use the GPU; the default value is `True`;
+ `enable_mkldnn`: Whether to use MKL-DNN; the default value is `False`. When both `use_gpu` and `enable_mkldnn` are set to `True`, the GPU is used and `enable_mkldnn` is ignored.
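For example, to run the same prediction on the CPU with MKL-DNN enabled (a sketch reusing only the options documented above; the image path is a placeholder):

```bash
# CPU inference: disable the GPU so that enable_mkldnn takes effect.
python tools/infer/predict.py \
    --image_file ./test.jpeg \
    --model_file ./inference/inference.pdmodel \
    --params_file ./inference/inference.pdiparams \
    --use_gpu=False \
    --enable_mkldnn=True
```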