You can modify the `ARCHITECTURE.name` field and `pretrained_model` field in `configs/eval.yaml` to configure the evaluation model, and you can also update the configuration through the `-o` parameter.
The above command will use `./configs/quick_start/MobileNetV3_large_x1_0_finetune.yaml` as the configuration file to evaluate the model `./output/MobileNetV3_large_x1_0/best_model/ppcls`. You can also configure the evaluation by changing the parameters in the configuration file, or update the configuration with the `-o` parameter, as shown above.
Some of the configurable evaluation parameters are described as follows:
* `ARCHITECTURE.name`: Model name
* `pretrained_model`: The path of the model file to be evaluated
* `load_static_weights`: Whether the model to be evaluated is a static graph model
**Note:** If the model to be evaluated is a dygraph model, you only need to specify the prefix of the model file when loading it, not the suffix, as in [1.3 Resume Training](#13-resume-training).
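For illustration, an evaluation command that overrides these fields from the command line might look like the following sketch (the command itself is elided in this excerpt; `tools/eval.py` is assumed as the evaluation entry point, and the paths are the ones named in this section):

```bash
# Evaluate the fine-tuned model. Note that pretrained_model is the file
# prefix of the weights, not a filename with a suffix (see the note above).
python tools/eval.py \
    -c ./configs/quick_start/MobileNetV3_large_x1_0_finetune.yaml \
    -o pretrained_model="./output/MobileNetV3_large_x1_0/best_model/ppcls" \
    -o load_static_weights=False
```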
Among them, `pretrained_model` sets the path of the pretrained weights to load. When using it, replace it with the path to your own pretrained weights, or modify the path directly in the configuration file.
About parameter description, see [1.4 Model evaluation](#14-model-evaluation) for details.
<a name="model_infer"></a>
## 3. Use the pre-trained model to predict
...
...
After the training is completed, you can predict by using the pre-trained model.
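To obtain the converted model referred to below, the trained weights are first exported to an inference model; a sketch of such a command follows (the actual command is elided in this excerpt, and `tools/export_model.py` is assumed as the export entry point):

```bash
# Convert the trained dygraph model into an inference model.
# --output_path is the storage path of the converted model; this choice
# would yield the ./inference/cls_infer.* files used below.
python tools/export_model.py \
    --model=MobileNetV3_large_x1_0 \
    --pretrained_model=./output/MobileNetV3_large_x1_0/best_model/ppcls \
    --output_path=./inference/cls_infer
```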
Among them, the `--model` parameter specifies the model name, the `--pretrained_model` parameter specifies the path of the model file (the path does not need to include the model file suffix), and `--output_path` specifies the storage path of the converted model.
**Note**:
...
...
The above command will generate the model structure file (`cls_infer.pdmodel`) and the model weights file (`cls_infer.pdiparams`), which can then be used for inference with the inference engine:
```bash
python tools/infer/predict.py \
    --image_file image path \
    -m "./inference/cls_infer.pdmodel" \
    -p "./inference/cls_infer.pdiparams" \
    --use_gpu=True \
    --use_tensorrt=False
```
Among them:
+ `image_file`(i): The path of the image file to be predicted, such as `./test.jpeg`;
+ `model_file`(m): Model file path, such as `./MobileNetV3_large_x1_0/cls_infer.pdmodel`;
+ `params_file`(p): Weight file path, such as `./MobileNetV3_large_x1_0/cls_infer.pdiparams`;
+ `use_tensorrt`: Whether to use TensorRT; the default is `True`;
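As a usage example, the same prediction can be run on CPU with TensorRT disabled by flipping the flags listed above (`./test.jpeg` is the sample path named in the list):

```bash
# Run inference on CPU; both GPU and TensorRT are turned off.
python tools/infer/predict.py \
    --image_file ./test.jpeg \
    -m "./inference/cls_infer.pdmodel" \
    -p "./inference/cls_infer.pdiparams" \
    --use_gpu=False \
    --use_tensorrt=False
```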