@@ -153,13 +153,7 @@ result = clas.predict(infer_imgs)
print(next(result))
```
**Note**: The result returned by `model.predict()` is a `generator`, so you need to consume it with the `next()` function or iterate over it with a `for` loop. Each call predicts a batch of `batch_size` images and returns the prediction results. The default `batch_size` is 1, and you can also specify the `batch_size` when instantiating, such as `model = paddleclas.PaddleClas(model_name="PPHGNet_small", batch_size=2)`.
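As a minimal sketch of consuming the generator batch by batch (assuming the `paddleclas` whl package is installed; the demo image path follows this document):

```python
import paddleclas

# Instantiate with batch_size=2, so each yielded item covers up to 2 images.
model = paddleclas.PaddleClas(model_name="PPHGNet_small", batch_size=2)

# predict() returns a generator; each iteration yields the results of one batch.
result = model.predict("docs/images/inference_deployment/whl_demo.jpg")
for batch_result in result:
    print(batch_result)
```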
<a name="3"></a>
...
...
@@ -265,7 +259,7 @@ The results:
**Note**:
* Among the above command, the argument `-o Global.pretrained_model="output/PPHGNet_small/best_model"` specifies the path of the best model weight file. You can specify another path if needed.
* The default test image is `docs/images/inference_deployment/whl_demo.jpg`. You can test another image by specifying the argument `-o Infer.infer_imgs=path_to_test_image`. Both overrides are shown in the sketch after this note.
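A hedged sketch of how these `-o` overrides attach to the inference command (the config file path is illustrative, not taken from this diff; the actual command appears earlier in the document):

```shell
python3 tools/infer.py \
    -c ppcls/configs/ImageNet/PPHGNet/PPHGNet_small.yaml \
    -o Global.pretrained_model="output/PPHGNet_small/best_model" \
    -o Infer.infer_imgs="docs/images/inference_deployment/whl_demo.jpg"
```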
...
...
@@ -410,4 +404,3 @@ PaddleClas provides an example of how to deploy on mobile by Paddle-Lite. Please
Paddle2ONNX supports converting a Paddle Inference model to an ONNX model. You can then deploy the ONNX model on different inference engines, such as TensorRT, OpenVINO, MNN/TNN, NCNN, and so on. For details about Paddle2ONNX, please refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX).
PaddleClas provides an example of how to convert a Paddle Inference model to an ONNX model with the paddle2onnx toolkit and predict with the ONNX model. You can refer to [paddle2onnx](../../../deploy/paddle2onnx/readme_en.md) for deployment details.
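As a minimal sketch of running a converted model with ONNX Runtime (the model path, the first-input convention, and the 224x224 NCHW input shape are assumptions, not taken from this document):

```python
import numpy as np
import onnxruntime as ort

# Load the converted ONNX model; the path is an assumed output of paddle2onnx.
sess = ort.InferenceSession("inference/model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

# Dummy preprocessed input: batch of 1, NCHW, 224x224 (a common ImageNet shape).
dummy_input = np.random.rand(1, 3, 224, 224).astype("float32")

# Run inference; the first output holds the classification logits.
logits = sess.run(None, {input_name: dummy_input})[0]
print("output shape:", logits.shape)
```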