Unverified commit e7dbecd2, authored by littletomatodonkey, committed by GitHub

fix predict (#527)

* fix predict

* fix export model

* fix doc
Parent: 5854b7c6
@@ -44,7 +44,7 @@ wget -c https://paddle-inference-dist.bj.bcebos.com/PaddleLite/benchmark_0/bench
After the PC and mobile phone are successfully connected, use the following command to start the model evaluation.
```
-sh tools/lite/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
+sh deploy/lite/benchmark/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
```
Where `./benchmark_bin_v8` is the path of the benchmark binary file, `./inference` is the path of all the models that need to be evaluated, `result_armv8.txt` is the result file, and the final parameter `true` means that the model will be optimized before evaluation. The evaluation results will be saved in `result_armv8.txt` in the current folder. The specific performance figures are as follows.
......
@@ -226,13 +226,13 @@ Firstly, you should export inference model using `tools/export_model.py`.
python tools/export_model.py \
    --model MobileNetV3_large_x1_0 \
    --pretrained_model ./output/MobileNetV3_large_x1_0/best_model/ppcls \
-    --output_path ./inference/cls_infer
+    --output_path ./inference
```
Among them, the `--model` parameter is used to specify the model name, `--pretrained_model` is used to specify the model file path (the path does not need to include the model file suffix), and `--output_path` is used to specify the storage path of the converted model.
**Note**:
-1. File prefix must be assigned in `--output_path`. If `--output_path=./inference/cls_infer`, then three files will be generated in the folder `inference`: `cls_infer.pdiparams`, `cls_infer.pdmodel` and `cls_infer.pdiparams.info`.
+1. If `--output_path=./inference`, then three files will be generated in the folder `inference`: `inference.pdiparams`, `inference.pdmodel` and `inference.pdiparams.info`.
2. In the file `export_model.py:line53`, the `shape` parameter is the shape of the model input image; the default is `224*224`. Please modify it according to the actual situation, as shown below:
```python
@@ -243,20 +243,20 @@ Among them, the `--model` parameter is used to specify the model name, `--pretra
54 ])
```
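Alternatively, the `tools/export_model.py` hunk later in this diff exposes an `--img_size` argument that feeds the exported `InputSpec` shape, so the input size can apparently be set on the command line instead of editing the file. A minimal sketch, assuming a hypothetical 299x299 model:
```bash
# Sketch: set the exported input shape via --img_size (the argument and its
# default of 224 are visible in the tools/export_model.py hunk below).
python tools/export_model.py \
    --model MobileNetV3_large_x1_0 \
    --pretrained_model ./output/MobileNetV3_large_x1_0/best_model/ppcls \
    --output_path ./inference \
    --img_size 299
```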
-The above command will generate the model structure file (`cls_infer.pdmodel`) and the model weight file (`cls_infer.pdiparams`), and then the inference engine can be used for inference:
+The above command will generate the model structure file (`inference.pdmodel`) and the model weight file (`inference.pdiparams`), and then the inference engine can be used for inference:
```bash
python tools/infer/predict.py \
    --image_file image path \
-    --model_file "./inference/cls_infer.pdmodel" \
-    --params_file "./inference/cls_infer.pdiparams" \
+    --model_file "./inference/inference.pdmodel" \
+    --params_file "./inference/inference.pdiparams" \
    --use_gpu=True \
    --use_tensorrt=False
```
Among them:
+ `image_file`: The path of the image file to be predicted, such as `./test.jpeg`;
-+ `model_file`: Model file path, such as `./MobileNetV3_large_x1_0/cls_infer.pdmodel`;
-+ `params_file`: Weight file path, such as `./MobileNetV3_large_x1_0/cls_infer.pdiparams`;
++ `model_file`: Model file path, such as `./MobileNetV3_large_x1_0/inference.pdmodel`;
++ `params_file`: Weight file path, such as `./MobileNetV3_large_x1_0/inference.pdiparams`;
+ `use_tensorrt`: Whether to use TensorRT; the default is `True`;
+ `use_gpu`: Whether to use the GPU; the default is `True`;
+ `enable_mkldnn`: Whether to use `MKL-DNN`; the default is `False`. When both `use_gpu` and `enable_mkldnn` are set to `True`, the GPU is used and `enable_mkldnn` is ignored.
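Since `predict.py` now builds its input list with the `get_image_list` helper added in this commit, `--image_file` should also accept a directory of images. A hedged sketch (the directory name is a placeholder):
```bash
# Sketch based on the get_image_list helper added in this commit: pointing
# --image_file at a directory predicts every jpg/png/jpeg/JPEG/JPG/bmp file
# in it. The directory name here is a placeholder.
python tools/infer/predict.py \
    --image_file ./images_to_predict/ \
    --model_file "./inference/inference.pdmodel" \
    --params_file "./inference/inference.pdiparams" \
    --use_gpu=True \
    --use_tensorrt=False
```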
......
@@ -45,7 +45,7 @@ wget -c https://paddle-inference-dist.bj.bcebos.com/PaddleLite/benchmark_0/bench
After the PC and the mobile phone are successfully connected, use the following command to start the model evaluation.
```
-sh tools/lite/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
+sh deploy/lite/benchmark/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
```
Here `./benchmark_bin_v8` is the path of the benchmark binary, `./inference` is the directory containing all the models to be evaluated, and `result_armv8.txt` is the saved result file; the final parameter `true` means that the models are optimized before evaluation. The evaluation result file `result_armv8.txt` is written to the current folder, with details as follows.
......
@@ -238,13 +238,13 @@ python tools/infer/infer.py \
python tools/export_model.py \
    --model MobileNetV3_large_x1_0 \
    --pretrained_model ./output/MobileNetV3_large_x1_0/best_model/ppcls \
-    --output_path ./inference/cls_infer
+    --output_path ./inference
```
Here the `--model` parameter specifies the model name and `--pretrained_model` specifies the model file path; as in [1.3 Resuming Training](#1.3), the path does not need to include the model file suffix. `--output_path` specifies the storage path of the converted model.
**Note**:
-1. A file-name prefix must be given in `--output_path`. If `--output_path=./inference/cls_infer`, the files `cls_infer.pdiparams`, `cls_infer.pdmodel` and `cls_infer.pdiparams.info` are generated in the folder `inference`.
+1. `--output_path` is the output folder of the inference model. If `--output_path=./inference`, the files `inference.pdiparams`, `inference.pdmodel` and `inference.pdiparams.info` are generated in the folder `inference`.
2. In the file `export_model.py:line53`, the `shape` parameter is the `shape` of the model input image; the default is `224*224`. Please modify it according to the actual situation, as shown below:
```python
50 # Please modify the 'shape' according to actual needs
@@ -254,20 +254,20 @@ python tools/export_model.py \
54 ])
```
-The above command will generate the model structure file (`cls_infer.pdmodel`) and the model weight file (`cls_infer.pdiparams`), after which the inference engine can be used for prediction:
+The above command will generate the model structure file (`inference.pdmodel`) and the model weight file (`inference.pdiparams`), after which the inference engine can be used for prediction:
```bash
python tools/infer/predict.py \
    --image_file image path \
-    --model_file "./inference/cls_infer.pdmodel" \
-    --params_file "./inference/cls_infer.pdiparams" \
+    --model_file "./inference/inference.pdmodel" \
+    --params_file "./inference/inference.pdiparams" \
    --use_gpu=True \
    --use_tensorrt=False
```
Among them:
+ `image_file`: path of the image file to be predicted, such as `./test.jpeg`
-+ `model_file`: path of the model structure file, such as `./inference/cls_infer.pdmodel`
-+ `params_file`: path of the model weight file, such as `./inference/cls_infer.pdiparams`
++ `model_file`: path of the model structure file, such as `./inference/inference.pdmodel`
++ `params_file`: path of the model weight file, such as `./inference/inference.pdiparams`
+ `use_tensorrt`: whether to use the TensorRT inference engine; default: `True`
+ `use_gpu`: whether to use the GPU for prediction; default: `True`
+ `enable_mkldnn`: whether to enable `MKL-DNN` acceleration; default: `False`. Note that when `enable_mkldnn` and `use_gpu` are both `True`, `enable_mkldnn` is ignored and the GPU is used.
......
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle
import numpy as np
......
@@ -33,8 +33,7 @@ def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("-m", "--model", type=str)
    parser.add_argument("-p", "--pretrained_model", type=str)
-    parser.add_argument(
-        "-o", "--output_path", type=str, default="./inference/cls_infer")
+    parser.add_argument("-o", "--output_path", type=str, default="./inference")
    parser.add_argument("--class_dim", type=int, default=1000)
    parser.add_argument("--load_static_weights", type=str2bool, default=False)
    parser.add_argument("--img_size", type=int, default=224)
@@ -73,7 +72,7 @@ def main():
        paddle.static.InputSpec(
            shape=[None, 3, args.img_size, args.img_size], dtype='float32')
    ])
-    paddle.jit.save(model, args.output_path)
+    paddle.jit.save(model, os.path.join(args.output_path, "inference"))

if __name__ == "__main__":
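For reference, the files saved under the fixed `inference` prefix can be loaded with the Paddle inference API already used elsewhere in this commit. A minimal sketch, assuming Paddle 2.x and the default `--output_path`:
```python
# Minimal sketch, assuming Paddle 2.x and the default --output_path above.
from paddle.inference import Config, create_predictor

config = Config("./inference/inference.pdmodel",
                "./inference/inference.pdiparams")
config.disable_gpu()  # keep the sketch CPU-only
predictor = create_predictor(config)
print(predictor.get_input_names())
```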
......
@@ -14,19 +14,21 @@
import numpy as np
import cv2
+import utils
import shutil
import os
import sys
+import paddle
+import paddle.nn.functional as F

__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
sys.path.append(os.path.abspath(os.path.join(__dir__, '../..')))

from ppcls.utils.save_load import load_dygraph_pretrain
from ppcls.modeling import architectures
-import utils
-import paddle
-import paddle.nn.functional as F
+from utils import get_image_list


def postprocess(outputs, topk=5):
@@ -36,23 +38,6 @@ def postprocess(outputs, topk=5):
    return zip(index, prob[index])

-def get_image_list(img_file):
-    imgs_lists = []
-    if img_file is None or not os.path.exists(img_file):
-        raise Exception("not found any img file in {}".format(img_file))
-
-    img_end = ['jpg', 'png', 'jpeg', 'JPEG', 'JPG', 'bmp']
-    if os.path.isfile(img_file) and img_file.split('.')[-1] in img_end:
-        imgs_lists.append(img_file)
-    elif os.path.isdir(img_file):
-        for single_file in os.listdir(img_file):
-            if single_file.split('.')[-1] in img_end:
-                imgs_lists.append(os.path.join(img_file, single_file))
-    if len(imgs_lists) == 0:
-        raise Exception("not found any img file in {}".format(img_file))
-    return imgs_lists

def save_prelabel_results(class_id, input_filepath, output_idr):
    output_dir = os.path.join(output_idr, str(class_id))
    if not os.path.isdir(output_dir):
......
@@ -12,13 +12,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-import sys
-sys.path.insert(0, ".")
-import tools.infer.utils as utils
import numpy as np
import cv2
import time

+import sys
+sys.path.insert(0, ".")
+import tools.infer.utils as utils
+from tools.infer.utils import get_image_list


def predict(args, predictor):
    input_names = predictor.get_input_names()
@@ -32,12 +34,18 @@ def predict(args, predictor):
    if not args.enable_benchmark:
        # for PaddleHubServing
        if args.hubserving:
-            img = args.image_file
+            img_list = [args.image_file]
        # for predict only
        else:
-            img = cv2.imread(args.image_file)[:, :, ::-1]
-        assert img is not None, "Error in loading image: {}".format(
-            args.image_file)
-        inputs = utils.preprocess(img, args)
-        inputs = np.expand_dims(
-            inputs, axis=0).repeat(
+            img_list = get_image_list(args.image_file)
+
+        for idx, img_name in enumerate(img_list):
+            if not args.hubserving:
+                img = cv2.imread(img_name)[:, :, ::-1]
+                assert img is not None, "Error in loading image: {}".format(
+                    img_name)
+            else:
+                img = img_name
+            inputs = utils.preprocess(img, args)
+            inputs = np.expand_dims(
+                inputs, axis=0).repeat(
@@ -50,7 +58,7 @@ def predict(args, predictor):
            classes, scores = utils.postprocess(output, args)
            if args.hubserving:
                return classes, scores
-        print("Current image file: {}".format(args.image_file))
+            print("Current image file: {}".format(img_name))
            print("\ttop-1 class: {0}".format(classes[0]))
            print("\ttop-1 score: {0}".format(scores[0]))
    else:
......
@@ -12,9 +12,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.

+import os
import argparse
import cv2
import numpy as np

from paddle.inference import Config
from paddle.inference import create_predictor
@@ -125,6 +127,23 @@ def postprocess(output, args):
    return classes, scores

+def get_image_list(img_file):
+    imgs_lists = []
+    if img_file is None or not os.path.exists(img_file):
+        raise Exception("not found any img file in {}".format(img_file))
+
+    img_end = ['jpg', 'png', 'jpeg', 'JPEG', 'JPG', 'bmp']
+    if os.path.isfile(img_file) and img_file.split('.')[-1] in img_end:
+        imgs_lists.append(img_file)
+    elif os.path.isdir(img_file):
+        for single_file in os.listdir(img_file):
+            if single_file.split('.')[-1] in img_end:
+                imgs_lists.append(os.path.join(img_file, single_file))
+    if len(imgs_lists) == 0:
+        raise Exception("not found any img file in {}".format(img_file))
+    return imgs_lists

class ResizeImage(object):
    def __init__(self, resize_short=None):
        self.resize_short = resize_short
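A short usage sketch for the relocated helper, run from the repository root (the paths below are placeholders, not files guaranteed to exist):
```python
# Usage sketch for the relocated get_image_list helper; paths are placeholders.
import sys
sys.path.insert(0, ".")

from tools.infer.utils import get_image_list

# A single image path yields a one-element list; a directory is scanned
# (non-recursively) for files ending in jpg/png/jpeg/JPEG/JPG/bmp.
print(get_image_list("./test.jpeg"))
print(get_image_list("./images_to_predict"))
```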
......