diff --git a/tools/lite/benchmark.sh b/deploy/lite/benchmark/benchmark.sh
similarity index 100%
rename from tools/lite/benchmark.sh
rename to deploy/lite/benchmark/benchmark.sh
diff --git a/docs/en/extension/paddle_mobile_inference_en.md b/docs/en/extension/paddle_mobile_inference_en.md
index bf9252a01a796e5076f4e180c47f165f5f94ff27..44beba110a39160a5cc59bbbb6db4d91d4c01809 100644
--- a/docs/en/extension/paddle_mobile_inference_en.md
+++ b/docs/en/extension/paddle_mobile_inference_en.md
@@ -44,7 +44,7 @@ wget -c https://paddle-inference-dist.bj.bcebos.com/PaddleLite/benchmark_0/bench
 After the PC and mobile phone are successfully connected, use the following command to start the model evaluation.
 
 ```
-sh tools/lite/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
+sh deploy/lite/benchmark/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
 ```
 
 Where `./benchmark_bin_v8` is the path of the benchmark binary file, `./inference` is the path of all the models to be evaluated, `result_armv8.txt` is the result file, and the final parameter `true` means that the model will be optimized before evaluation. The evaluation result file `result_armv8.txt` will be saved in the current folder. The detailed results are as follows.
diff --git a/docs/en/tutorials/getting_started_en.md b/docs/en/tutorials/getting_started_en.md
index db780de21b61ca5e4f4ad2e03ca984d9542baa78..5cdfd409391922b7374dc84451f9b1f8a33c1162 100644
--- a/docs/en/tutorials/getting_started_en.md
+++ b/docs/en/tutorials/getting_started_en.md
@@ -226,13 +226,13 @@ Firstly, you should export inference model using `tools/export_model.py`.
 python tools/export_model.py \
     --model MobileNetV3_large_x1_0 \
     --pretrained_model ./output/MobileNetV3_large_x1_0/best_model/ppcls \
-    --output_path ./inference/cls_infer
+    --output_path ./inference
 ```
 
 Among them, the `--model` parameter specifies the model name, the `--pretrained_model` parameter specifies the model file path (the path does not need to include the model file suffix), and `--output_path` specifies the storage path of the converted model.
 
 **Note**:
-1. A file name prefix must be assigned in `--output_path`. If `--output_path=./inference/cls_infer`, then three files will be generated in the folder `inference`: `cls_infer.pdiparams`, `cls_infer.pdmodel` and `cls_infer.pdiparams.info`.
+1. If `--output_path=./inference`, then three files will be generated in the folder `inference`: `inference.pdiparams`, `inference.pdmodel` and `inference.pdiparams.info`.
 2. In the file `export_model.py:line53`, the `shape` parameter is the shape of the model input image; the default is `224*224`. Please modify it according to the actual situation, as shown below:
 
 ```python
@@ -243,20 +243,20 @@ Among them, the `--model` parameter is used to specify the model name, `--pretra
 54     ])
 ```
-The above command will generate the model structure file (`cls_infer.pdmodel`) and the model weight file (`cls_infer.pdiparams`), and then the inference engine can be used for inference:
+The above command will generate the model structure file (`inference.pdmodel`) and the model weight file (`inference.pdiparams`), and then the inference engine can be used for inference:
 
 ```bash
 python tools/infer/predict.py \
     --image_file image path \
-    --model_file "./inference/cls_infer.pdmodel" \
-    --params_file "./inference/cls_infer.pdiparams" \
+    --model_file "./inference/inference.pdmodel" \
+    --params_file "./inference/inference.pdiparams" \
     --use_gpu=True \
     --use_tensorrt=False
 ```
 
 Among them:
 + `image_file`: The path of the image file to be predicted, such as `./test.jpeg`;
-+ `model_file`: Model file path, such as `./MobileNetV3_large_x1_0/cls_infer.pdmodel`;
-+ `params_file`: Weight file path, such as `./MobileNetV3_large_x1_0/cls_infer.pdiparams`;
++ `model_file`: Model file path, such as `./MobileNetV3_large_x1_0/inference.pdmodel`;
++ `params_file`: Weight file path, such as `./MobileNetV3_large_x1_0/inference.pdiparams`;
 + `use_tensorrt`: Whether to use TensorRT; the default is `True`;
 + `use_gpu`: Whether to use the GPU; the default is `True`;
 + `enable_mkldnn`: Whether to use `MKL-DNN`; the default is `False`. When both `use_gpu` and `enable_mkldnn` are set to `True`, the GPU is used and `enable_mkldnn` is ignored.
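As a reading aid for the documentation changes above, here is a minimal sketch of the inference flow that `tools/infer/predict.py` wraps. The model and params paths follow the exported names described above; `./test.jpeg` is a placeholder image, and the preprocessing is deliberately simplified (no center crop or per-channel normalization, which the real pipeline in `tools/infer/utils.py` performs), so the printed result will differ from the actual script.

```python
import cv2
import numpy as np
from paddle.inference import Config, create_predictor

# Point the config at the files produced by tools/export_model.py.
config = Config("./inference/inference.pdmodel",
                "./inference/inference.pdiparams")
config.disable_gpu()  # CPU here; predict.py switches to GPU via args.use_gpu

predictor = create_predictor(config)
input_tensor = predictor.get_input_handle(predictor.get_input_names()[0])
output_tensor = predictor.get_output_handle(predictor.get_output_names()[0])

# Simplified preprocessing: BGR->RGB, resize to 224x224, scale, NCHW batch.
img = cv2.cvtColor(cv2.imread("./test.jpeg"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224)).astype("float32") / 255.0
inputs = np.ascontiguousarray(np.expand_dims(img.transpose((2, 0, 1)), axis=0))

input_tensor.copy_from_cpu(inputs)
predictor.run()
output = output_tensor.copy_to_cpu()[0]
print("top-1 class: {}".format(output.argmax()))
```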
diff --git a/docs/zh_CN/extension/paddle_mobile_inference.md b/docs/zh_CN/extension/paddle_mobile_inference.md
index 833231f736ffc19cb3b705025d7511dc8eba6bc4..529df90040b65ce23a5e024f9a07f54b51f68391 100644
--- a/docs/zh_CN/extension/paddle_mobile_inference.md
+++ b/docs/zh_CN/extension/paddle_mobile_inference.md
@@ -45,7 +45,7 @@ wget -c https://paddle-inference-dist.bj.bcebos.com/PaddleLite/benchmark_0/bench
 After the PC and the mobile phone are successfully connected, use the following command to start the model evaluation.
 
 ```
-sh tools/lite/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
+sh deploy/lite/benchmark/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
 ```
 
 Where `./benchmark_bin_v8` is the path of the benchmark binary file, `./inference` is the path of all the models to be evaluated, `result_armv8.txt` is the file the results are saved to, and the final parameter `true` means that the model will be optimized first before evaluation. The evaluation result file `result_armv8.txt` will be written to the current folder; the details are as follows.
diff --git a/docs/zh_CN/tutorials/getting_started.md b/docs/zh_CN/tutorials/getting_started.md
index acbc53a083c120ff361b03fe031997bad11ae541..e3a7464e4d7adb8bf7d62af4e13c8bfe23187311 100644
--- a/docs/zh_CN/tutorials/getting_started.md
+++ b/docs/zh_CN/tutorials/getting_started.md
@@ -238,13 +238,13 @@ python tools/infer/infer.py \
 python tools/export_model.py \
     --model MobileNetV3_large_x1_0 \
     --pretrained_model ./output/MobileNetV3_large_x1_0/best_model/ppcls \
-    --output_path ./inference/cls_infer
+    --output_path ./inference
 ```
 
 Among them, the `--model` parameter specifies the model name, `--pretrained_model` specifies the model file path (which, as in [1.3 Resume Training](#1.3), still does not need to include the model file suffix), and `--output_path` specifies the storage path of the converted model.
 
 **Note**:
-1. A file name prefix must be specified in `--output_path`. If `--output_path=./inference/cls_infer`, the files `cls_infer.pdiparams`, `cls_infer.pdmodel` and `cls_infer.pdiparams.info` will be generated in the `inference` folder.
+1. `--output_path` is the folder path of the output inference model. If `--output_path=./inference`, the files `inference.pdiparams`, `inference.pdmodel` and `inference.pdiparams.info` will be generated in the `inference` folder.
 2. In the file `export_model.py:line53`, the `shape` parameter is the `shape` of the model input image, `224*224` by default; please modify it according to the actual situation, as shown below:
 ```python
 50 # Please modify the 'shape' according to actual needs
@@ -254,20 +254,20 @@ python tools/export_model.py \
 54     ])
 ```
-The above command will generate the model structure file (`cls_infer.pdmodel`) and the model weight file (`cls_infer.pdiparams`), and the inference engine can then be used for inference:
+The above command will generate the model structure file (`inference.pdmodel`) and the model weight file (`inference.pdiparams`), and the inference engine can then be used for inference:
 
 ```bash
 python tools/infer/predict.py \
     --image_file image path \
-    --model_file "./inference/cls_infer.pdmodel" \
-    --params_file "./inference/cls_infer.pdiparams" \
+    --model_file "./inference/inference.pdmodel" \
+    --params_file "./inference/inference.pdiparams" \
     --use_gpu=True \
     --use_tensorrt=False
 ```
 
 Among them:
 + `image_file`: the path of the image file to be predicted, e.g. `./test.jpeg`
-+ `model_file`: the path of the model structure file, e.g. `./inference/cls_infer.pdmodel`
-+ `params_file`: the path of the model weight file, e.g. `./inference/cls_infer.pdiparams`
++ `model_file`: the path of the model structure file, e.g. `./inference/inference.pdmodel`
++ `params_file`: the path of the model weight file, e.g. `./inference/inference.pdiparams`
 + `use_tensorrt`: whether to use the TensorRT inference engine, default: `True`
 + `use_gpu`: whether to use the GPU for inference, default: `True`
 + `enable_mkldnn`: whether to enable `MKL-DNN` acceleration, default: `False`. Note that when `enable_mkldnn` and `use_gpu` are both `True`, `enable_mkldnn` is ignored and the GPU is used.
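To make the export notes above concrete, the sketch below shows what `tools/export_model.py` effectively does. It is an illustration only: the `architectures.__dict__` lookup and `class_dim=1000` mirror the script's defaults, and the `448*448` input shape is a hypothetical replacement for the default `224*224` discussed in note 2.

```python
import os
import paddle
from ppcls.modeling import architectures

# Build the network the same way tools/export_model.py does.
model = architectures.__dict__["MobileNetV3_large_x1_0"](class_dim=1000)
model.eval()

# The InputSpec corresponds to export_model.py:line53; here the default
# 224x224 input is replaced with a hypothetical 448x448.
model = paddle.jit.to_static(
    model,
    input_spec=[
        paddle.static.InputSpec(
            shape=[None, 3, 448, 448],  # N, C, H, W; None keeps batch dynamic
            dtype="float32")
    ])

# Saving with the prefix "<output_path>/inference" is what yields
# inference.pdmodel, inference.pdiparams and inference.pdiparams.info.
paddle.jit.save(model, os.path.join("./inference", "inference"))
```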
diff --git a/tools/ema.py b/tools/ema.py
index 4f4596e1f08e6154bea11a86eb2a1fd31438d9f3..e41f472d6a1e448e76b922a718befbaa23d6ab51 100644
--- a/tools/ema.py
+++ b/tools/ema.py
@@ -1,3 +1,17 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import paddle
 import numpy as np
 
diff --git a/tools/export_model.py b/tools/export_model.py
index 5d2517fbccffe73229c37e982c4a495bb58a8d14..51b4fe2b6b8737fa2cd826b1708fc8c5d495cf79 100644
--- a/tools/export_model.py
+++ b/tools/export_model.py
@@ -33,8 +33,7 @@ def parse_args():
     parser = argparse.ArgumentParser()
     parser.add_argument("-m", "--model", type=str)
    parser.add_argument("-p", "--pretrained_model", type=str)
-    parser.add_argument(
-        "-o", "--output_path", type=str, default="./inference/cls_infer")
+    parser.add_argument("-o", "--output_path", type=str, default="./inference")
     parser.add_argument("--class_dim", type=int, default=1000)
     parser.add_argument("--load_static_weights", type=str2bool, default=False)
     parser.add_argument("--img_size", type=int, default=224)
@@ -73,7 +72,7 @@ def main():
         paddle.static.InputSpec(
             shape=[None, 3, args.img_size, args.img_size], dtype='float32')
     ])
-    paddle.jit.save(model, args.output_path)
+    paddle.jit.save(model, os.path.join(args.output_path, "inference"))
 
 
 if __name__ == "__main__":
diff --git a/tools/infer/infer.py b/tools/infer/infer.py
index b4bf43e32671017412d250f64cc73d5d2a5dbd5d..67da0e31e267cf7f55d8531b577f56eafb2365e7 100644
--- a/tools/infer/infer.py
+++ b/tools/infer/infer.py
@@ -14,19 +14,21 @@
 
 import numpy as np
 import cv2
-import utils
 import shutil
 import os
 import sys
+
+import paddle
+import paddle.nn.functional as F
+
 __dir__ = os.path.dirname(os.path.abspath(__file__))
 sys.path.append(__dir__)
 sys.path.append(os.path.abspath(os.path.join(__dir__, '../..')))
 
 from ppcls.utils.save_load import load_dygraph_pretrain
 from ppcls.modeling import architectures
-
-import paddle
-import paddle.nn.functional as F
+import utils
+from utils import get_image_list
 
 
 def postprocess(outputs, topk=5):
@@ -36,23 +38,6 @@ def postprocess(outputs, topk=5):
     return zip(index, prob[index])
 
 
-def get_image_list(img_file):
-    imgs_lists = []
-    if img_file is None or not os.path.exists(img_file):
-        raise Exception("not found any img file in {}".format(img_file))
-
-    img_end = ['jpg', 'png', 'jpeg', 'JPEG', 'JPG', 'bmp']
-    if os.path.isfile(img_file) and img_file.split('.')[-1] in img_end:
-        imgs_lists.append(img_file)
-    elif os.path.isdir(img_file):
-        for single_file in os.listdir(img_file):
-            if single_file.split('.')[-1] in img_end:
-                imgs_lists.append(os.path.join(img_file, single_file))
-    if len(imgs_lists) == 0:
-        raise Exception("not found any img file in {}".format(img_file))
-    return imgs_lists
-
-
 def save_prelabel_results(class_id, input_filepath, output_dir):
     output_dir = os.path.join(output_dir, str(class_id))
     if not os.path.isdir(output_dir):
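Since `get_image_list` now lives in `tools/infer/utils.py` (see the `tools/infer/utils.py` hunk below), both entry points share a single implementation instead of each carrying its own copy. A minimal usage sketch, assuming a hypothetical `./demo_images` directory and running from the repository root:

```python
import sys
sys.path.insert(0, ".")  # run from the repository root

from tools.infer.utils import get_image_list

# Accepts either a single image file or a directory; raises if nothing
# with a known image extension (jpg/png/jpeg/bmp) is found.
for img_path in get_image_list("./demo_images"):
    print(img_path)
```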
diff --git a/tools/infer/predict.py b/tools/infer/predict.py
index 885bb3379ee977355d7e421a19f24403cc6d4cf7..fdcbb6a942a0e09464a9f0953f6fe2814f000512 100644
--- a/tools/infer/predict.py
+++ b/tools/infer/predict.py
@@ -12,13 +12,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-import sys
-sys.path.insert(0, ".")
-import tools.infer.utils as utils
 import numpy as np
 import cv2
 import time
 
+import sys
+sys.path.insert(0, ".")
+import tools.infer.utils as utils
+from tools.infer.utils import get_image_list
+
 
 def predict(args, predictor):
     input_names = predictor.get_input_names()
@@ -32,27 +34,34 @@ def predict(args, predictor):
     if not args.enable_benchmark:
         # for PaddleHubServing
         if args.hubserving:
-            img = args.image_file
+            img_list = [args.image_file]
         # for predict only
         else:
-            img = cv2.imread(args.image_file)[:, :, ::-1]
-            assert img is not None, "Error in loading image: {}".format(
-                args.image_file)
-            inputs = utils.preprocess(img, args)
-            inputs = np.expand_dims(
-                inputs, axis=0).repeat(
-                    args.batch_size, axis=0).copy()
-            input_tensor.copy_from_cpu(inputs)
-
-            predictor.run()
-
-            output = output_tensor.copy_to_cpu()
-            classes, scores = utils.postprocess(output, args)
-            if args.hubserving:
-                return classes, scores
-            print("Current image file: {}".format(args.image_file))
-            print("\ttop-1 class: {0}".format(classes[0]))
-            print("\ttop-1 score: {0}".format(scores[0]))
+            img_list = get_image_list(args.image_file)
+
+        for idx, img_name in enumerate(img_list):
+            if not args.hubserving:
+                img = cv2.imread(img_name)
+                assert img is not None, "Error in loading image: {}".format(
+                    img_name)
+                img = img[:, :, ::-1]
+            else:
+                img = img_name
+            inputs = utils.preprocess(img, args)
+            inputs = np.expand_dims(
+                inputs, axis=0).repeat(
+                    args.batch_size, axis=0).copy()
+            input_tensor.copy_from_cpu(inputs)
+
+            predictor.run()
+
+            output = output_tensor.copy_to_cpu()
+            classes, scores = utils.postprocess(output, args)
+            if args.hubserving:
+                return classes, scores
+            print("Current image file: {}".format(img_name))
+            print("\ttop-1 class: {0}".format(classes[0]))
+            print("\ttop-1 score: {0}".format(scores[0]))
     else:
         for i in range(0, test_num + 10):
             inputs = np.random.rand(args.batch_size, 3, 224,
diff --git a/tools/infer/utils.py b/tools/infer/utils.py
index 9ea5e35422c57db49391a2e0551376068929e782..69fcd05a34faa0011902d29d6c94300b6dc0ee66 100644
--- a/tools/infer/utils.py
+++ b/tools/infer/utils.py
@@ -12,9 +12,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+import os
 import argparse
 import cv2
 import numpy as np
+
 from paddle.inference import Config
 from paddle.inference import create_predictor
 
@@ -125,6 +127,23 @@ def postprocess(output, args):
     return classes, scores
 
 
+def get_image_list(img_file):
+    imgs_lists = []
+    if img_file is None or not os.path.exists(img_file):
+        raise Exception("not found any img file in {}".format(img_file))
+
+    img_end = ['jpg', 'png', 'jpeg', 'JPEG', 'JPG', 'bmp']
+    if os.path.isfile(img_file) and img_file.split('.')[-1] in img_end:
+        imgs_lists.append(img_file)
+    elif os.path.isdir(img_file):
+        for single_file in os.listdir(img_file):
+            if single_file.split('.')[-1] in img_end:
+                imgs_lists.append(os.path.join(img_file, single_file))
+    if len(imgs_lists) == 0:
+        raise Exception("not found any img file in {}".format(img_file))
+    return imgs_lists
+
+
 class ResizeImage(object):
     def __init__(self, resize_short=None):
         self.resize_short = resize_short
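The `ResizeImage` class is cut off at the end of this hunk; only its constructor is visible. For orientation, a short-side resize of the kind its `resize_short` parameter suggests usually looks like the sketch below. This is a plausible reconstruction under that assumption, not necessarily the exact body of the class:

```python
import cv2


def resize_short(img, resize_short=256):
    # Scale the image so that its shorter side equals `resize_short`,
    # keeping the aspect ratio; a center crop typically follows.
    h, w = img.shape[:2]
    percent = float(resize_short) / min(h, w)
    resized_w = int(round(w * percent))
    resized_h = int(round(h * percent))
    return cv2.resize(img, (resized_w, resized_h))
```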