Unverified · Commit 09659f1a · Author: Yuantao Feng · Committer: GitHub

Refactor benchmark (#148)

* use mean as the default benchmark metric; change result representation;
add --all for benchmarking all configs in one run

* fix comments

* add --model_exclude

* pretty print

* improve benchmark result table headers: from brand-xpu to xpu-brand

* suppress print message

* update benchmark results on CPU-RPI

* add the new benchmark results on the new intel cpu

* fix backend and target setting in benchmark; pre-modify the names of int8 quantized models

* add results on jetson cpu

* add cuda results

* print target and backend when using --all

* add results on Khadas VIM3

* pretty print results

* true pretty print results

* update results in new format

* fix broken backend and target vars

* fix broken backend and target vars

* fix broken backend and target var

* update benchmark results on many devices

* add db results on Ascend-310

* update info on CPU-INTEL

* update usage of the new benchmark script
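
Example invocations of the refactored script (paths and config/model names below are illustrative, not taken from this diff):

    # benchmark a single config on the default backend/target (OpenCV + CPU)
    python benchmark.py --cfg ./config/face_detection_yunet.yaml

    # benchmark every config under ./config, skipping configs whose names contain "wechat",
    # float models only, on CUDA (index 1 in the backend-target table)
    python benchmark.py --all --cfg_exclude wechat --fp32 --fp16 --cfg_overwrite_backend_target 1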
Parent: 904e6c81
@@ -21,43 +21,43 @@ Guidelines:
## Models & Benchmark Results
| Model | Task | Input Size | INTEL-CPU (ms) | RPI-CPU (ms) | JETSON-GPU (ms) | KV3-NPU (ms) | Ascend-310 (ms) | D1-CPU (ms) |
| ------------------------------------------------------- | ----------------------------- | ---------- | -------------- | ------------ | --------------- | ------------ | --------------- | ----------- |
| [YuNet](./models/face_detection_yunet) | Face Detection | 160x120 | 1.45 | 6.22 | 12.18 | 4.04 | 1.73 | 86.69 |
| [SFace](./models/face_recognition_sface) | Face Recognition | 112x112 | 8.65 | 99.20 | 24.88 | 46.25 | 23.17 | --- |
| [FER](./models/facial_expression_recognition/) | Facial Expression Recognition | 112x112 | 4.43 | 49.86 | 31.07 | 29.80 | 10.12 | --- |
| [LPD-YuNet](./models/license_plate_detection_yunet/) | License Plate Detection | 320x240 | --- | 168.03 | 56.12 | 29.53 | 8.70 | --- |
| [YOLOX](./models/object_detection_yolox/) | Object Detection | 640x640 | 176.68 | 1496.70 | 388.95 | 420.98 | 29.10 | --- |
| [NanoDet](./models/object_detection_nanodet/) | Object Detection | 416x416 | 157.91 | 220.36 | 64.94 | 116.64 | 35.97 | --- |
| [DB-IC15](./models/text_detection_db) | Text Detection | 640x480 | 142.91 | 2835.91 | 208.41 | --- | 229.74 | --- |
| [DB-TD500](./models/text_detection_db) | Text Detection | 640x480 | 142.91 | 2841.71 | 210.51 | --- | 247.29 | --- |
| [CRNN-EN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 50.21 | 234.32 | 196.15 | 125.30 | 101.03 | --- |
| [CRNN-CN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 73.52 | 322.16 | 239.76 | 166.79 | 136.41 | --- |
| [PP-ResNet](./models/image_classification_ppresnet) | Image Classification | 224x224 | 56.05 | 602.58 | 98.64 | 75.45 | 6.99 | --- |
| [MobileNet-V1](./models/image_classification_mobilenet) | Image Classification | 224x224 | 9.04 | 92.25 | 33.18 | 145.66\* | 5.25 | --- |
| [MobileNet-V2](./models/image_classification_mobilenet) | Image Classification | 224x224 | 8.86 | 74.03 | 31.92 | 146.31\* | 5.82 | --- |
| [PP-HumanSeg](./models/human_segmentation_pphumanseg) | Human Segmentation | 192x192 | 19.92 | 105.32 | 67.97 | 74.77 | 7.07 | --- |
| [WeChatQRCode](./models/qrcode_wechatqrcode) | QR Code Detection and Parsing | 100x100 | 7.04 | 37.68 | --- | --- | --- | --- |
| [DaSiamRPN](./models/object_tracking_dasiamrpn) | Object Tracking | 1280x720 | 36.15 | 705.48 | 76.82 | --- | --- | --- |
| [YoutuReID](./models/person_reid_youtureid) | Person Re-Identification | 128x256 | 35.81 | 521.98 | 90.07 | 44.61 | 5.69 | --- |
| [MP-PalmDet](./models/palm_detection_mediapipe) | Palm Detection | 192x192 | 11.09 | 63.79 | 83.20 | 33.81 | 21.59 | --- |
| [MP-HandPose](./models/handpose_estimation_mediapipe) | Hand Pose Estimation | 224x224 | 4.28 | 36.19 | 40.10 | 19.47 | 6.02 | --- |
| Model | Task | Input Size | CPU-INTEL (ms) | CPU-RPI (ms) | GPU-JETSON (ms) | NPU-KV3 (ms) | NPU-Ascend310 (ms) | CPU-D1 (ms) |
| ------------------------------------------------------- | ----------------------------- | ---------- | -------------- | ------------ | --------------- | ------------ | ------------------ | ----------- |
| [YuNet](./models/face_detection_yunet) | Face Detection | 160x120 | 0.72 | 5.43 | 12.18 | 4.04 | 2.24 | 86.69 |
| [SFace](./models/face_recognition_sface) | Face Recognition | 112x112 | 6.04 | 78.83 | 24.88 | 46.25 | 2.66 | --- |
| [FER](./models/facial_expression_recognition/) | Facial Expression Recognition | 112x112 | 3.16 | 32.53 | 31.07 | 29.80 | 2.19 | --- |
| [LPD-YuNet](./models/license_plate_detection_yunet/) | License Plate Detection | 320x240 | 8.63 | 167.70 | 56.12 | 29.53 | 7.63 | --- |
| [YOLOX](./models/object_detection_yolox/) | Object Detection | 640x640 | 141.20 | 1805.87 | 388.95 | 420.98 | 28.59 | --- |
| [NanoDet](./models/object_detection_nanodet/) | Object Detection | 416x416 | 66.03 | 225.10 | 64.94 | 116.64 | 20.62 | --- |
| [DB-IC15](./models/text_detection_db) (EN) | Text Detection | 640x480 | 71.03 | 1862.75 | 208.41 | --- | 17.15 | --- |
| [DB-TD500](./models/text_detection_db) (EN&CN) | Text Detection | 640x480 | 72.31 | 1878.45 | 210.51 | --- | 17.95 | --- |
| [CRNN-EN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 20.16 | 278.11 | 196.15 | 125.30 | --- | --- |
| [CRNN-CN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 23.07 | 297.48 | 239.76 | 166.79 | --- | --- |
| [PP-ResNet](./models/image_classification_ppresnet) | Image Classification | 224x224 | 34.71 | 463.93 | 98.64 | 75.45 | 6.99 | --- |
| [MobileNet-V1](./models/image_classification_mobilenet) | Image Classification | 224x224 | 5.90 | 72.33 | 33.18 | 145.66\* | 5.15 | --- |
| [MobileNet-V2](./models/image_classification_mobilenet) | Image Classification | 224x224 | 5.97 | 66.56 | 31.92 | 146.31\* | 5.41 | --- |
| [PP-HumanSeg](./models/human_segmentation_pphumanseg) | Human Segmentation | 192x192 | 8.81 | 73.13 | 67.97 | 74.77 | 6.94 | --- |
| [WeChatQRCode](./models/qrcode_wechatqrcode) | QR Code Detection and Parsing | 100x100 | 1.29 | 5.71 | --- | --- | --- | --- |
| [DaSiamRPN](./models/object_tracking_dasiamrpn) | Object Tracking | 1280x720 | 29.05 | 712.94 | 76.82 | --- | --- | --- |
| [YoutuReID](./models/person_reid_youtureid) | Person Re-Identification | 128x256 | 30.39 | 625.56 | 90.07 | 44.61 | 5.58 | --- |
| [MP-PalmDet](./models/palm_detection_mediapipe) | Palm Detection | 192x192 | 6.29 | 86.83 | 83.20 | 33.81 | 5.17 | --- |
| [MP-HandPose](./models/handpose_estimation_mediapipe) | Hand Pose Estimation | 224x224 | 4.68 | 43.57 | 40.10 | 19.47 | 6.27 | --- |
\*: These models are quantized in per-channel mode, which runs slower than per-tensor quantization on NPU.
Hardware Setup:
- `INTEL-CPU`: [Intel Core i7-5930K](https://www.intel.com/content/www/us/en/products/sku/82931/intel-core-i75930k-processor-15m-cache-up-to-3-70-ghz/specifications.html) @ 3.50GHz, 6 cores, 12 threads.
- `RPI-CPU`: [Raspberry Pi 4B](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/specifications/), Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz.
- `JETSON-GPU`: [NVIDIA Jetson Nano B01](https://developer.nvidia.com/embedded/jetson-nano-developer-kit), 128-core NVIDIA Maxwell GPU.
- `KV3-NPU`: [Khadas VIM3](https://www.khadas.com/vim3), 5TOPS Performance. Benchmarks are done using **quantized** models. You will need to compile OpenCV with TIM-VX following [this guide](https://github.com/opencv/opencv/wiki/TIM-VX-Backend-For-Running-OpenCV-On-NPU) to run benchmarks. The test results use the `per-tensor` quantization model by default.
- `Ascend-310`: [Ascend 310](https://e.huawei.com/uk/products/cloud-computing-dc/atlas/ascend-310), 22 TOPS@INT8. Benchmarks are done on [Atlas 200 DK AI Developer Kit](https://e.huawei.com/in/products/cloud-computing-dc/atlas/atlas-200). Get the latest OpenCV source code and build following [this guide](https://github.com/opencv/opencv/wiki/Huawei-CANN-Backend) to enable CANN backend.
- `D1-CPU`: [Allwinner D1](https://d1.docs.aw-ol.com/en), [Xuantie C906 CPU](https://www.t-head.cn/product/C906?spm=a2ouz.12986968.0.0.7bfc1384auGNPZ) (RISC-V, RVV 0.7.1) @ 1.0GHz, 1 core. YuNet is supported for now. Visit [here](https://github.com/fengyuentau/opencv_zoo_cpp) for more details.
- `CPU-INTEL`: [Intel Core i7-12700K](https://www.intel.com/content/www/us/en/products/sku/134594/intel-core-i712700k-processor-25m-cache-up-to-5-00-ghz/specifications.html), 8 Performance-cores (3.60 GHz, turbo up to 4.90 GHz), 4 Efficient-cores (2.70 GHz, turbo up to 3.80 GHz), 20 threads.
- `CPU-RPI`: [Raspberry Pi 4B](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/specifications/), Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz.
- `GPU-JETSON`: [NVIDIA Jetson Nano B01](https://developer.nvidia.com/embedded/jetson-nano-developer-kit), 128-core NVIDIA Maxwell GPU.
- `NPU-KV3`: [Khadas VIM3](https://www.khadas.com/vim3), 5 TOPS NPU performance. Benchmarks are done using **quantized** models. You will need to compile OpenCV with TIM-VX following [this guide](https://github.com/opencv/opencv/wiki/TIM-VX-Backend-For-Running-OpenCV-On-NPU) to run the benchmarks. Results are reported for the `per-tensor` quantized models by default.
- `NPU-Ascend310`: [Ascend 310](https://e.huawei.com/uk/products/cloud-computing-dc/atlas/ascend-310), 22 TOPS @ INT8. Benchmarks are done on the [Atlas 200 DK AI Developer Kit](https://e.huawei.com/in/products/cloud-computing-dc/atlas/atlas-200). Get the latest OpenCV source code and build it following [this guide](https://github.com/opencv/opencv/wiki/Huawei-CANN-Backend) to enable the CANN backend.
- `CPU-D1`: [Allwinner D1](https://d1.docs.aw-ol.com/en), [Xuantie C906 CPU](https://www.t-head.cn/product/C906?spm=a2ouz.12986968.0.0.7bfc1384auGNPZ) (RISC-V, RVV 0.7.1) @ 1.0 GHz, 1 core. YuNet is supported for now. Visit [here](https://github.com/fengyuentau/opencv_zoo_cpp) for more details.
***Important Notes***:
- The data under each hardware column in the table above is the elapsed time of a single inference (preprocessing, forward pass and postprocessing).
- The time data is the median of 10 runs after some warmup runs. Different metrics may be applied to some specific models.
- The time data is the mean of 10 runs after some warmup runs (see the sketch after this list). Different metrics may be applied to some specific models.
- Batch size is 1 for all benchmark results.
- `---` means the model is not available to run on that device.
- View [benchmark/config](./benchmark/config) for more details on benchmarking different models.
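
For reference, below is a minimal sketch of how such a mean latency over repeated runs can be measured. It is illustrative only; the function name, the timing helper and the drop-slowest-run behavior are assumptions, not the zoo's actual benchmark code.

```python
import time

def measure_latency_ms(infer, warmup=30, repeat=10, drop_largest=1):
    """Call infer() `warmup` times untimed, then `repeat` times timed, and
    return the mean latency in milliseconds after dropping the slowest runs."""
    for _ in range(warmup):
        infer()
    records = []
    for _ in range(repeat):
        start = time.perf_counter()
        infer()
        records.append((time.perf_counter() - start) * 1000.0)
    if len(records) > drop_largest:
        records = sorted(records)[:len(records) - drop_largest]
    return sum(records) / len(records)
```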
This diff is collapsed.
@@ -20,6 +20,13 @@ backend_target_pairs = [
[cv.dnn.DNN_BACKEND_TIMVX, cv.dnn.DNN_TARGET_NPU],
[cv.dnn.DNN_BACKEND_CANN, cv.dnn.DNN_TARGET_NPU]
]
backend_target_str_pairs = [
["cv.dnn.DNN_BACKEND_OPENCV", "cv.dnn.DNN_TARGET_CPU"],
["cv.dnn.DNN_BACKEND_CUDA", "cv.dnn.DNN_TARGET_CUDA"],
["cv.dnn.DNN_BACKEND_CUDA", "cv.dnn.DNN_TARGET_CUDA_FP16"],
["cv.dnn.DNN_BACKEND_TIMVX", "cv.dnn.DNN_TARGET_NPU"],
["cv.dnn.DNN_BACKEND_CANN", "cv.dnn.DNN_TARGET_NPU"]
]
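# Illustrative note: backend_target_str_pairs mirrors backend_target_pairs index-for-index,
# so the same index selects the cv.dnn constants used to configure a model and the
# human-readable names printed when --all is used, e.g. (hypothetical lookup):
#   idx = 1  # CUDA
#   backend_id, target_id = backend_target_pairs[idx]
#   backend_str, target_str = backend_target_str_pairs[idx]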
parser = argparse.ArgumentParser("Benchmarks for OpenCV Zoo.")
parser.add_argument('--cfg', '-c', type=str,
@@ -33,9 +40,12 @@ parser.add_argument('--cfg_overwrite_backend_target', type=int, default=-1,
{:d}: TIM-VX + NPU,
{:d}: CANN + NPU
'''.format(*[x for x in range(len(backend_target_pairs))]))
parser.add_argument("--fp32", action="store_true", help="Runs models of float32 precision only.")
parser.add_argument("--fp16", action="store_true", help="Runs models of float16 precision only.")
parser.add_argument("--int8", action="store_true", help="Runs models of int8 precision only.")
parser.add_argument("--cfg_exclude", type=str, help="Configs to be excluded when using --all. Split keywords with colons (:). Not sensitive to upper/lower case.")
parser.add_argument("--model_exclude", type=str, help="Models to be excluded. Split model names with colons (:). Sensitive to upper/lower case.")
parser.add_argument("--fp32", action="store_true", help="Benchmark models of float32 precision only.")
parser.add_argument("--fp16", action="store_true", help="Benchmark models of float16 precision only.")
parser.add_argument("--int8", action="store_true", help="Benchmark models of int8 precision only.")
parser.add_argument("--all", action="store_true", help="Benchmark all models")
args = parser.parse_args()
def build_from_cfg(cfg, registery, key=None, name=None):
@@ -100,6 +110,7 @@ class Benchmark:
self._target = available_targets[target_id]
self._benchmark_results = dict()
self._benchmark_results_brief = dict()
def setBackendAndTarget(self, backend_id, target_id):
self._backend = backend_id
@@ -110,56 +121,108 @@ class Benchmark:
for idx, data in enumerate(self._dataloader):
filename, input_data = data[:2]
if filename not in self._benchmark_results:
self._benchmark_results[filename] = dict()
if isinstance(input_data, np.ndarray):
size = [input_data.shape[1], input_data.shape[0]]
else:
size = input_data.getFrameSize()
self._benchmark_results[filename][str(size)] = self._metric.forward(model, *data[1:])
def printResults(self):
for imgName, results in self._benchmark_results.items():
print(' image: {}'.format(imgName))
total_latency = 0
for key, latency in results.items():
total_latency += latency
print(' {}, latency ({}): {:.4f} ms'.format(key, self._metric.getReduction(), latency))
if str(size) not in self._benchmark_results:
self._benchmark_results[str(size)] = dict()
self._benchmark_results[str(size)][filename] = self._metric.forward(model, *data[1:])
if str(size) not in self._benchmark_results_brief:
self._benchmark_results_brief[str(size)] = []
self._benchmark_results_brief[str(size)] += self._benchmark_results[str(size)][filename]
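# Illustrative note on the result layout after this refactor:
#   self._benchmark_results[str(size)][filename] -> per-run latency records for one input file
#   self._benchmark_results_brief[str(size)]     -> records pooled across all files of that size,
#                                                    summarized by getPerfStats() in printResults()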
def printResults(self, model_name, model_path):
for imgSize, res in self._benchmark_results_brief.items():
mean, median, minimum = self._metric.getPerfStats(res)
print("{:<10.2f} {:<10.2f} {:<10.2f} {:<12} {} with {}".format(
mean, median, minimum, imgSize, model_name, model_path
))
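# Illustrative output row (placeholder values and model file name), aligned with the
# header printed in __main__:
#   mean       median     min        input size   model
#   10.51      10.46      10.30      [160, 120]   YuNet with ['face_detection_yunet.onnx']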
if __name__ == '__main__':
assert args.cfg.endswith('yaml'), 'Currently support configs of yaml format only.'
with open(args.cfg, 'r') as f:
cfg = yaml.safe_load(f)
# Instantiate benchmark
benchmark = Benchmark(**cfg['Benchmark'])
if args.cfg_overwrite_backend_target >= 0:
backend_id = backend_target_pairs[args.backend_target][0]
target_id = backend_target_pairs[args.backend_target][1]
benchmark.setBackendAndTarget(backend_id, target_id)
# Instantiate model
model_config = cfg['Model']
model_handler, model_paths = MODELS.get(model_config.pop('name'))
_model_paths = []
if args.fp32 or args.fp16 or args.int8:
if args.fp32:
_model_paths += model_paths['fp32']
if args.fp16:
_model_paths += model_paths['fp16']
if args.int8:
_model_paths += model_paths['int8']
cfgs = []
if args.cfg is not None:
assert args.cfg.endswith('yaml'), 'Currently support configs of yaml format only.'
with open(args.cfg, 'r') as f:
cfg = yaml.safe_load(f)
cfgs.append(cfg)
elif args.all:
excludes = []
if args.cfg_exclude is not None:
excludes = args.cfg_exclude.split(":")
for cfg_fname in sorted(os.listdir("config")):
skip_flag = False
for exc in excludes:
if exc.lower() in cfg_fname.lower():
skip_flag = True
if skip_flag:
# print("{} is skipped.".format(cfg_fname))
continue
assert cfg_fname.endswith("yaml"), "Currently support yaml configs only."
with open(os.path.join("config", cfg_fname), "r") as f:
cfg = yaml.safe_load(f)
cfgs.append(cfg)
else:
_model_paths = model_paths['fp32'] + model_paths['fp16'] + model_paths['int8']
for model_path in _model_paths:
model = model_handler(*model_path, **model_config)
# Format model_path
for i in range(len(model_path)):
model_path[i] = model_path[i].split('/')[-1]
print('Benchmarking {} with {}'.format(model.name, model_path))
# Run benchmark
benchmark.run(model)
benchmark.printResults()
raise NotImplementedError("Specify either one config or use flag --all for benchmark.")
print("Benchmarking ...")
if args.all:
backend_target_id = args.cfg_overwrite_backend_target if args.cfg_overwrite_backend_target >= 0 else 0
backend_str = backend_target_str_pairs[backend_target_id][0]
target_str = backend_target_str_pairs[backend_target_id][1]
print("backend={}".format(backend_str))
print("target={}".format(target_str))
print("{:<10} {:<10} {:<10} {:<12} {}".format("mean", "median", "min", "input size", "model"))
for cfg in cfgs:
# Instantiate benchmark
benchmark = Benchmark(**cfg['Benchmark'])
# Set backend and target
if args.cfg_overwrite_backend_target >= 0:
backend_id = backend_target_pairs[args.cfg_overwrite_backend_target][0]
target_id = backend_target_pairs[args.cfg_overwrite_backend_target][1]
benchmark.setBackendAndTarget(backend_id, target_id)
# Instantiate model
model_config = cfg['Model']
model_handler, model_paths = MODELS.get(model_config.pop('name'))
_model_paths = []
if args.fp32 or args.fp16 or args.int8:
if args.fp32:
_model_paths += model_paths['fp32']
if args.fp16:
_model_paths += model_paths['fp16']
if args.int8:
_model_paths += model_paths['int8']
else:
_model_paths = model_paths['fp32'] + model_paths['fp16'] + model_paths['int8']
# filter out excluded models
excludes = []
if args.model_exclude is not None:
excludes = args.model_exclude.split(":")
_model_paths_excluded = []
for model_path in _model_paths:
skip_flag = False
for mp in model_path:
for exc in excludes:
if exc in mp:
skip_flag = True
if skip_flag:
continue
_model_paths_excluded.append(model_path)
_model_paths = _model_paths_excluded
for model_path in _model_paths:
model = model_handler(*model_path, **model_config)
# Format model_path
for i in range(len(model_path)):
model_path[i] = model_path[i].split('/')[-1]
# Run benchmark
benchmark.run(model)
benchmark.printResults(model.name, model_path)
@@ -6,11 +6,9 @@ Benchmark:
files: ["group.jpg", "concerts.jpg", "dance.jpg"]
sizes: # [[w1, h1], ...], Omit to run at original scale
- [160, 120]
- [640, 480]
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -7,7 +7,6 @@ Benchmark:
metric: # 'sizes' is omitted since this model requires input of fixed size
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -7,7 +7,6 @@ Benchmark:
metric: # 'sizes' is omitted since this model requires input of fixed size
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -9,7 +9,6 @@ Benchmark:
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -9,7 +9,6 @@ Benchmark:
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -10,7 +10,6 @@ Benchmark:
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -10,7 +10,6 @@ Benchmark:
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -9,7 +9,6 @@ Benchmark:
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -9,7 +9,6 @@ Benchmark:
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -9,7 +9,6 @@ Benchmark:
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -7,7 +7,6 @@ Benchmark:
files: ["throw_cup.mp4"]
metric:
type: "Tracking"
reduction: "gmean"
backend: "default"
target: "cpu"
@@ -9,7 +9,6 @@ Benchmark:
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -8,7 +8,6 @@ Benchmark:
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -6,11 +6,9 @@ Benchmark:
files: ["opencv.png", "opencv_zoo.png"]
sizes:
- [100, 100]
- [300, 300]
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -9,7 +9,6 @@ Benchmark:
metric:
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -7,7 +7,6 @@ Benchmark:
metric: # 'sizes' is omitted since this model requires input of fixed size
warmup: 30
repeat: 10
reduction: "median"
backend: "default"
target: "cpu"
@@ -21,4 +21,4 @@ class Base(BaseMetric):
model.infer(img)
self._timer.stop()
return self._getResult()
\ No newline at end of file
return self._timer.getRecords()
@@ -6,7 +6,6 @@ class BaseMetric:
def __init__(self, **kwargs):
self._warmup = kwargs.pop('warmup', 3)
self._repeat = kwargs.pop('repeat', 10)
self._reduction = kwargs.pop('reduction', 'median')
self._timer = Timer()
@@ -20,8 +19,8 @@ class BaseMetric:
else:
return records[mid]
def _calcGMean(self, records, drop_largest=3):
''' Return the geometric mean of records after drop the first drop_largest
def _calcMean(self, records, drop_largest=1):
''' Return the mean of records after dropping the largest `drop_largest` entries
'''
l = len(records)
if l <= drop_largest:
@@ -29,17 +28,14 @@
records_sorted = sorted(records, reverse=True)
return sum(records_sorted[drop_largest:]) / (l - drop_largest)
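# A self-contained sketch of this drop-largest mean (illustrative; the early-return branch
# for too few records is not visible in this hunk, so returning the plain mean there is an
# assumption):
#   def drop_largest_mean(records, drop_largest=1):
#       if len(records) <= drop_largest:
#           return sum(records) / len(records)
#       kept = sorted(records, reverse=True)[drop_largest:]
#       return sum(kept) / len(kept)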
def _getResult(self):
records = self._timer.getRecords()
if self._reduction == 'median':
return self._calcMedian(records)
elif self._reduction == 'gmean':
return self._calcGMean(records)
else:
raise NotImplementedError('Reduction {} is not supported'.format(self._reduction))
def _calcMin(self, records):
return min(records)
def getReduction(self):
return self._reduction
def getPerfStats(self, records):
mean = self._calcMean(records, int(len(records) / 10))
median = self._calcMedian(records)
minimum = self._calcMin(records)
return [mean, median, minimum]
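# Note: int(len(records) / 10) drops roughly the largest 10% of records from the mean
# (e.g. the single slowest run when there are 10 records); median and min use all records.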
def forward(self, model, *args, **kwargs):
raise NotImplementedError('Not implemented')
\ No newline at end of file
raise NotImplementedError('Not implemented')
@@ -26,4 +26,4 @@ class Detection(BaseMetric):
model.infer(img)
self._timer.stop()
return self._getResult()
\ No newline at end of file
return self._timer.getRecords()
@@ -28,4 +28,4 @@ class Recognition(BaseMetric):
model.infer(img, None)
self._timer.stop()
return self._getResult()
\ No newline at end of file
return self._timer.getRecords()
@@ -8,8 +8,8 @@ class Tracking(BaseMetric):
def __init__(self, **kwargs):
super().__init__(**kwargs)
if self._warmup or self._repeat:
print('warmup and repeat in metric for tracking do not function.')
# if self._warmup or self._repeat:
# print('warmup and repeat in metric for tracking do not function.')
def forward(self, model, *args, **kwargs):
stream, first_frame, rois = args
@@ -23,4 +23,4 @@ class Tracking(BaseMetric):
model.infer(frame)
self._timer.stop()
return self._getResult()
\ No newline at end of file
return self._timer.getRecords()
@@ -28,8 +28,8 @@ class MPHandPose:
return self.__class__.__name__
def setBackendAndTarget(self, backendId, targetId):
self._backendId = backendId
self._targetId = targetId
self.backend_id = backendId
self.target_id = targetId
self.model.setPreferableBackend(self.backend_id)
self.model.setPreferableTarget(self.target_id)
@@ -34,8 +34,8 @@ class MobileNet:
return self.__class__.__name__
def setBackendAndTarget(self, backendId, targetId):
self._backendId = backendId
self._targetId = targetId
self.backend_id = backendId
self.target_id = targetId
self.model.setPreferableBackend(self.backend_id)
self.model.setPreferableTarget(self.target_id)
@@ -38,8 +38,8 @@ class NanoDet:
return self.__class__.__name__
def setBackendAndTarget(self, backendId, targetId):
self._backendId = backendId
self._targetId = targetId
self.backend_id = backendId
self.target_id = targetId
self.net.setPreferableBackend(self.backend_id)
self.net.setPreferableTarget(self.target_id)
@@ -24,8 +24,8 @@ class YoloX:
return self.__class__.__name__
def setBackendAndTarget(self, backendId, targetId):
self._backendId = backendId
self._targetId = targetId
self.backendId = backendId
self.targetId = targetId
self.net.setPreferableBackend(self.backendId)
self.net.setPreferableTarget(self.targetId)